With the help of the nltk.tokenize.WordPunctTokenizer() class, we are able to extract tokens from a string of words or sentences, split into runs of alphabetic and non-alphabetic characters, by using its tokenize() method.
Syntax :
tokenize.WordPunctTokenizer().tokenize(text)
Return : Returns the tokens from a string, split into alphabetic and non-alphabetic sequences.
Example #1 :
In this example we can see that by using the tokenize.WordPunctTokenizer() class, we are able to extract tokens from a stream of alphabetic and non-alphabetic characters.
# import the WordPunctTokenizer class from nltk
from nltk.tokenize import WordPunctTokenizer

# Create a reference variable for class WordPunctTokenizer
tk = WordPunctTokenizer()

# Create a string input
gfg = "Lazyroar...$$&* \nis\t for Lazyroar"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output :
['Lazyroar', '...$$&*', 'is', 'for', 'Lazyroar']
Example #2 :
# import the WordPunctTokenizer class from nltk
from nltk.tokenize import WordPunctTokenizer

# Create a reference variable for class WordPunctTokenizer
tk = WordPunctTokenizer()

# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output :
['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
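To see why "Rs.36." splits into four tokens, it helps to know that WordPunctTokenizer behaves like a regular-expression tokenizer with the pattern r"\w+|[^\w\s]+": it emits runs of word characters and runs of punctuation as separate tokens and drops whitespace. The sketch below mimics that behavior with the standard re module (it is an illustrative equivalent, not NLTK's actual implementation):

```python
import re

# Sketch of WordPunctTokenizer's behavior: match either a run of word
# characters (\w+) or a run of non-word, non-space characters ([^\w\s]+);
# whitespace between matches is discarded.
def word_punct_tokens(text):
    return re.findall(r"\w+|[^\w\s]+", text)

# "Rs.36." -> word run "Rs", punctuation ".", word run "36", punctuation "."
print(word_punct_tokens("The price\t of burger \nin BurgerKing is Rs.36.\n"))
# → ['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
```

This also explains Example #1: the consecutive symbols "...$$&*" form a single run of non-word characters, so they come out as one token.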