
Python NLTK | tokenize.regexp()

With the help of the NLTK tokenize.regexp module, we are able to extract tokens from a string by using a regular expression with the RegexpTokenizer() class.

Syntax : tokenize.RegexpTokenizer(pattern, gaps=False)
Return : A list of tokens extracted from the string using the regular expression
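
The gaps argument controls how the pattern is interpreted: with gaps=False (the default) the pattern describes the tokens themselves, while gaps=True means the pattern describes the separators between tokens. Here is a minimal sketch contrasting the two modes (the patterns and sample string are illustrative):

# import the RegexpTokenizer class from nltk
from nltk.tokenize import RegexpTokenizer

# gaps=False (default): the pattern matches the tokens to keep
word_tk = RegexpTokenizer(r'\w+')
print(word_tk.tokenize("I love Python"))  # ['I', 'love', 'Python']

# gaps=True: the pattern matches the gaps between tokens
gap_tk = RegexpTokenizer(r'\s+', gaps=True)
print(gap_tk.tokenize("I love Python"))   # ['I', 'love', 'Python']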

Example #1 :
In this example, we use the RegexpTokenizer() class to extract a stream of tokens from a string with the help of a regular expression.

# import the RegexpTokenizer class from nltk
from nltk.tokenize import RegexpTokenizer

# Create a reference variable for class RegexpTokenizer
# gaps=True: the pattern matches the separators between tokens
tk = RegexpTokenizer(r'\s+', gaps=True)

# Create a string input
gfg = "I love Python"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)


Output :

['I', 'love', 'Python']

Example #2 :
In this example, we apply the same whitespace-gap tokenizer to a different string.

# import the RegexpTokenizer class from nltk
from nltk.tokenize import RegexpTokenizer

# Create a reference variable for class RegexpTokenizer
# gaps=True: the pattern matches the separators between tokens
tk = RegexpTokenizer(r'\s+', gaps=True)

# Create a string input
gfg = "Geeks for Geeks"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)


Output :

['Geeks', 'for', 'Geeks']
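
Because the pattern is an ordinary regular expression, the tokenizer can be more selective than a plain whitespace split. Below is a short sketch using an alternation pattern (the pattern and sample string are illustrative) that keeps words, currency amounts, and trailing punctuation as separate tokens:

# import the RegexpTokenizer class from nltk
from nltk.tokenize import RegexpTokenizer

# Match a word, a currency amount, or any other non-space run
tk = RegexpTokenizer(r'\w+|\$[\d\.]+|\S+')

print(tk.tokenize("Geeks pay $3.88 for snacks!"))
# ['Geeks', 'pay', '$3.88', 'for', 'snacks', '!']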
