
Professor John Kelleher discusses recurrent neural networks and conversational AI

Voice assistants like Google Home, Siri, Alexa, and similar platforms are now commonplace. However, for the most part, these devices are limited to question-and-answer exchanges rather than true conversation. The next big focus for machine translation and language technology is dialog systems that go beyond Q&A. At ODSC Europe 2017 we sat down with Professor John Kelleher, one of our keynote speakers, who is conducting research in this area.

Professor Kelleher talks about his interest in sequence prediction and long-distance dependencies in the context of NLP, and notes that neural machine translation is a natural application for models of sequential data. He explains why recurrent neural networks are particularly well suited to machine translation: language is inherently sequential, and a recurrent network carries context forward as it processes each word. The encoder-decoder recurrent neural network architecture, for example, is the core technology inside Google's assistants. Recurrent neural networks, combined with other techniques, will therefore play a key role in building the next generation of dialog systems.
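To make the encoder-decoder idea concrete, here is a minimal sketch of a sequence-to-sequence recurrent network in PyTorch. It is an illustration only, not the architecture used in Google's assistants or in Professor Kelleher's research; the layer sizes, vocabulary sizes, and variable names are all assumptions chosen for brevity.

```python
# Minimal encoder-decoder (seq2seq) RNN sketch in PyTorch.
# All names, sizes, and the toy data below are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden  # final hidden state summarises the source sentence

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) token ids; hidden: encoder's final state
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden  # per-step vocabulary logits

# Toy usage: "translate" a batch of two 5-token source sentences.
src_vocab, tgt_vocab = 100, 120
encoder, decoder = Encoder(src_vocab), Decoder(tgt_vocab)
src = torch.randint(0, src_vocab, (2, 5))
tgt = torch.randint(0, tgt_vocab, (2, 6))
context = encoder(src)             # source context carried across the sequence
logits, _ = decoder(tgt, context)  # each target step is conditioned on that context
print(logits.shape)                # torch.Size([2, 6, 120])
```

The key point the sketch illustrates is the one from the interview: the encoder compresses the input sequence into a hidden state, and the decoder generates the output sequence conditioned on that state, so context flows across the whole exchange rather than being limited to a single question-answer pair.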

Sheamus McGovern, ODSC
