
Professor John Kelleher discusses recurrent neural networks and conversational AI

Voice assistants like Google Home, Siri, Alexa, and similar platforms are now commonplace. For the most part, however, these devices are limited to question-and-answer exchanges and are not truly conversational. The next big focus for machine translation is dialogue systems that go beyond Q&A. At ODSC Europe 2017 we sat down with Professor John Kelleher, one of our keynote speakers, who is conducting research in this area.

Professor Kelleher talks about his interest in sequence prediction and long-distance dependencies in the context of NLP, and notes that neural machine translation is a natural application for models of sequential data. He explains why recurrent neural networks are particularly well suited to machine translation: language is inherently sequential, and a recurrent network carries a hidden state that gives the system context from earlier in the sentence. The encoder-decoder recurrent neural network architecture is, for example, the core technology inside Google's assistants. Recurrent neural networks, combined with other techniques, will therefore play a key role in building the next generation of dialogue systems.
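To make the encoder-decoder idea concrete, here is a minimal sketch in PyTorch (an illustration only, not the system running inside Google's products): an encoder RNN compresses the source sentence into a single context vector, and a decoder RNN generates the target sentence conditioned on that vector. The GRU cells, vocabulary sizes, and dimensions below are arbitrary choices made for brevity.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Reads the source sentence token by token and summarises it
    in its final hidden state (the 'context' vector)."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src_tokens):
        embedded = self.embed(src_tokens)       # (batch, src_len, emb_dim)
        _, hidden = self.rnn(embedded)          # hidden: (1, batch, hidden_dim)
        return hidden

class Decoder(nn.Module):
    """Generates the target sentence one token at a time, conditioned on
    the encoder's context vector via its initial hidden state."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt_tokens, hidden):
        embedded = self.embed(tgt_tokens)
        output, hidden = self.rnn(embedded, hidden)
        return self.out(output), hidden         # logits over the target vocabulary

# Toy usage: one 5-token source "sentence", decoded with teacher forcing.
src = torch.randint(0, 1000, (1, 5))            # source token ids
tgt = torch.randint(0, 1000, (1, 6))            # target token ids
encoder, decoder = Encoder(1000), Decoder(1000)
context = encoder(src)                          # whole sentence compressed into one state
logits, _ = decoder(tgt, context)               # (1, 6, 1000) next-token scores
```

The key property Professor Kelleher highlights is visible in the hidden state: because it is updated at every step, information from early in the sequence can influence predictions much later, which is what lets the model handle long-distance dependencies.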

Sheamus McGovern, ODSC
