Geoffrey Hinton, a pioneer in the field of artificial intelligence (AI), is sounding the alarm about the rapid progress of machine intelligence. Hinton played a major role in developing the artificial neural network foundations of today’s most powerful AI programs, including ChatGPT, the chatbot that has sparked widespread debate about the pace of that progress. He recently left Google so that he could speak more freely about the risks posed by intelligent machines, and he is urging humanity to contain and manage the technology before it gets out of hand.
Hinton’s AI Journey
Hinton’s early work on neural networks in the 1980s sought to give computers greater intelligence by training artificial neural networks on data instead of programming them in a conventional way. The approach showed flashes of promise over the years, but its real power and potential became apparent only about a decade ago. Hinton received the 2018 Turing Award, the most prestigious prize in computer science, for his work on neural networks. He shared the prize with two other pioneering figures: Yann LeCun, Meta’s chief AI scientist, and Yoshua Bengio, a professor at the University of Montreal.
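To make that contrast concrete, here is a minimal sketch in Python with NumPy. It is a hypothetical toy example, not Hinton’s actual code: rather than writing the rule for XOR by hand, a tiny two-layer network discovers it from labelled examples via backpropagation, the training algorithm Hinton helped popularize in the 1980s.

```python
# A toy illustration, not Hinton's code: instead of hand-coding the XOR rule,
# we let a tiny neural network learn it from labelled examples.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the four XOR input pairs and the outputs we want learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised 2-4-1 network (weights and biases).
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the weights to shrink prediction error.
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network predictions
    err = out - y               # how wrong the network currently is
    # Backpropagation: push the error signal back through both layers.
    g_out = err * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2))  # close to [[0], [1], [1], [0]]: learned, not programmed
```

No line of this program states the XOR rule; the behaviour emerges from adjusting weights to fit the data, which is the core idea behind the much larger networks discussed below.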
Google hired Hinton in 2013 after acquiring his company, DNNresearch, which he founded to commercialize his university lab’s deep learning ideas. Researchers at Google later invented a type of neural network known as a transformer, which has been crucial to the development of models like PaLM and GPT-4.
AI Is Developing Beyond Expectations
Hinton believes that AI is advancing more quickly than he and other experts expected, which makes it urgent to ensure that humanity can contain and manage it. He is particularly concerned about near-term risks such as more sophisticated AI-generated disinformation campaigns, but he also believes the long-term problems could be so serious that we need to start worrying about them now.
Insights Into AI Development
Two recent flashes of insight triggered Hinton’s newfound alarm about the technology he has spent his life working on. The first was a revelatory interaction with a powerful new AI system: in his case, Google’s AI language model PaLM, which is similar to the model behind ChatGPT. A few months ago, Hinton says, he asked the model to explain a joke he had just made up and was astonished to get a response that clearly articulated what made it funny. This struck him as a significant development; for years, scientists had been sure it would be a long time before AI could explain why a joke is funny.
Hinton’s second sobering realization was that he had probably been wrong to believe that software would need to become far more complex, akin to the human brain, before it could become significantly more capable. PaLM is a large program, but its complexity pales in comparison to the brain’s, and yet it can perform the kind of reasoning that humans take a lifetime to attain. Hinton concluded that as AI algorithms grow larger, they might outstrip their human creators within a few years.
Our Say
Hinton is not the only person shaken by the new capabilities that large language models such as PaLM and GPT-4 have begun demonstrating. Last month, a number of prominent AI researchers and others signed an open letter calling for a pause on the development of anything more powerful than GPT-4. Hinton’s decision to leave Google has certainly turned heads and raised concerns about where AI is headed, all the more so because it came amid warnings from industry leaders and governments’ attempts to control the development and use of AI.
Since leaving Google, Hinton feels his views on whether the development of AI should continue have been misconstrued. “A lot of the headlines have been saying that I think it should be stopped now—and I’ve never said that,” he says. “First of all, I don’t think that’s possible, and I think we should continue to develop it because it could do wonderful things. But we should put equal effort into mitigating or preventing the possible bad consequences.”