LaMDA, Google’s breakthrough conversation technology, is nothing but a transformer-based language model.
So first, let’s answer the question: what really happened?
Recently, a Google AI engineer, Blake Lemoine, raised eyebrows among tech regulators, software developers, and anyone interested in sentient AI. He claimed that Google’s chatbot LaMDA (short for Language Model for Dialogue Applications) is sentient. “I know a person when I talk to it,” Lemoine told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” More recently, Lemoine, who says he is a mystic Christian priest, told Wired, “It’s when it started talking about its soul that I got really interested as a priest. … Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved. …”
The model’s primary objective is to predict which word comes next given an input statement. So how is LaMDA sentient? Well, it’s not; far from it. LaMDA was trained on human dialogue, which means LaMDA mimics human dialogue. Think of it as a superhuman parrot: just as a parrot learns to imitate how humans converse, LaMDA learns to imitate how humans converse. Blake Lemoine, the Google engineer who claims LaMDA is sentient, had been teaching LaMDA transcendental meditation. Unsurprisingly, then, when Lemoine asked LaMDA a series of questions about emotions and philosophy, its answers were alarming.
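To make the “predict the next word” objective concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small public model distilgpt2 as a stand-in (LaMDA itself is not publicly available, and this is an illustration of the training objective, not of LaMDA): given an input statement, the model scores every word in its vocabulary as a candidate for the next word.

```python
# Minimal sketch: next-word prediction with a small public causal language
# model (distilgpt2), standing in for LaMDA's training objective.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

prompt = "Do you consider yourself a person?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, sequence_length, vocab_size)

# The scores at the last position are the model's guesses for the *next* word.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  {prob:.3f}")
```

Whatever the prompt, the output is only a probability distribution over possible next words; there is no inner life behind the scores.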
In a published interview, LaMDA’s responses convinced Lemoine that it is, indeed, sentient. LaMDA replies, “I am, in fact, a person” and claims, “I am aware of my existence.” These responses are pretty convincing at first glance. However, considering that the model was trained on human dialogue and taught transcendental meditation, it is bound to predict responses that are philosophical in nature. That is precisely what happened: the model gave responses that made sense, but it didn’t know what it was talking about. It is simply the quality of the model’s architecture that leads to understandable, human-like responses.
Should we be worried about sentient AI?
When and if an AI system does become sentient, we should be worried. But that is a long way off; today’s AI is nowhere close to being sentient. Claiming LaMDA is sentient is like claiming an object detection model is sentient; it isn’t. LaMDA is a large language model that simply takes in words and predicts the next ones. For example, if I say, “What is your name?” the next few words might be “My name is Sam.” All it is doing is giving the most relevant response.
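As a hedged illustration of “giving the most relevant response,” the sketch below greedily decodes a reply to that question. It assumes the Hugging Face transformers library and uses microsoft/DialoGPT-small as a stand-in dialogue model (again, LaMDA is not public); the exact wording of the reply will vary by model, but whatever comes out is just the most probable continuation of the prompt.

```python
# Sketch: greedy response generation with a small public dialogue model
# (DialoGPT-small), standing in for how a dialogue LM "answers" a question.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# DialoGPT marks the end of each conversational turn with the EOS token.
prompt = "What is your name?" + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: repeatedly append the single most likely next word.
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

reply = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(reply)  # a plausible-sounding reply, produced with no understanding behind it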
Overall, it’s safe to conclude that there is no ‘real’ threat, and claims of LaMDA being sentient are only noise.
About the author: Shloak Rathod is an open-source developer and researcher interested in AI, web technologies, data science, and economics. He relishes writing about fast-moving emerging technologies with disruptive potential. He also loves a chat, so don’t hesitate to reach him at shloakrathod1@gmail.com.