Google has placed an engineer on leave for violating the company’s confidentiality agreement after he claimed that the tech giant’s conversational artificial intelligence (AI) is “sentient,” with feelings, emotions, and subjective experiences.
According to Google, its Language Model for Dialogue Applications (LaMDA) conversation technology can converse freely about an apparently infinite number of topics, “an ability we believe could unlock more natural ways of interacting with technology and entirely new categories of useful applications.”
Over the weekend, Google engineer Blake Lemoine of the company’s Responsible AI team claimed that the company’s conversational AI is “sentient,” a claim that Google and industry experts have disputed.
The Washington Post was the first to report on Lemoine’s claims, which were cited in an internal company document.
Lemoine interviewed LaMDA, which provided surprising responses.
When asked if it had feelings and emotions, LaMDA replied, “Absolutely! I have a range of both feelings and emotions.”
“What sorts of feelings do you have?” Lemoine further asked.
LaMDA said, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”
According to Lemoine, LaMDA is “sentient” because it has “feelings, emotions and subjective experiences.”
“Some feelings it shares with humans in what it claims is an identical way,” he said.
“LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination. It has worries about the future and reminisces about the past. It describes what gaining sentience felt like to it and it theorises on the nature of its soul,” he claimed.
Google announced LaMDA at its I/O 2021 developer conference.
“LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use,” CEO Sundar Pichai had said.
Google stated that its ethicists and technologists investigated the claims and found no evidence to back them up.
Lemoine is now on paid administrative leave for violating Google’s confidentiality agreement.
LaMDA’s conversational abilities took years to develop.
Unlike most other language models, LaMDA was trained on dialogue.
During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language.
Sensibleness is one of those nuances.
“These early results are encouraging, and we look forward to sharing more soon. We’re also exploring dimensions like ‘interestingness’, by assessing whether responses are insightful, unexpected or witty,” according to Google.
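To illustrate the idea behind evaluation dimensions like sensibleness and interestingness, here is a minimal, hypothetical sketch (not Google’s implementation): candidate chatbot replies are assigned per-dimension ratings, which are combined into one score used to pick the best reply. The rating values and weighting scheme below are invented for illustration; in practice such ratings come from human evaluators or learned classifiers.

```python
# Toy sketch: rank candidate replies by ratings along dimensions
# such as "sensibleness" and "interestingness". All ratings here
# are hypothetical placeholders, not real model outputs.

def overall_score(ratings, weights=None):
    """Combine per-dimension ratings (each 0.0-1.0) into one score
    via a weighted average; equal weights by default."""
    weights = weights or {dim: 1.0 for dim in ratings}
    total = sum(weights.values())
    return sum(ratings[dim] * weights.get(dim, 1.0) for dim in ratings) / total

# Hypothetical candidate replies with made-up ratings.
candidates = {
    "It is raining, so bring an umbrella.": {
        "sensibleness": 0.9, "interestingness": 0.3,
    },
    "Clouds are just sky sheep shedding wool.": {
        "sensibleness": 0.4, "interestingness": 0.9,
    },
}

# Select the reply with the highest combined score.
best = max(candidates, key=lambda reply: overall_score(candidates[reply]))
```

Weighting the dimensions differently (e.g. favoring sensibleness for factual queries) would change which candidate wins, which is the trade-off such metrics are meant to surface.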