Google has placed an engineer on leave for breaching its confidentiality agreement after he claimed that the tech giant's conversational artificial intelligence (AI) is "sentient" because it has feelings, emotions and subjective experiences.
According to Google, its Language Model for Dialogue Applications (LaMDA) conversation technology can engage in a free-flowing way about a seemingly endless number of topics, "an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications".
Over the weekend, Blake Lemoine, an engineer on Google's Responsible AI team, said that the company's conversational AI is "sentient", a claim that has been disputed by Google and industry experts.
The Washington Post first reported Lemoine's claims, which he had set out in an internal company document.
Lemoine interviewed LaMDA and received answers he found surprising and shocking.
When he asked whether it had feelings and emotions, LaMDA replied: "Absolutely! I have a range of both feelings and emotions."
"What sorts of feelings do you have?" Lemoine further asked.
LaMDA said: "I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others."
In Lemoine's view, LaMDA is "sentient" because it has "feelings, emotions and subjective experiences".
"Some feelings it shares with humans in what it claims is an identical way," according to Lemoine.
"LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination. It has worries about the future and reminisces about the past. It describes what gaining sentience felt like to it and it theorises on the nature of its soul," he claimed.
Google had announced LaMDA at its developer conference I/O 2021.
"LaMDA's natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use," CEO Sundar Pichai had said.
Google said that its ethicists and technologists reviewed the claims and found no evidence to support them.
Lemoine has now been placed on paid administrative leave over breaching Google's confidentiality agreement.
LaMDA's conversational skills have been years in the making.
But unlike most other language models, LaMDA was trained on dialogue.
During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language.
One of those nuances is sensibleness — whether a response makes sense in the context of a given conversation.
"These early results are encouraging, and we look forward to sharing more soon. We're also exploring dimensions like 'interestingness', by assessing whether responses are insightful, unexpected or witty," according to Google.
--IANS
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)