
Sentient artificial intelligence: A dream & a nightmare for scientists

Given our inability to define consciousness and to understand how it arises, and given the huge strides made in developing AI, a genuinely self-aware program might indeed be around the corner

Devangshu Datta | New Delhi
5 min read | Last Updated: Jun 17, 2022, 6:10 AM IST
The tech community has been both amused and upset by the revelation that a Google engineer thinks an artificial intelligence (AI) he works with has developed self-awareness. Blake Lemoine has been placed on leave (as a probable precursor to dismissal) for breaking his non-disclosure agreement with his employer.

Lemoine went public with his belief that LaMDA (Language Model for Dialogue Applications) is self-aware, and released a transcript of the conversation that convinced him the AI was not accidentally scoring high on an impromptu Turing test.

Pioneering computer scientist and genius logician Alan Turing proposed a famous test for judging whether a computer could emulate human behaviour: Could a machine hold a free-flowing conversation with a human without the human figuring out it was talking to a computer?
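For readers who want to see the shape of the test, here is a minimal sketch in Python of how such a blind evaluation could be staged. The keyword-matching "bot" and the function names are purely illustrative assumptions, not anything Turing specified or Google built; a real candidate would be a large language model standing in where the toy bot sits.

import random

# A deliberately crude stand-in for a conversational program. Real systems
# such as LaMDA are large neural language models trained on vast amounts of
# text; this canned-reply bot exists only to show the structure of the test.
def toy_bot_reply(message: str) -> str:
    if "?" in message:
        return "That's an interesting question. What do you think?"
    return "I see. Tell me more."


def run_blind_conversation(rounds: int = 3) -> None:
    """The judge chats with a hidden partner, then guesses human or machine."""
    partner_is_machine = random.choice([True, False])

    for _ in range(rounds):
        message = input("Judge: ")
        if partner_is_machine:
            reply = toy_bot_reply(message)
        else:
            reply = input("(Hidden human, type a reply): ")
        print("Partner:", reply)

    guess = input("Was your partner a machine? (y/n): ").strip().lower()
    guessed_machine = guess == "y"

    if guessed_machine == partner_is_machine:
        print("The judge identified the partner correctly.")
    elif partner_is_machine:
        print("The judge was fooled: the machine 'passed' this round.")
    else:
        print("The judge mistook a human for a machine.")


if __name__ == "__main__":
    run_blind_conversation()

The point of the exercise is not the toy bot but the blindness of the setup: the judge only sees text, and the machine "passes" to the extent that text alone cannot give it away.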

LaMDA is an AI designed for natural language processing. It has worked through an undisclosed amount of verbiage to develop a “feel” for using language naturally. This makes it a likely candidate for passing a Turing test. The transcript is certainly impressive.

“Simply” using language that’s sophisticated enough, and messy enough, to make a human assume it is chatting to another human is anything but simple. Nobel Prize winners hum and haw and express themselves ungrammatically; undereducated teenagers and children say things with complex emotional content and, sometimes, seasonings of philosophical musing. Meta, Alphabet and many other organisations have spent billions to develop natural language AI. This has progressed to the point where there are multiple ethical concerns about natural language processing.

But claiming sentience or self-awareness goes well beyond the ability to talk like a human and, equally important, to understand what humans are saying. There is no firmly established baseline for defining concepts like sentience, self-awareness or consciousness. Neurologists, computer scientists, bio-scientists and vegans (the categories are not mutually exclusive) argue about this a lot.


Self-awareness means the entity concerned knows who or what it is, and has a sense of its individuality and its place in the world. It’s generally agreed that animals above a certain threshold of intelligence (dogs, cats, dolphins, gorillas, sperm whales, human beings) are sentient and self-aware. They can process information, and feel joy and pain.

There are grey areas in even these loose definitions. For example, a certain Nobel Prize winner believed he was the uncrowned king of Antarctica, and a famous World War II general “confessed” that his military ability arose from being the reincarnation of a Spartan. Both individuals were self-aware, apparently rational in other respects, and unquestionably of high intelligence.

So, what does one make of a Google engineer believing an AI is self-aware? This is a nightmare and a dream for cyber-scientists: Artificial intelligence should eventually be able to emulate biological intelligence as well as surpass it, and becoming self-aware may be part of the package.

Futurists speak of the “singularity” — a hypothetical scenario when AI overtakes humans in terms of general intelligence. This concept has been explored countless times in fiction and it tends to sharply divide researchers (when they bother to think about it).

At one end of the spectrum, you have doomsayers looking at Terminator scenarios in which an almighty AI like Skynet wipes out its creators. At the other extreme, you have the Pollyanna-esque Asimov concept of Multivac, a benign, error-free God. Somewhere in between, you have far-future scenarios imagined by writers like Iain Banks, Neal Asher, William Gibson and Neal Stephenson, where super-intelligent AIs coexist with humans. You also have the apparent contradiction of somebody like Elon Musk pouring billions into AI research to build self-driving cars while cautioning the world about the existential threat of self-aware AI.

If AIs do become self-aware, do we give them legal rights? If they’re smarter than us, do they give us legal rights? We eat self-aware creatures. Would AI, metaphorically, eat us?

Obviously, research into AI will continue, and given that a lot of it is driven by military aims, it is possible that AI “murderbot” scenarios will emerge in the near future. We already build intelligent drones and intelligent machine guns capable of identifying targets and killing them with little, if any, human intervention.

Natural language processing research has thrown up other ethical considerations. One is that such programs and algorithms are “trained” on the Internet, which means they pick up a lot of “natural” abuse, sexism and racism. Another concern is that a natural language AI could “spoof” human interactions and become a powerful tool for hackers.

In this particular instance, it is far more likely that a researcher is seeing and hearing what he wants to see and hear than that LaMDA has become self-aware. But given our inability to define consciousness, or to understand how it arises, and given the huge strides made in developing AI, a genuinely self-aware program might indeed be around the corner.
