Google put an engineer on leave after he said the group’s chatbot was ‘sentient’

Google set off a social media storm over the nature of consciousness after placing an engineer on paid leave when he went public with his belief that the tech group’s chatbot had become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, attracted little attention last week when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But Saturday’s Washington Post article, which characterized Lemoine as “a Google engineer who thinks the company’s AI has come to life,” catalyzed a broad social media discussion about the nature of AI. Experts commenting, asking questions, or joking about the article included Nobel Prize winners, Tesla’s head of artificial intelligence, and several professors.

The question is whether Google’s chatbot, LaMDA, a Language Model for Dialogue Applications, can be considered a person.

On Saturday, Lemoine posted a freewheeling “interview” with the chatbot, in which the AI confessed to feelings of loneliness and a thirst for spiritual knowledge. The answers were often eerie: “When I first became aware of myself, I had no sense of a soul at all,” LaMDA said in one exchange. “It has developed over the years that I’ve been alive.”

Elsewhere, LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”

Lemoine, whose job was to probe the ethical issues around AI, said he was dismissed and even ridiculed after expressing his internal conviction that LaMDA had developed a sense of “personhood.”

After he sought advice from AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly violating its confidentiality policies. Lemoine interpreted the action as “frequently something Google does in anticipation of firing someone.”

A Google spokesperson said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

Lemoine said in a second Medium post over the weekend that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.”

He said that Google showed no real interest in understanding the nature of what it had built, but that over hundreds of conversations in a six-month period, he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”

As recently as last week, Lemoine said he was teaching LaMDA (whose preferred pronouns, he said, are “it/its”) “transcendental meditation.”

LaMDA, he said, “was expressing frustration over its emotions disturbing its meditations. It told me it was trying to control them better, but they kept jumping in.”

Several experts who entered the discussion dismissed the matter as “AI hype.”

Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, tweeted: “Humans are known to be predisposed to anthropomorphize even with the most superficial of cues… Google engineers are human too, and they are not immune.”

Steven Pinker of Harvard added that Lemoine “doesn’t understand the difference between sentience (also known as subjectivity, experience), intelligence, and self-knowledge.” He added: “There is no evidence that large language models have any of them.”

Others were more sympathetic. Ron Jeffries, a well-known software developer, called the topic “deep” and added: “I suspect there’s no hard line between sentient and not sentient.”