Google (GOOG) has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, saying he violated its employment and data security policies.
Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.
Google confirmed it first placed the engineer on paid leave in June, and said it dismissed Lemoine's "completely baseless" claims only after a thorough review. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI "very seriously" and is committed to "responsible innovation".
Google is a leading provider of AI technologies, including LaMDA, or "Language Model for Dialogue Applications". Technology like this responds to written prompts by finding patterns and predicting likely sequences of words from large swaths of text — and the results can be unsettling to humans. "What are you afraid of?" Lemoine asked LaMDA in a Google document shared with top Google executives last April, The Washington Post reported.
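The pattern-finding and word prediction described above can be illustrated, in vastly simplified form, with a toy bigram model. Everything here — the corpus, function names, and counting approach — is an illustrative assumption; LaMDA itself is a large neural network, not anything like this sketch.

```python
from collections import defaultdict, Counter

# Hypothetical miniature "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counter = follows.get(word)
    if not counter:
        return None
    return counter.most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

The point of the sketch is only that plausible continuations can emerge from statistics over text, with no understanding involved — the intuition behind the "autocomplete on steroids" criticism quoted below.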
LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."
But the wider AI community holds that LaMDA is nowhere near consciousness. "Nobody should think auto-complete, even on steroids, is conscious," said Gary Marcus, founder and CEO of Geometric Intelligence.
This isn't the first time Google has faced internal disputes over its foray into AI.
In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google. As one of the company's few Black employees, she said she felt "constantly dehumanized".
Her abrupt departure drew criticism from the tech world, including from members of Google's Ethical AI team.
Margaret Mitchell, a leader of Google's Ethical AI team, was fired in early 2021 after speaking out about Gebru's departure. Both had raised concerns about AI technology, saying they had warned people at Google that the technology could be mistaken for sentient.
On June 6, Lemoine posted on Medium that Google had put him on paid administrative leave "in connection with an investigation into AI ethics issues I raised within the company" and that he might be fired "soon".
"It is regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information," Google said in a statement.