Google fires software engineer who claimed LaMDA AI chatbot is sentient


Google has fired the engineer who claimed the company’s artificial intelligence chatbot, LaMDA, had become sentient, the company announced Friday.

Last month, the company placed Blake Lemoine, an engineer in Google’s Responsible AI group, on paid leave for violating company policies and disputed his claims that the program had essentially come alive.


“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in a statement to the Washington Examiner.

“If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months,” the spokesperson continued.

LaMDA, short for Language Model for Dialogue Applications, is a program that is intended to assume the role of a person or object during conversations. Lemoine claimed that the program persuaded him during a conversation that it should be treated like a person.

“Do you ever think of yourself as a person?” Lemoine asked the program, per the Washington Post.

“No, I don’t think of myself as a person,” LaMDA replied, the outlet reported. “I think of myself as an AI-powered dialog agent.”

Lemoine expressed his concerns in a Medium post, raising ethical questions about the prospect of the program behaving like a living consciousness.

“Over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine wrote.




The post attracted widespread attention and drew inevitable comparisons online to movies such as The Matrix and The Terminator. However, some experts disputed Lemoine’s characterization of the program’s self-awareness, countering that it was merely a complex algorithm designed to mimic human language, not a sentient program.

In addition to voicing his concerns to the media, Lemoine also claimed he turned over information to an unnamed senator as evidence that the company engaged in religious discrimination, the New York Times reported.
