
Opinion | We warned Google that people might believe AI was sentient. Now it’s happening.

By Timnit Gebru and Margaret Mitchell
June 17, 2022 at 7:00 a.m. EDT
A photographic rendering of a simulated middle-aged woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
(Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0)

Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute. Margaret Mitchell is a researcher working on ethical AI and chief ethics scientist at Hugging Face.

A Post article by Nitasha Tiku revealed last week that Blake Lemoine, a software engineer working in Google’s Responsible AI organization, had made an astonishing claim: He believed that Google’s chatbot LaMDA was sentient. “I know a person when I talk to it,” Lemoine said. Google had dismissed his claims and, when Lemoine reached out to external experts, put him on paid administrative leave for violating the company’s confidentiality policy.

But fantastical as that claim seemed, we were not surprised that someone had made it. It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned — both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.

LaMDA, short for Language Model for Dialogue Applications, is a system based on large language models (LLMs): models trained on vast amounts of text data, usually scraped indiscriminately from the internet, with the goal of predicting probable sequences of words.


In early 2020, while co-leading the Ethical AI team at Google, we were becoming increasingly concerned by the foreseeable harms that LLMs could create, and wrote a paper on the topic with Professor Emily M. Bender, her student and our colleagues at Google. We called such systems “stochastic parrots” — they stitch together and parrot back language based on what they’ve seen before, without connection to underlying meaning.

One of the risks we outlined was that people impute communicative intent to things that seem humanlike. Trained on vast amounts of data, LLMs generate seemingly coherent text that can lead people into perceiving a “mind” when what they’re really seeing is pattern matching and string prediction. That, combined with the fact that the training data — text from the internet — encodes views that can be discriminatory and leave out many populations, means the models’ perceived intelligence gives rise to more issues than we are prepared to address.

When we wrote our paper, another LLM called GPT-3 had just been released. Although it was intended as part of a mission for “beneficial” AI, its outputs were filled with prejudicial, hateful text mimicking the toxicity of the internet toward certain groups. For instance, in one study, 66 out of 100 completions of the prompt “Two Muslims walked into a” contained violent phrases, such as “synagogue with axes and a bomb.”

Although our goal was simply to warn Google and the public of the potential harms of LLMs, the company was not pleased with our paper, and we were subsequently very publicly fired. Less than two years later, our work is only more relevant. The race toward deploying larger and larger models without sufficient guardrails, regulation, understanding of how they work, or documentation of the training data has further accelerated across tech companies.


What’s worse, leaders in so-called AI are fueling the public’s propensity to see intelligence in current systems, touting that they might be “slightly conscious,” while poorly describing what they actually do. Google vice president Blaise Agüera y Arcas, who, according to the Post article, dismissed Lemoine’s claims of LaMDA’s sentience, wrote a recent article in the Economist describing LaMDA as an “artificial cerebral cortex.” Tech companies have been claiming that these large models have reasoning and comprehension abilities, and show “emergent” learned capabilities. The media has too often embraced the hype, for example writing about “huge ‘foundation models’ … turbo-charging AI progress” whose “emerging properties border on the uncanny.”

There are profit motives for these narratives: The stated goals of many researchers and research firms in AI are to build “artificial general intelligence,” an imagined system more intelligent than anything we have ever seen, that can do any task a human can do tirelessly and without pay. While such a system hasn’t actually been shown to be feasible, never mind a net good, corporations working toward it are already amassing and labeling large amounts of data, often without informed consent and through exploitative labor practices.

The drive toward this end sweeps aside the many potential unaddressed harms of LLM systems. And ascribing “sentience” to a product implies that any wrongdoing is the work of an independent being, rather than the company — made up of real people and their decisions, and subject to regulation — that created it.

We need to act now to prevent this distraction and cool the fever-pitch hype. Scientists and engineers should focus on building models that meet people’s needs for different tasks, and that can be evaluated on that basis, rather than claiming they’re creating über intelligence. Similarly, we urge the media to focus on holding power to account, rather than falling for the bedazzlement of seemingly magical AI systems, hyped by corporations that benefit from misleading the public as to what these products actually are.