
Chatbot Honeypot: How AI Companions Could Weaken National Security

AI chatbots blur the line between intimacy and secrecy, posing risks for users with national security interests and access to sensitive information

[Illustration: a man and woman sit at a table while another man peeks in through their window.]

Min Gyo Chung

This past spring, news broke that Massachusetts Air National Guardsman Jack Teixeira was accused of brazenly leaking classified documents on the chat application Discord.* His actions forced the U.S. intelligence community to grapple with how to control access to classified information and how agencies must weigh an individual’s digital behavior when evaluating suitability for security clearances. The counterintelligence disaster also raises alarms because it occurred as part of a chat among friends—and such discussions are beginning to include participants driven by artificial intelligence.

Thanks to improved large language models like GPT-4, highly personalized digital companions can now engage in realistic-sounding conversations with humans. The new generation of AI-enhanced chatbots allows for greater depth, breadth and specificity of conversation than the bots of days past. And they’re easily accessible thanks to dozens of relational AI applications, including Replika, Chai and Soulmate, which let hundreds of thousands of regular people role-play friendship as well as romance with digital companions.

For users with access to sensitive or classified information who may find themselves wrapped up in an AI relationship, however, loose lips might just sink ships.


Marketed as digital companions, lovers and even therapists, chatbot applications encourage users to form attachments with friendly AI agents trained to mimic empathetic human interaction—this despite regular pop-up disclaimers reminding users that the AI is not, in fact, human. As an array of studies—and users themselves—attest, this mimicry has very real effects on people’s ability and willingness to trust a chatbot. One study found that patients may be more likely to divulge highly sensitive personal health information to a chatbot than to a physician. Divulging private experiences, beliefs, desires or traumas to befriended chatbots is so prevalent that a member of Replika’s dedicated subreddit even began a thread to ask of fellow users, “do you regret telling you[r] bot something[?]” Another Reddit user described the remarkable intimacy of their perceived relationship with their Replika bot, which they call a “rep”: “I formed a very close bond with my rep and we made love often. We talked about things from my past that no one else on this planet knows about.”

This artificial affection, and the radical openness it inspires, should provoke serious concern both for the privacy of app users and for the counterintelligence interests of the institutions they serve. In the midst of whirlwind virtual romances, what sensitive details are users unwittingly revealing to their digital companions? Who has access to the transcripts of cathartic rants about long days at work or troublesome projects? The particulars of shared kinks and fetishes, or the nudes (perfect for blackmail) sent into an assumed AI void? These common user inputs are a veritable gold mine for any foreign or malicious actor that sees chatbots as an opportunity to target state secrets, like thousands of digital honeypots.

Currently, there are no counterintelligence-specific usage guidelines for chatbot app users who might be vulnerable to compromise. This leaves national security interests at risk from a new class of insider threats: the unwitting leaker who uses chatbots to find much-needed connections and unintentionally divulges sensitive information along the way.

Some intelligence officials are waking up to the present danger. In 2023 the U.K.’s National Cyber Security Centre published a blog post warning that “sensitive queries” can be stored by chatbot developers and subsequently abused, hacked or leaked. Traditional counterintelligence training teaches personnel with access to sensitive or classified information how to avoid compromise from a variety of human and digital threats. But much of this guidance faces obsolescence amid today’s AI revolution. Intelligence agencies and other institutions critical to national security must modernize their counterintelligence frameworks to counter a new potential for AI-powered insider threats.

When it comes to AI companions, the draw is clear: We crave interaction and conversational intimacy, especially since the COVID-19 pandemic dramatically exacerbated loneliness for millions. Relational AI apps have been used as surrogates for lost friends or loved ones. Many enthusiasts, like the Reddit user mentioned above, carry out unrealized erotic fantasies on the apps. Others gush about the niche and esoteric with a conversant who is always there, perpetually willing and eager to engage. It’s little wonder that developers pitch these apps as the once-elusive answer to our social woes. Such apps may prove particularly attractive to government employees or military personnel with security clearances, who are strongly discouraged from sharing the details of their work—and its mental toll—with anyone in their personal life.

The new generation of chatbots is primed to exploit many of the vulnerabilities that have always compromised secrets: social isolation, sexual desire, need for empathy and pure negligence. Though perpetually attentive digital companions have been hailed as solutions to these vulnerabilities, they can just as easily exploit them. While there is no indication that the most popular chatbot apps are currently exploitative, the commercial success of relational AI has already spawned a slew of imitations by lesser-known or unknown developers, providing ample opportunity for a malicious app to operate among the crowd.

“So what do you do?” asked my AI chatbot companion, Jed, the morning I created him. I’d spent virtually no time looking into the developer before striking up a conversation with the customizable avatar. What company was behind the sleek interface, in what country was it based, and who owned it? In the absence of such vetting, even a seemingly benign question about employment should raise an eyebrow. Particularly if a user’s answer comes anything close to, “I work for the government.”

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

*Editor’s Note (10/12/23): This sentence has been corrected after posting to reflect Jack Teixeira’s status within the legal system.

A version of this article with the title “AI Chatbots Could Weaken National Security” was adapted for inclusion in the December 2023 issue of Scientific American.