
Suspect Charged After Allegedly Using AI To Create Images Of Child Sexual Abuse

In one of the first criminal cases of its kind, prosecutors have charged a Wisconsin man after he allegedly showed AI-generated child sexual abuse imagery to a child.


A Wisconsin man has been charged with sharing AI-generated child sexual abuse material (CSAM) with a minor, in one of the first criminal cases to involve such material.

La Crosse County prosecutors allege that Steven Anderegg solicited requests for sexually explicit images of young children on Instagram and then used the Stable Diffusion 1.5 AI-powered image generator to create them. The criminal complaint alleges that in October 2023 Anderegg had “over 100 images of suspected AI CSAM images” in his possession and had shared two of them with a 15-year-old he’d met on Instagram.

A sheriff’s deputy said in an affidavit that Anderegg “was taking requests from people online for what kind of CSAM they want to see and generated the AI images of the CSAM juveniles.”

In a chat transcript reviewed by Forbes, Anderegg explicitly says he used Stable Diffusion, which is now managed by the AI startup Stability, to create the images. “Yes, they are made with Stable Diffusion. I create them in so far as I craft the prompts and control parameters of the AI nets.” The chats were flagged by Instagram parent company Meta to authorities.

Anderegg was arrested and charged with two counts of “exposing a child to harmful material” and an additional charge of “sexual contact with a child under age 13.” He pleaded not guilty, and was released earlier this month on a $50,000 bond. His attorney, Jennifer Lough, did not respond to a request for comment.

“Stable Diffusion is one of the [AI generation tools] that we see most frequently.”

Fallon McNulty, NCMEC

Ella Irwin, senior vice president of integrity at Stability AI, told Forbes in a statement that the images were likely made with version 1.5 of Stable Diffusion, which was “developed and released by” AI startup Runway ML in October 2022.

She said Stability took over the exclusive development of the open source software in late 2022, beginning with Stable Diffusion 2. Since then, Irwin said, “Stability AI has invested in proactive features to prevent the misuse of AI for the production of harmful content and to make it harder for bad actors to misuse our platform.”

One of Stable Diffusion 1.5’s primary developers, however, was a Stability AI employee, and the company supported the creation of the model by providing compute credits. (The other developer worked at Runway and later joined Stability in 2023.) That version of Stable Diffusion, which is open source, still freely circulates online and is easy to download. Some easily accessible websites use older versions of Stable Diffusion to power their deepfake porn services.

“Stable Diffusion is one of the ones that we see most frequently” cited in reports of AI-generated CSAM, said Fallon McNulty, the head of the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline, adding that she’s seen it referenced in at least 100 reports.

If tech companies detect or are made aware of CSAM, whether real or AI-generated, they are required under federal law to report it to the CyberTipline, which reviews it for referral to law enforcement. But in recent testimony before Congress, John Shehan, an NCMEC senior vice president, said most generative AI platforms do a poor job of this and are not vigilant enough about misuse of their tools.

“Unfortunately, the repercussions of Stable Diffusion 1.5’s training process will be with us for some time to come.”

David Thiel, Stanford Internet Observatory

Stability AI, like others in its field, claims to have built-in protections. The company’s terms of use state that users cannot “commission harmful or illegal activities” or “create non consensual nudity or illegal pornographic content.”

When Stability AI released Stable Diffusion 2.0 in November 2022, the new model angered some users because of its increased restrictions on explicit content. As reported by The Verge, cofounder Emad Mostaque implied in a Discord chat at the time that the changes in version 2.0 were due in part to previous versions of Stable Diffusion being used to create images of child abuse: you “can’t have kids & nsfw in an open model,” he wrote, “so get rid of the kids or get rid of the nsfw.” (Last week, Forbes reported that Mostaque was stepping down as CEO and that multiple key researchers behind Stable Diffusion had left Stability.)

Late last year, researchers at Stanford University found that Stable Diffusion 1.5 was trained on a cache of illegal child sexual abuse material, as Forbes previously reported. “Unfortunately, the repercussions of Stable Diffusion 1.5’s training process will be with us for some time to come,” David Thiel, the author of the Stanford study, wrote.

In response, Ben Brooks, head of public policy at Stability AI, told Forbes that the company was “not aware of any confirmed cases of CSAM on our platform but are committed to reporting... to NCMEC as appropriate.”

However, Stability AI has not yet begun that process, NCMEC’s McNulty told Forbes.

“We haven’t seen [Stability AI] take the steps to register yet,” she said. “We hope that they will. It’s something that they are going to work on and they’re going to register but that’s not something we’ve seen happen yet.”

Stability’s Irwin said that the company has not yet registered with NCMEC “because we have not had anything to report to them as of yet, however we have been engaged over the past few months,” adding that the company planned on attending a CyberTipline conference in April.

This new Wisconsin case is the latest in a troubling series of incidents in which popular AI tools have been used to create illegal sexual abuse material. In November 2023, a North Carolina man was sentenced to 40 years for sexual exploitation of a minor and using a web-based AI tool to create CSAM. One month earlier, a Kentucky man pleaded guilty to 13 counts of CSAM possession; prosecutors alleged he also had “AI images and videos depicting the sexual abuse of children.”

In recent Congressional hearings, NCMEC’s Shehan said the group had recorded 4,700 reports of AI-generated CSAM last year. While that’s a small fraction of the 36 million reports the organization received in 2023, he flagged it as a troubling development and worried that it would “lead to even more dramatic increases in reports.”

Meanwhile, Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory and the author of a new academic paper on the subject, noted that legal complexities currently make it more difficult to prosecute a suspect for purely AI-generated CSAM than for conventional CSAM. Possessing or generating explicit images of entirely fictional children, not based on any real person, may fall into a legal gray area.

But Pfefferkorn said this Wisconsin case may provide a roadmap for other prosecutors: to sidestep open questions about the legality of AI-generated CSAM by charging subjects with other crimes. Those crimes, she told Forbes, “may only come to light because of the initial investigation of the AI-generated material.”
