
MIT scientists created a “psychopath” AI by feeding it violent content from Reddit

We’ve all seen evil machines in The Terminator or The Matrix, but how does a machine become evil? Is it like Project Satan from Futurama, where scientists combined parts from various evil cars to create the ultimate evil car? Or are machines simply destined to eventually turn evil when their processing power or whatever becomes sufficiently advanced? As it turns out, one guaranteed way to make a machine turn bad is by putting it in the hands of some scientists who are actively trying to create an AI “psychopath,” which is exactly what a group from MIT has achieved with an algorithm it’s named “Norman”—like the guy from Psycho.

This comes from Newsweek, which explains that the scientists exclusively fed Norman violent and gruesome content from an unnamed Reddit page before showing it a series of Rorschach inkblot tests. While a “standard” AI would interpret the images as, for example, “a black and white photo of a baseball glove,” Norman sees “man is murdered by machine gun in broad daylight.” If that sounds extreme, Norman’s responses get so, so, so, so much worse. Seriously, it may just be an algorithm, but if they dumped this thing into one of those awful Boston Dynamics dog bodies, we would only have a matter of minutes before Killbots and Murderoids started trampling our skulls. Here are some examples from the study:

[Image: Norman's inkblot responses shown alongside a standard AI's]

Seriously, if “man gets pulled into dough machine” doesn’t give you chills, then you might need to start wondering if the machines have already assimilated you. Also, for the record, the study says that Norman wasn’t actually shown any photos of real people dying; it was only trained on graphic image captions from that Reddit page (which the study leaves unnamed because of its violent content).


Thankfully, there was a purpose behind this madness beyond trying to expedite the destruction of humanity. The MIT team—Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan—was actually trying to show that AI algorithms aren’t inherently biased so much as they become biased through the data they’re trained on. In other words, they didn’t build Norman as a psychopath; it became a psychopath because everything it knew about the world came from one Reddit page. (That last bit seems like it should be particularly relevant for some people on the internet, but we’re going to assume that wasn’t the MIT team’s intention.)
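If you want a feel for how that works without reading the paper, here’s a minimal sketch in Python. To be clear, this is not the MIT team’s actual code—Norman is a deep-learning image captioner—and every caption, feature vector, and number below is invented for illustration. It’s a toy nearest-neighbor “captioner,” which makes the point bluntly: a model that only ever saw grim captions can only ever produce grim captions.

```python
# Toy sketch (NOT the MIT team's code): the same "model" trained on
# different caption data gives wildly different answers to the same
# ambiguous input. All features and captions here are made up.

import math

def caption(image_features, training_data):
    """Return the caption whose stored features are closest to the input.

    The model can only ever emit captions it was trained on, which is
    the whole point: the bias lives in the data, not the architecture.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training_data, key=lambda pair: distance(pair[0], image_features))[1]

# Two training sets with the same format and very different worldviews.
standard_data = [
    ((0.2, 0.8), "a black and white photo of a baseball glove"),
    ((0.9, 0.1), "a vase of flowers on a table"),
]
norman_data = [
    ((0.2, 0.8), "man is murdered by machine gun in broad daylight"),
    ((0.9, 0.1), "man gets pulled into dough machine"),
]

inkblot = (0.25, 0.75)  # the same ambiguous input shown to both "models"
print(caption(inkblot, standard_data))  # benign: the baseball glove
print(caption(inkblot, norman_data))    # Norman: the machine gun
```

Same function, same inkblot, two different answers—because the only thing that changed was the training data.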