Does Twitter's New Hate Policy Cover Trump's North Korea Tweet?

The company announced new rules to protect against hateful conduct and violent threats in December—but the policy contains an intentional loophole.

A man displays a picture of President Trump on his phone in Tehran, Iran. (Reuters)
Editor’s Note: This story was originally published on December 18, 2017. It has been updated substantially to reflect the events of January 3, 2018.
Updated at 12:00 AM ET on January 3, 2018

In a speech on New Year’s Day, Kim Jong Un, the supreme leader of North Korea, offered to begin direct negotiations with South Korea while brazenly threatening the United States with nuclear war. “The entire United States is within range of our nuclear weapons,” he said, “and a nuclear button is always on my desk.”

Late Tuesday night Eastern time, the President of the United States responded in a tweet:

President Trump appeared to be threatening Kim with nuclear war. He was issuing, in other words, a violent threat, though the violence he described is of an unusually hideous and mechanized kind.

This would seem to present a problem for Twitter: The company bans most users of its service from issuing violent threats against civilians. Yet when the Seattle-based content-moderation expert Rochelle LaPlante reported Trump’s tweet as a violation of its rules, she received an automated rejection:

Twitter, in other words, does not consider Trump in violation of its rules. Why not? The answer goes back to a major policy change the company made last month.

In December, Twitter announced new and stricter rules banning bigoted content and hate groups from its platform. It also said it would begin enforcing its anti-hate and violence rules more stringently than it has in the past.

The company was responding to pressure from its users, who have begged for both clearer rules and stronger enforcement for years.

“Freedom of expression means little if voices are silenced because people are afraid to speak up,” the announcement reads. That’s a new line for a company that had long insisted that, even in privately owned forums like its service, only good speech could fight bad speech.

According to Twitter, the rules ban content that includes “a violent threat or multiple slurs, epithets, [and] racist or sexist tropes,” as well as material that “incites fear, or reduces someone to less than human.” They also prohibit groups that advocate violence against civilians.

Depending on how they’re interpreted, the new rules could give moderators wide latitude to suspend and ban users who encourage violence against civilians or propagandize for hate groups. The guidelines do not draw a distinction between behavior on and off the site: If someone tweets only in coded language on Twitter but calls for racial violence or genocide elsewhere on the web or in person, they could still be banned from the service.

Logos or symbols affiliated with hate groups will not get a user banned on their own, but they will carry a “sensitive media” tag, meaning they will not automatically display to the site’s users.

But “context matters when evaluating for abusive behavior,” Twitter warned, and it built two big exceptions into the new policy. First, the ban on advocating violence against civilians does not apply to “military or government entities.” Second, the company may waive its own rules if “the behavior is newsworthy and in the legitimate public interest.”

It wasn’t hard to figure out the famous Twitter user to whom those loopholes most apply.

The two highest-profile users to get kicked off the service since the rule change are Jayda Fransen and Paul Golding, the leaders of Britain First, an ultranationalist and virulently anti-Islam U.K. political party and “street-defense organization.”

In November, President Trump retweeted a few of Fransen’s fake anti-Muslim videos to his more than 43 million followers. British Prime Minister Theresa May condemned the president’s retweets, saying that Britain First spread “hateful narratives that peddle lies and stoke tensions.” The party has an estimated 1,000 members in the United Kingdom.

These rules aren’t just an insurance policy for the company; they have already been used to shield the president from suspension. In September, when Trump warned in a tweet that “Little Rocket Man ... won’t be around much longer,” the company said the threatening tweet didn’t violate its guidelines because it was “newsworthy.”

Now the company has invoked those exceptions again. Wednesday’s decision makes clear that heads of state, including President Trump, enjoy the same monopoly on violence on Twitter that they already hold in the world beyond it.

Robinson Meyer is a former staff writer at The Atlantic and the former author of the newsletter The Weekly Planet.