Google Developing Tools to Suppress Online Speech, Protect Elites’ Feelings

Google's Executive Chairman Eric Schmidt addresses the 9th Global Competitiveness Forum (FAYEZ NURELDINE/AFP/Getty Images)

Jigsaw, a small subsidiary of Google, is working on multiple technological projects that would censor speech on the Internet and increase monitoring of its users, according to an article published in Wired.

Under a veil of protecting people from abuse by authoritarian regimes and “online trolls,” Jigsaw founder and president Jared Cohen is releasing a set of tools known as ‘Conversation AI.’

“I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight… [we will] do everything we can to level the playing field,” said Cohen. Jigsaw argues that online trolls bully people into self-censoring their views, and that online speech must be moderated to ensure that nobody is silenced.

Conversation AI will use machine-learning techniques to pick up on “harassment” and “abuse” faster than any human moderator possibly could. According to Google, the filter agrees with a panel of human raters on which messages are “abusive” 92% of the time, while producing a false positive only 10% of the time.
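Wired does not describe the model's internals, but the approach it outlines, a supervised classifier trained on human-labeled comments that assigns new messages an abuse score, can be sketched roughly. The snippet below is a hypothetical illustration using scikit-learn; the training comments, labels, and flagging threshold are invented for demonstration and are not Jigsaw's actual data or code.

```python
# Hypothetical sketch of the kind of supervised "abuse" classifier described:
# train on human-labeled comments, then score new text against a threshold.
# NOT Jigsaw's code; all comments, labels, and the 0.5 cutoff are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = judged abusive by human raters, 0 = not.
comments = [
    "You are an idiot and should shut up",
    "I disagree with your argument about the election",
    "Nobody cares what you think, loser",
    "Thanks for sharing, that was an interesting read",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score a new message; anything above the chosen threshold gets flagged.
new_comment = ["Donald Trump is a moron"]
probability_abusive = model.predict_proba(new_comment)[0][1]
print(f"abuse score: {probability_abusive:.2f}")
if probability_abusive > 0.5:
    print("flagged for moderation")
```

The 92% agreement and 10% false-positive figures quoted above are exactly the trade-off such a threshold controls: lowering it catches more genuinely abusive messages, but flags more legitimate speech along with them.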

The major issue with such a tool is the possible unintended consequences that automatic detection could create.

Writing in Wired, Andy Greenberg highlighted that “throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect.” When discussing the potential for “collateral damage” with its inventors, co-creator Lucas Dixon argued that the team wanted to “let communities have the discussions they want to have… there are already plenty of nasty places on the Internet.”

“What we can do is create places where people can have better conversations,” Dixon claimed, which Greenberg noted was “[favoring] a sanitized Internet over a freewheeling one.”

There also does not seem to be much appetite for such a completely sanitized Internet. Emma Llansó, director of the Free Expression Project at the nonprofit Center for Democracy and Technology, noted that “an automated detection system can open the door to the delete-it-all option, rather than spending the time and resources to identify false positives.”

Even feminists who face online trolls were nervous about the idea. Sady Doyle told Wired, “People need to be able to talk in whatever register they talk… imagine what the Internet would be like if you couldn’t say ‘Donald Trump is a moron,'” a phrase that registered a 99/100 on the AI’s personal attack scale.

“Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying,” said Cohen at a recent meeting of Jigsaw. He gave examples of “an Albanian LGBT activist who tries to hide his identity on Facebook, despite its real-names-only policy, and an administrator for a Libyan youth group wary of government infiltrators,” as the people Jigsaw is trying to protect.

Yet such a system could easily be developed by tyrannical regimes overseas to detect populist uprisings within their online borders, especially after the UN takes control of ICANN on October 1st.

Founded in 2010 and previously known as Google Ideas, Jigsaw has come under fire before when attempting to tackle online “harassment.” In September of last year, it invited people who, according to Wired, had been “harassed by the anti-feminist GamerGate movement,” when in fact some of them, such as Randi Harper and Zoe Quinn, were among the worst online harassers themselves. After this information was made public, Google employees began distancing themselves from the supposed “victims.”

Jack Hadfield is a student at the University of Warwick and a regular contributor to Breitbart Tech. You can follow him on Twitter @ToryBastard_ or email him at jack@yiannopoulos.net.
