Inside Google’s Internet Justice League and Its AI-Powered War on Trolls

Andy Greenberg, Wired, September 19, 2016

{snip}

For years now, on Twitter and practically any other freewheeling public forum, the trolls have been out in force. Just in recent months: Trump’s anti-Semitic supporters mobbed Jewish public figures with menacing Holocaust “jokes.” Anonymous racists bullied African American comedian Leslie Jones off Twitter temporarily with pictures of apes and Photoshopped images of semen on her face. Guardian columnist Jessica Valenti quit the service after a horde of misogynist attackers resorted to rape threats against her 5-year-old daughter. “It’s too much,” she signed off. “I can’t live like this.” Feminist writer Sady Doyle says her experience of mass harassment has induced a kind of permanent self-censorship. “There are things I won’t allow myself to talk about,” she says. “Names I won’t allow myself to say.”

{snip}

All this abuse, in other words, has evolved into a form of censorship, driving people offline and silencing their voices. For years, victims have called on, even clamored for, the companies that created these platforms to help slay the monster they brought to life. But the companies’ responses have generally amounted to a Sisyphean game of whack-a-troll.

Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment, with what Jigsaw engineers say is accuracy far better than any keyword filter’s and speed far beyond any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says Jigsaw founder and president Jared Cohen. “To do everything we can to level the playing field.”

Conversation AI represents just one of Jigsaw’s wildly ambitious projects. The New York-based think tank and tech incubator aims to build products that use Google’s massive infrastructure and engineering muscle not to advance the best possibilities of the Internet but to fix the worst of it: surveillance, extremist indoctrination, censorship. The group sees its work, in part, as taking on the most intractable jobs in Google’s larger mission to make the world’s information “universally accessible and useful.”

Cohen founded Jigsaw, which now has about 50 staffers (almost half of them engineers), after a brief, high-profile, and controversial career in the US State Department, where he worked to focus American diplomacy on the Internet like never before. One of the moon-shot goals he’s set for Jigsaw is to end censorship within a decade, whether it comes in the form of politically motivated cyberattacks on opposition websites or government strangleholds on Internet service providers. And if that task isn’t daunting enough, Jigsaw is about to unleash Conversation AI on the murky challenge of harassment, where the only way to protect some of the web’s most repressed voices may be to selectively shut up others. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

{snip}

To create a viable tool, Jigsaw first needed to teach its algorithm to tell the difference between harmless banter and harassment. For that, it would need a massive number of examples. So the group partnered with The New York Times, which gave Jigsaw’s engineers 17 million comments from Times stories, along with data about which of those comments were flagged as inappropriate by moderators. Jigsaw also worked with the Wikimedia Foundation to parse 130,000 snippets of discussion around Wikipedia pages. It showed those text strings to panels of 10 people recruited randomly from the CrowdFlower crowdsourcing service and asked whether they found each snippet to represent a “personal attack” or “harassment.” Jigsaw then fed the massive corpus of online conversation and human evaluations into Google’s open source machine learning software, TensorFlow.
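The pipeline described above is standard supervised text classification: human-labeled comments go in, a model that scores new text comes out. Jigsaw used TensorFlow on millions of examples; the sketch below substitutes a minimal pure-Python logistic regression over bag-of-words features and a handful of invented toy comments, just to make the shape of the training loop concrete. None of the data or names here come from Jigsaw.

```python
import math
import random

# Toy stand-ins for the kind of labeled data Jigsaw collected:
# comment text paired with a human judgment (1 = attack, 0 = benign).
# The real corpus (NYT comments, Wikipedia discussions) is vastly larger.
LABELED = [
    ("you are an idiot", 1),
    ("go away you troll", 1),
    ("nobody wants you here", 1),
    ("what a stupid comment", 1),
    ("thanks for the helpful answer", 0),
    ("great article, well written", 0),
    ("i disagree but see your point", 0),
    ("could you cite a source", 0),
]

def featurize(text):
    """Bag-of-words: count each lowercase token in the string."""
    feats = {}
    for tok in text.lower().split():
        feats[tok] = feats.get(tok, 0) + 1
    return feats

def train(examples, epochs=200, lr=0.5):
    """Fit logistic regression with stochastic gradient descent."""
    weights, bias = {}, 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for text, label in examples:
            feats = featurize(text)
            z = bias + sum(weights.get(f, 0.0) * v for f, v in feats.items())
            pred = 1.0 / (1.0 + math.exp(-z))
            err = label - pred
            bias += lr * err
            for f, v in feats.items():
                weights[f] = weights.get(f, 0.0) + lr * err * v
    return weights, bias

def score(weights, bias, text):
    """Model's probability that a string is a personal attack."""
    feats = featurize(text)
    z = bias + sum(weights.get(f, 0.0) * v for f, v in feats.items())
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w, b = train(list(LABELED))
print(score(w, b, "you idiot troll"))        # high score: likely attack
print(score(w, b, "well written article"))   # low score: likely benign
```

The real system differs mainly in scale: neural models trained in TensorFlow on millions of crowd-labeled strings rather than a dozen hand-written ones, but the input/output contract, text in, attack probability out, is the same.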

{snip}

In fact, by some measures Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10 percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack. For now the tool looks only at the content of that single string of text. But Green says Jigsaw has also looked into detecting methods of mass harassment based on the volume of messages and other long-term patterns.
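The two figures Google cites measure different things: agreement is how often the model’s verdict matches the human panel’s, while the false-positive rate is how often the model flags text the humans judged benign. A minimal sketch of both computations, using illustrative toy labels rather than Jigsaw’s actual evaluation data:

```python
def evaluate(predictions, human_labels):
    """Compare model judgments (True = attack) against a human panel's.

    Returns (agreement, false_positive_rate):
      agreement: fraction of items where model and humans concur
      false_positive_rate: fraction of human-benign items the model flags
    """
    assert len(predictions) == len(human_labels)
    agree = sum(p == h for p, h in zip(predictions, human_labels))
    benign = [p for p, h in zip(predictions, human_labels) if not h]
    fpr = sum(benign) / len(benign) if benign else 0.0
    return agree / len(predictions), fpr

# Illustrative toy data, not Jigsaw's evaluation set:
model  = [True, True, False, True, False, False, False, False, False, True]
humans = [True, True, False, False, False, False, False, False, False, True]

agreement, fpr = evaluate(model, humans)
print(agreement)  # 0.9 — model matched the panel on 9 of 10 items
print(fpr)        # ~0.143 — flagged 1 of the 7 items humans called benign
```

Note that the two numbers can move independently: a model that flags everything has perfect recall but a terrible false-positive rate, which is why both are reported.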

Wikipedia and the Times will be the first to try out Google’s automated harassment detector on comment threads and article discussion pages. Wikimedia is still considering exactly how it will use the tool, while the Times plans to make Conversation AI the first pass of its website’s comments, blocking any abuse it detects until it can be moderated by a human. {snip}
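The Times’ planned “first pass” amounts to a simple triage rule: publish clearly clean comments immediately, and hold anything the model flags until a human moderator can look at it. A hypothetical sketch (the function names and threshold are illustrative, not from Jigsaw’s or the Times’ actual systems):

```python
def triage(comment, toxicity_score, threshold=0.8):
    """First-pass moderation: auto-publish low-scoring comments,
    route high-scoring ones to a human review queue.

    toxicity_score would come from a model like Conversation AI;
    the 0.8 threshold is an arbitrary illustrative choice.
    """
    if toxicity_score >= threshold:
        return ("held_for_review", comment)
    return ("published", comment)

print(triage("thanks, great piece", 0.05))  # published immediately
print(triage("you absolute idiot", 0.97))   # held for a human moderator
```

The key design point is that the model never deletes anything on its own here; it only decides which comments wait for a human, which is how a 10 percent false-positive rate becomes tolerable in practice.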

{snip}
