Posted on April 28, 2020

Internet Speech Will Never Go Back to Normal

Jack Goldsmith, The Atlantic, April 25, 2020

COVID-19 has emboldened American tech platforms to emerge from their defensive crouch. Before the pandemic, they were targets of public outrage over life under their dominion. Today, the platforms are proudly collaborating with one another, and following government guidance, to censor harmful information related to the coronavirus. And they are using their prodigious data-collection capacities, in coordination with federal and state governments, to improve contact tracing, quarantine enforcement, and other health measures. As Facebook’s Mark Zuckerberg recently boasted, “The world has faced pandemics before, but this time we have a new superpower: the ability to gather and share data for good.”

Civil-rights groups are tolerating these measures—emergency times call for emergency measures—but are also urging a swift return to normal when the virus ebbs. We need “to make sure that, when we’ve made it past this crisis, our country isn’t transformed into a place we don’t want to live,” warns the American Civil Liberties Union’s Jay Stanley. “Any extraordinary measures used to manage a specific crisis must not become permanent fixtures in the landscape of government intrusions into daily life,” declares the Electronic Frontier Foundation, a digital-rights group. {snip}

But the “extraordinary” measures we are seeing are not all that extraordinary. Powerful forces were pushing toward greater censorship and surveillance of digital networks long before the coronavirus jumped out of the wet markets in Wuhan, China, and they will continue to do so once the crisis passes. The practices that American tech platforms have undertaken during the pandemic represent not a break from prior developments, but an acceleration of them.

As surprising as it may sound, digital surveillance and speech control in the United States already show many similarities to what one finds in authoritarian states such as China. Constitutional and cultural differences mean that the private sector, rather than the federal and state governments, currently takes the lead in these practices, which furthers values and addresses threats different from those in China. But the trend toward greater surveillance and speech control here, and toward the growing involvement of government, is undeniable and likely inexorable.

{snip}

Beginning in the 1990s, the U.S. government and powerful young tech firms began promoting nonregulation and American-style freedom of speech as essential features of the internet. This approach assumed that authoritarian states would crumble in the face of digital networks that seemed to have American constitutional values built into them. {snip}

China quickly became worried about unregulated digital speech—as a threat both to the Communist Party’s control and to the domestic social order more generally. It began building ever more powerful mechanisms of surveillance and control to meet these threats. Other authoritarian nations would follow China’s lead. In 2009, China, Russia, and other members of the Shanghai Cooperation Organisation announced their “agreement on cooperation in the field of international information security.” The agreement presciently warned of a coming “information war,” in which internet platforms would be weaponized in ways that would threaten nations’ “social and political systems.”

{snip}

Ten years ago, speech on the American internet was a free-for-all. There was relatively little monitoring and censorship—public or private—of what people posted, said, or did on Facebook, YouTube, and other sites. In part, this was due to the legal immunity that platforms enjoyed under Section 230 of the Communications Decency Act. And in part it was because the socially disruptive effects of digital networks—various forms of weaponized speech and misinformation—had not yet emerged. As the networks became filled with bullying, harassment, child sexual exploitation, revenge porn, disinformation campaigns, digitally manipulated videos, and other forms of harmful content, private platforms faced growing pressure from governments and users to fix the problems.

The result a decade later is that most of our online speech now occurs in closely monitored playpens where many tens of thousands of human censors review flagged content to ensure compliance with ever-lengthier and more detailed “community standards” (or some equivalent). More and more, this human monitoring and censorship is supported—or replaced—by sophisticated computer algorithms. The firms use these tools to define acceptable forms of speech and other content on their platforms, which in turn sets the effective boundaries for a great deal of speech in the U.S. public forum.

{snip} YouTube has an aggressive policy of removing what it deems to be deceptive practices and foreign-influence operations related to elections. It also makes judgments about and gives priority to what it calls “authoritative voices.” Facebook has deployed a multipronged strategy that includes removing fake accounts and eliminating or demoting “inauthentic behavior.” Twitter has a similar censorship policy aimed at “platform manipulation originating from bad-faith actors located in countries outside of the US.” These platforms have engaged in “strategic collaboration” with the federal government, including by sharing information, to fight foreign electoral interference.

The platforms are also cooperating with one another and with international organizations, and sometimes law enforcement, on other censorship practices. This collaboration began with a technology that allows child pornography to be assigned a digital fingerprint and placed in centralized databases that the platforms draw on to suppress the material. A similar mechanism has been deployed against terrorist speech—a more controversial practice, since the label terrorist often involves inescapably political judgments. Sharing and coordination across platforms are also moving forward on content related to electoral interference and are being discussed for the manipulated videos known as deepfakes. The danger with “content cartels,” as the writer Evelyn Douek dubs these collaborations, is that they diminish accountability for censorship decisions and make inevitable mistakes more pervasive and harder to fix.
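The fingerprint-and-database mechanism described above can be made concrete with a minimal sketch. Production systems such as PhotoDNA use perceptual hashes designed to survive resizing and re-encoding; the toy version below substitutes an exact SHA-256 digest purely to illustrate the shared-lookup pattern, and all function names are hypothetical.

```python
# Minimal sketch of hash-based content matching across platforms.
# Assumption: a real system would use a robust perceptual hash, not SHA-256.
import hashlib

# Centralized database of fingerprints of known prohibited material,
# shared across platforms (here, just an in-memory set).
known_fingerprints: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Return a stable fingerprint for a piece of content."""
    return hashlib.sha256(content).hexdigest()

def register_prohibited(content: bytes) -> None:
    """Add a confirmed item's fingerprint to the shared database."""
    known_fingerprints.add(fingerprint(content))

def should_suppress(upload: bytes) -> bool:
    """Check an upload against the shared database at ingest time."""
    return fingerprint(upload) in known_fingerprints

# Usage: once one participant registers an item, any platform drawing on
# the same database can block re-uploads of identical content.
register_prohibited(b"example prohibited bytes")
assert should_suppress(b"example prohibited bytes")
assert not should_suppress(b"unrelated upload")
```

The design explains both the mechanism's appeal and Douek's worry: a single database entry propagates a takedown decision to every participating platform, so an erroneous entry propagates just as widely.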

And of course, mistakes are inevitable. Much of the content that the platforms censor—for example, child pornography and content that violates intellectual-property rights—is relatively easy to identify and uncontroversial to remove. But Facebook, for example, also takes down hate speech, terrorist propaganda, “cruel and insensitive” speech, and bullying speech, which are harder to identify objectively and more controversial to regulate or remove. Facebook publishes data on its enforcement of its rules. They show that the firm makes “mistakes”—defined by its own flexible criteria—in about 15 percent of the appealed cases involving supposed bullying and about 10 percent of the appealed hate-speech cases.

All these developments have taken place under pressure from Washington and Brussels. In hearings over the past few years, Congress has criticized the companies—not always in consistent ways—for allowing harmful speech. In 2018, Congress amended the previously untouchable Section 230 of the Communications Decency Act to subject the platforms to the same liability that nondigital outlets face for enabling illegal sex trafficking. Additional amendments to Section 230 are now in the offing, as are various other threats to regulate digital speech. In March 2019, Zuckerberg invited the government to regulate “harmful content” on his platform. {snip}

Against this background, the tech firms’ downgrading and outright censorship of speech related to COVID-19 are not large steps. Facebook is using computer algorithms more aggressively, mainly because concerns about the privacy of users prevent human censors from working on these issues from home during forced isolation. As it has done with Russian misinformation, Facebook will notify users when articles that they have “liked” are later deemed to have included health-related misinformation.

But the basic approach to identifying and redressing speech judged to be misinformation or to present an imminent risk of physical harm “hasn’t changed,” according to Monika Bickert, Facebook’s head of global policy management. {snip}

{snip}

Police use subpoenas to tap into huge warehouses of personal data collected by private companies. They have used these tools to gain access to doorbell cameras that now line city blocks, microphones in the Alexa devices in millions of homes, privately owned license-plate readers that track every car, and the data in DNA databases that people voluntarily pay to enter. They also get access to information collected on smart-home devices and home-surveillance cameras—a growing share of which are capable of facial recognition—to solve crimes. And they pay for access to private tow trucks equipped with cameras that track the movements of cars throughout a city.

In other cases, federal, state, and local governments openly work in conjunction with the private sector to expand their digital surveillance. One of the most popular doorbell cameras, Ring, which is owned by Amazon, has forged video-sharing partnerships with more than 400 law-enforcement agencies in the United States. Ring actively courts law-enforcement agencies by offering discounted cameras to local police departments, which offer them to residents. The departments then use social media to encourage citizens to download Ring’s neighborhood application, where neighbors post videos and discuss ostensibly suspicious activity spotted on their cameras.

Meanwhile, the company Clearview AI provides law-enforcement agents with the ability to scan an image of a face across a database of billions of faces, scraped from popular apps and websites such as Facebook and YouTube. More than 600 law-enforcement agencies are now using Clearview’s database.
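Clearview's internal pipeline is not public, but face search at this scale generally works by mapping each face to a numeric embedding and matching a query image by nearest-neighbor search over the stored embeddings. The sketch below is a generic illustration under that assumption: the 128-dimensional vectors, the database size, and the function names are all hypothetical, with random vectors standing in for a real face-embedding model.

```python
# Generic sketch of embedding-based face search (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "database": one embedding per scraped face image, L2-normalized
# so that a dot product equals cosine similarity.
db = rng.normal(size=(100_000, 128)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)

def search(query: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k stored faces most similar to the query."""
    q = query / np.linalg.norm(query)
    scores = db @ q                      # cosine similarity to every entry
    return np.argsort(scores)[::-1][:top_k]

# Usage: embed the probe photo (here, a random stand-in), then look it up.
probe = rng.normal(size=128).astype(np.float32)
print(search(probe))
```

At billions of entries, the brute-force scan above would be far too slow; real deployments rely on approximate nearest-neighbor indexes that trade a small amount of accuracy for enormous speed.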

These developments are often greeted with blockbuster news reports and indignant commentary. And yet Americans keep buying surveillance machines and giving their data away. {snip}

{snip}