Sam Biddle, The Intercept, October 12, 2021
TO WARD OFF accusations that it helps terrorists spread propaganda, Facebook has for many years barred users from speaking freely about people and groups it says promote violence.
The restrictions appear to trace back to 2012, when in the face of growing alarm in Congress and the United Nations about online terrorist recruiting, Facebook added to its Community Standards a ban on “organizations with a record of terrorist or violent criminal activity.” This modest rule has since ballooned into what’s known as the Dangerous Individuals and Organizations policy, a sweeping set of restrictions on what Facebook’s nearly 3 billion users can say about an enormous and ever-growing roster of entities deemed beyond the pale.
In recent years, the policy has been used at a more rapid clip, including against the president of the United States, and taken on almost totemic power at the social network, trotted out to reassure the public whenever paroxysms of violence, from genocide in Myanmar to riots on Capitol Hill, are linked to Facebook. Most recently, following a damning series of Wall Street Journal articles showing the company knew it facilitated myriad offline harms, a Facebook vice president cited the policy as evidence of the company’s diligence in an internal memo obtained by the New York Times.
But as with other attempts to limit personal freedoms in the name of counterterrorism, Facebook’s DIO policy has become an unaccountable system that disproportionately punishes certain communities, critics say. It is built atop a blacklist of over 4,000 people and groups, including politicians, writers, charities, hospitals, hundreds of music acts, and long-dead historical figures.
A range of legal scholars and civil libertarians have called on the company to publish the list so that users know when they are in danger of having a post deleted or their account suspended for praising someone on it. The company has repeatedly refused to do so, claiming it would endanger employees and permit banned entities to circumvent the policy. Facebook did not provide The Intercept with information about any specific threat to its staff.
Despite Facebook’s claims that disclosing the list would endanger its employees, the company’s hand-picked Oversight Board has formally recommended publishing all of it on multiple occasions, as recently as August, because the information is in the public interest.
The Intercept has reviewed a snapshot of the full DIO list and is today publishing a reproduction of the material in its entirety, with only minor redactions and edits to improve clarity. It is also publishing an associated policy document, created to help moderators decide what posts to delete and what users to punish.
“Facebook puts users in a near-impossible position by telling them they can’t post about dangerous groups and individuals, but then refusing to publicly identify who it considers dangerous,” said Faiza Patel, co-director of the Brennan Center for Justice’s liberty and national security program, who reviewed the material.
The list and associated rules appear to be a clear embodiment of American anxieties, political concerns, and foreign policy values since 9/11, experts said, even though the DIO policy is meant to protect all Facebook users and applies to those who reside outside of the United States (the vast majority). Nearly everyone and everything on the list is considered a foe or threat by America or its allies: Over half of it consists of alleged foreign terrorists, free discussion of which is subject to Facebook’s harshest censorship.
The DIO policy and blacklist also place far looser prohibitions on commentary about predominantly white anti-government militias than on groups and individuals listed as terrorists, who are predominantly Middle Eastern, South Asian, and Muslim, or those said to be part of violent criminal enterprises, who are predominantly Black and Latino, the experts said.
The list, the foundation of Facebook’s Dangerous Individuals and Organizations policy, is in many ways what the company has described in the past: a collection of groups and leaders who have threatened or engaged in bloodshed. The snapshot reviewed by The Intercept is separated into the categories Hate, Crime, Terrorism, Militarized Social Movements, and Violent Non-State Actors. These categories were organized into a system of three tiers under rules rolled out by Facebook in late June, with each tier corresponding to speech restrictions of varying severity.
But while labels like “terrorist” and “criminal” are conceptually broad, they look more like narrow racial and religious proxies once you see how they are applied to people and groups in the list, experts said, raising the likelihood that Facebook is placing discriminatory limitations on speech.
Regardless of tier, no one on the DIO list is allowed to maintain a presence on Facebook platforms, nor are users allowed to represent themselves as members of any listed groups. The tiers determine instead what other Facebook users are allowed to say about the banned entities. Tier 1 is the most strictly limited; users may not express anything deemed to be praise or support about groups and people in this tier, even for nonviolent activities (as determined by Facebook). Tier 1 includes alleged terror, hate, and criminal groups and their alleged members, with terror defined as “organizing or advocating for violence against civilians” and hate as “repeatedly dehumanizing or advocating for harm against” people with protected characteristics. Tier 1’s criminal category is almost entirely American street gangs and Latin American drug cartels, predominantly Black and Latino. Facebook’s terrorist category, which makes up 70 percent of Tier 1, overwhelmingly consists of Middle Eastern and South Asian organizations and individuals; such entities are disproportionately represented across all tiers of the DIO list, where close to 80 percent of the individuals listed are labeled terrorists.
There are close to 500 hate groups in Tier 1, including the more than 250 white supremacist organizations cited by Brian Fishman, Facebook’s policy director for counterterrorism and dangerous organizations, but Faiza Patel, of the Brennan Center, noted that hundreds of predominantly white right-wing militia groups that seem similar to the hate groups are “treated with a light touch” and placed in Tier 3.
Tier 2, “Violent Non-State Actors,” consists mostly of groups like armed rebels who engage in violence targeting governments rather than civilians, and includes many factions fighting in the Syrian civil war. Users can praise groups in this tier for their nonviolent actions but may not express any “substantive support” for the groups themselves.
Tier 3 is for groups that are not violent but repeatedly engage in hate speech, seem poised to become violent soon, or repeatedly violate the DIO policies themselves. Facebook users are free to discuss Tier 3 listees as they please. Tier 3 includes Militarized Social Movements, a category that, judging from its DIO entries, consists mostly of right-wing American anti-government militias, virtually all of them white.
A Facebook spokesperson categorically denied that Facebook gives extremist right-wing groups in the U.S. special treatment due to their association with mainstream conservative politics. They added that the company tiers groups based on their behavior, stating, “Where American groups satisfy our definition of a terrorist group, they are designated as terrorist organizations (E.g. The Base, Atomwaffen Division, National Socialist Order). Where they satisfy our definition of hate groups, they are designated as hate organizations (For example, Proud Boys, Rise Above Movement, Patriot Front).”
On the issue of how Facebook’s tiers often seem to sort along racial and religious lines, the spokesperson cited the presence of the white supremacists and hate groups in Tier 1 and said “focusing solely on” terrorist groups in Tier 1 “is misleading.” They added: “It’s worth noting that our approach to white supremacist hate groups and terrorist organization is far more aggressive than any government’s. All told, the United Nations, European Union, United States, United Kingdom, Canada, Australia, and France only designate thirteen distinct white supremacist organizations. Our definition of terrorism is public, detailed and was developed with significant input from outside experts and academics. Unlike some other definitions of terrorism, our definition is agnostic to religion, region, political outlook, or ideology. We have designated many organizations based outside the Middle Eastern and South Asian markets as terrorism, including orgs based in North America and Western Europe (including the National Socialist Order, the Feurerkrieg Division, the Irish Republican Army, and the National Action Group).”
On Facebook’s list, however, the number of listed terrorist groups based in North America or Western Europe amounts to only a few dozen out of over a thousand.
Facebook’s list represents an expansive definition of “dangerous” throughout. It includes the deceased 14-year-old Kashmiri child soldier Mudassir Rashid Parray, over 200 musical acts, television stations, a video game studio, airlines, the medical university working on Iran’s homegrown Covid-19 vaccine, and many long-deceased historical figures like Joseph Goebbels and Benito Mussolini. Including such figures is “fraught with problems,” a group of University of Utah social media researchers recently told Facebook’s Oversight Board.
Internal Facebook materials walk moderators through the process of censoring speech about the blacklisted people and groups. The materials, portions of which were previously reported by The Guardian and Vice, attempt to define what it means for a user to “praise,” “support,” or “represent” a DIO listee and detail how to identify prohibited comments.
Although Facebook provides a public set of such guidelines, it publishes only limited examples of what these terms mean, rather than definitions. Internally, it offers not only the definitions, but also much more detailed examples, including a dizzying list of hypotheticals and edge cases to help determine what to do with a flagged piece of content.
Facebook’s global content moderation workforce, an outsourced army of hourly contractors frequently traumatized by the graphic nature of their work, is expected to use these definitions and examples to figure out whether a given post constitutes forbidden “praise” or meets the threshold of “support,” among other criteria. The effect is to shoehorn the speech of billions of people from hundreds of countries and countless cultures into a tidy framework decreed from Silicon Valley. Though these workers operate in tandem with automated software systems, determining what’s “praise” and what isn’t frequently comes down to personal judgment calls assessing posters’ intent. “Once again, it leaves the real hard work of trying to make Facebook safe to outsourced, underpaid and overworked content moderators who are forced to pick up the pieces and do their best to make it work in their specific geographic location, language and context,” said Martha Dark, the director of Foxglove, a legal aid group that works with moderators.
In the internal materials, Facebook essentially says that users are allowed to speak of Tier 1 entities so long as this speech is neutral or critical, as any commentary considered positive could be construed as “praise.” Facebook users are barred from doing anything that “seeks to make others think more positively” or “legitimize” a Tier 1 dangerous person or group or to “align oneself” with their cause — all forms of speech considered “praise.” The materials say, “Statements presented in the form of a fact about the entity’s motives” are acceptable, but anything that “glorifies the entity through the use of approving adjectives, phrases, imagery, etc” is not. Users are allowed to say that a person Facebook considers dangerous “is not a threat, relevant, or worthy of attention,” but they may not say they “stand behind” a person on the list they believe was wrongly included — that’s considered aligning themselves with the listee. Facebook’s moderators are similarly left to decide for themselves what constitutes dangerous “glorification” versus permitted “neutral speech,” or what counts as “academic debate” and “informative, educational discourse” for billions of people.
Determining what content meets Facebook’s definitions of banned speech under the policy is a “struggle,” according to a Facebook moderator working outside of the U.S. who responded to questions from The Intercept on the condition of anonymity. This person said analysts “typically struggle to recognize political speech and condemnation, which are permissible context for DOI.” They also noted the policy’s tendency to misfire: “[T]he fictional representations of [dangerous individuals] are not allowed unless shared in a condemning or informational context, which means that sharing a Taika Waititi photo from [the film] Jojo Rabbit will get you banned, as well as a meme with the actor playing Pablo Escobar (the one in the empty swimming pool).”