Posted on May 14, 2020

France Threatens Big Fines for Social Media with Hate-Speech Law

Sam Schechner, Wall Street Journal, May 13, 2020

France is empowering regulators to slap large fines on social-media companies that fail to remove postings deemed hateful, one of the most aggressive measures yet in a broad wave of rules aimed at forcing tech companies to more tightly police their services.

France’s National Assembly passed a law Wednesday that threatens fines of up to €1.25 million ($1.36 million) against companies that fail to remove “manifestly illicit” hate-speech posts—such as incitement to racial hatred or anti-Semitism—within 24 hours of being notified.

The law also gives the country’s audiovisual regulator the right to audit companies’ systems for removing such content and to fine them up to 4% of their global annual revenue in cases of serious and repeated violations. The measure takes effect on July 1.

{snip}

A signature effort of French President Emmanuel Macron, the law, first proposed last year, has been promoted by the government as part of a broad effort to reset the balance for how much responsibility tech companies should assume for illegal or harmful activity that happens on their platforms.

{snip}

{snip} Later this year, the European Union’s executive arm is expected to propose what it calls a Digital Services Act, which would update—and perhaps abolish—liability protections from its old e-commerce directive. The U.K. is pursuing a related idea with legislation on what it calls Online Harms, which would create a “duty of care” for tech companies to take measures to prevent a gamut of illegal or potentially harmful content from being published on their platforms, or face fines.

Germany in 2018 implemented a law similar to the French one, threatening fines of up to €50 million for companies that systematically fail to remove several types of content deemed hateful within 24 hours. Last year, German officials issued a €2 million fine to Facebook Inc. under the law.

{snip}

Passage of the new French law came a day after the announcement of a proposed settlement of a U.S. lawsuit against Facebook that also centered on its handling of problematic content.

The agreement addresses a class-action complaint originally filed in September 2018 claiming that content moderators for the internet giant suffered psychological trauma and post-traumatic stress disorder after they were required to review offensive and disturbing videos and images.

Under the agreement, which covers former employees and contractors in four states and must be approved by a judge, Facebook agreed to pay up to $52 million, from which each moderator can receive $1,000 intended to cover medical costs. Those diagnosed with certain conditions can be eligible for more money.

{snip}