Canadian Government’s Proposed Online Harms Legislation Threatens Our Human Rights

Ilan Kogan, CBC, October 5, 2021

The Canadian government is considering new rules to regulate how social media platforms moderate potentially harmful user-generated content. Already, the proposed legislation has been criticized by internet scholars — across the political spectrum — as some of the worst in the world.

Oddly, the proposed legislation reads like a list of the most widely condemned policy ideas globally. Elsewhere, these ideas have been vigorously protested by human rights organizations and struck down as unconstitutional. {snip}

{snip}

The legislation is simple. First, online platforms would be required to proactively monitor all user speech and evaluate its potential for harm. Online communication service providers would need to take “all reasonable measures,” including the use of automated systems, to identify harmful content and restrict its visibility.

Second, any individual would be able to flag content as harmful. The social media platform would then have 24 hours from initial flagging to evaluate whether the content was in fact harmful. Failure to remove harmful content within this period could trigger a stiff penalty: up to three per cent of the service provider’s gross global revenue or $10 million, whichever is higher. For Facebook, that would be a penalty of $2.6 billion per post.
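The article's $2.6 billion figure can be checked with a quick sketch of the penalty formula described above. This is an illustration only: the per-post reading of the penalty and the revenue figure (roughly $86 billion in gross global revenue for Facebook in 2020) are assumptions for the example, not text from the bill.

```python
def max_penalty(gross_global_revenue: float) -> float:
    """Maximum penalty per violation under the proposal:
    3% of gross global revenue or $10 million, whichever is higher."""
    return max(0.03 * gross_global_revenue, 10_000_000)

# Assuming ~$86 billion in gross global revenue, the cap works out
# to roughly $2.58 billion, matching the article's ~$2.6 billion figure.
facebook_penalty = max_penalty(86e9)

# For a small provider, the $10 million floor applies instead.
small_provider_penalty = max_penalty(1e6)
```

For any provider whose gross global revenue falls below about $333 million, the flat $10 million floor is the larger of the two amounts and therefore applies.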

Proactive monitoring of user speech presents serious privacy issues. Without restrictions on proactive monitoring, national governments would be able to significantly increase their surveillance powers.

The Canadian Charter of Rights and Freedoms protects all Canadians from unreasonable searches. But under the proposed legislation, a reasonable suspicion of illegal activity would not be necessary for a service provider, acting on the government’s behalf, to conduct a search. All content posted online would be searched. Potentially harmful content would be stored by the service provider and transmitted — in secret — to the government for criminal prosecution.

Canadians who have nothing to hide still have something to fear. Social media platforms process billions of pieces of content every day. Proactive monitoring is only possible with an automated system. Yet automated systems are notoriously inaccurate. Even Facebook’s manual content moderation accuracy has been reported to be below 90 per cent.

{snip} Many innocent Canadians will be referred for criminal prosecution under the proposed legislation.

{snip}

Identifying illegal content is difficult, and therefore the risk of collateral censorship is high. Hate speech restrictions may best illustrate the problem. The proposal expects platforms to apply the Supreme Court of Canada’s hate speech jurisprudence. Identifying hate speech is difficult for courts, let alone algorithms or low-paid content moderators who must make decisions in mere seconds. {snip}

{snip}