
Subscribe to my newsletter, The Wavelength, if you want the content on my blog delivered to your inbox four times a week. The Wavelength will transition to a subscription product early in 2022. Posts on this site will continue in a fashion. Details to come.
Over the summer, when the Wavelength was a lighter product, Australia enacted the “Online Safety Bill 2021” and the “Online Safety (Transitional Provisions and Consequential Amendments) Bill 2021” (see here for more detail and analysis of the bills as introduced). Australia appears to have been the first of the so-called Five Eyes nations to seek to regulate the online world for harm. Three of the four other Five Eyes nations are at various stages of legislating.
However, the United States (U.S.) Congress seems deadlocked on many of the issues presented by online harm and is, in any event, bounded by the First Amendment, which limits how the U.S. government may regulate speech. Of course, the First Amendment is not limitless, and the Supreme Court of the United States has allowed regulation of some speech in some ways, notably in the famous First World War-era case in which Justice Oliver Wendell Holmes formulated the yelling-fire-in-a-crowded-theater scenario. Nonetheless, U.S. policy proposals have looked to change the legal protection platforms like Facebook and Twitter enjoy for the material others post on their platforms and for moderating, editing, and taking down some content (i.e., 47 U.S.C. 230, more popularly known as Section 230). And so, it appears very unlikely the U.S. would choose to regulate online harm and ills the same way its “cousins” are looking to do.
I think some general comments about the proposed bills are warranted. London, Ottawa, and Wellington are struggling with many of the same issues. How does one define material or speech as harmful? According to whom? Anti-Semitic content isn’t welcome in my home, but it may be in others. Also, the potential extraterritorial reach of some of these laws poses legal and practical problems. Let’s say Canada succeeds in enacting a bill requiring the Twitters of the world to take down harmful material (according to Canada’s definition). Would this apply throughout the world? Or just in Canada? Moreover, what responsibilities would platforms have to monitor access to material that is legal in the U.S. but not in Canada?
The first stop on the tour is the United Kingdom (UK). A committee of Parliament is beginning public and private hearings on the draft “Online Safety Bill” (see here for more detail and analysis on the bill as introduced in May 2021). Here’s the beginning of what I wrote then on the bill:
- The UK’s Department for Digital, Culture, Media & Sport (DCMS) published its long-awaited online harms bill that sets out the framework by which the UK proposes to regulate harmful and illegal online content. The UK follows Australia and the European Union in proposing legislation to regulate the online world. The Australian Parliament is currently considering the “Online Safety Bill 2021” and the “Online Safety (Transitional Provisions and Consequential Amendments) Bill 2021” (see here for more detail and analysis). The European Commission (EC) rolled out its Digital Services Act in December 2020 and is currently negotiating a final bill with other EU stakeholders (see here for more detail and analysis). And, of course, in the United States (U.S.), there have been calls from both political parties and many stakeholders to revise 47 U.S.C. 230 (aka Section 230), the liability shield many technology companies have to protect them from litigation arising from content they allow others to post. However, to date, no such legislation has advanced beyond mere introduction.
- The British bill kicks a lot of the details down the road, leaving the regulator and the government to sort out key parts of the law. And so, implementation will prove crucial and likely another front on which online platforms can make their cases.
The Joint Select Committee started public hearings last week on 9 September and heard from three panels of witnesses. Moreover, the committee has received extensive written input, including a submission from the Department for Digital, Culture, Media and Sport and the Home Office that laid out the government’s goals:


The UK’s bill has shades of the bills and talking points Republicans have been flogging in Congress. The largest platforms will need to make clearer their rules and procedures for taking down “controversial” material. And, in keeping with much of London’s emphasis on the harms the online world poses to children, their well-being and protection are at the forefront of this bill and its rationale.
As noted, the Joint Select Committee held a hearing on 9 September on the bill with these witnesses:
- Mr Imran Ahmed, CEO and Founder at Center for Countering Digital Hate
- Sanjay Bhandari, Chair at Kick It Out
- Edleen John, Director of International Relations, Corporate Affairs and Co-Partner for Equality, Diversity and Inclusion at the Football Association
- Rio Ferdinand, former Manchester United player
- Danny Stone MBE, Director at Antisemitism Policy Trust
- Nancy Kelley, Chief Executive at Stonewall
In the unofficial transcript (and Parliament is quite firm that anyone using this early transcript stress that it has not been corrected by MPs, Lords, or witnesses), Center for Countering Digital Hate CEO and Founder Imran Ahmed claimed:


Ahmed is clearly targeting the COVID-19 disinformation that platforms have been struggling to manage amid pressure from many governments. Not long ago, U.S. President Joe Biden accused Facebook of killing people through the disinformation on its platform. But Ahmed also blames the online platforms and the proliferation of disinformation for the 6 January insurrection and for the abuse athletes of color experience. Ahmed points to profit as the primary reason this sort of disinformation persists online. He also had some advice on how to revise the Online Safety Bill.
Ahmed was asked for an assessment of the bill, and he responded:


I will not go through the entire transcript, but the above excerpts serve to explain one dominant perspective on online harms (i.e., misinformation and disinformation) and some of the perceived shortcomings of the bill. It bears noting that discontent about problems with the bill is filtering through to the media, not least over the media’s attempts to get answers on who will be considered a journalist under the bill and hence exempt from much of the new regime. Naturally, the media is concerned about its place in any online harms bill, but other affected industries are obviously lobbying as furiously as possible to protect their interests, correct problems, and water down enforcement provisions. It will be interesting to see what a revised bill looks like.
Incidentally, the chair of the Joint Select Committee, Damian Collins MP, hosts an excellent podcast, Infotagion; both its June 2021 episode and its most recent episode cover the Online Safety Bill. I might be wrong, but there are not many Members of the U.S. Congress with such podcasts.
Moreover, a different committee, the Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation, is conducting an inquiry into “Online harms and disinformation,” a matter with obvious overlap. This sub-committee’s deadline for written input came on 3 September 2021, and the sub-committee will likely draft and issue a report sooner rather than later.
Next stop is Ottawa. This summer, the government of Canada introduced its own online harms bill, C-36, whose fate may well hinge on the outcome of Canada’s general election next week. Nonetheless, the current government in Ottawa is accepting comments on the bill until 25 September. To help Canadians and others comment, the government made available:
- A discussion guide that summarizes and outlines the Government’s overall approach.
- A technical paper that summarizes the proposed instructions to inform the upcoming legislation.
The Parliament offered this summary of the bill while the Parliamentary Information and Research Service of the Library of Parliament works on a more detailed, and one presumes a more authoritative, summary:
- On 23 June 2021, the Minister of Justice introduced Bill C-36, An Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech), in the House of Commons and it was given first reading.
- Bill C-36 amends the Criminal Code to create a recognizance to keep the peace relating to hate propaganda and hate crime and to define “hatred” for the purposes of two hate propaganda offences. It also makes related amendments to the Youth Criminal Justice Act.
- In addition, it amends the Canadian Human Rights Act to provide that it is a discriminatory practice to communicate or cause to be communicated hate speech by means of the Internet or other means of telecommunication in a context in which the hate speech is likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination. It authorizes the Canadian Human Rights Commission to accept complaints alleging this discriminatory practice and authorizes the Canadian Human Rights Tribunal to adjudicate complaints and order remedies.
In a press release, Ottawa explained its understanding of the online harms wrought by social media use:
- Individuals and groups use social media platforms to spread hateful messaging. Indigenous Peoples and equity-deserving groups such as racialized individuals, religious minorities, LGBTQ2 individuals and women are disproportionately affected by hate, harassment, and violent rhetoric online. Hate speech harms the individuals targeted, their families, communities, and society at large. And it distorts the free exchange of ideas by discrediting or silencing targeted voices.
- Social media platforms can be used to spread hate or terrorist propaganda, counsel offline violence, recruit new adherents to extremist groups, and threaten national security, the rule of law and democratic institutions. At their worst, online hate and extremism can incite real-world acts of violence in Canada and anywhere in the world, as was seen on January 29, 2017 at the Centre culturel islamique de Québec, and on March 15, 2019, in Christchurch, New Zealand.
- Social media platforms are also used to sexually exploit children. Women and girls, predominantly, are victimized through the sharing of intimate images without the consent of the person depicted. These crimes can inflict grave and enduring trauma on survivors, which is made immeasurably worse as this material proliferates on the internet and social media.
- Social media platforms have significant impacts on expression, democratic participation, national security, and public safety. These platforms have tools to moderate harmful content. Mainstream social media platforms have voluntary content moderation systems that flag and test content against their community guidelines. But some platforms take decisive action in a largely ad-hoc fashion. These responses by social media companies tend to be reactive in nature and may not appropriately balance the wider public interest. Also, social media platforms are not required to preserve evidence of criminal content or notify law enforcement about criminal content, outside of mandatory reporting for child pornography offences. More proactive reporting could make it easier to hold perpetrators to account for harmful online activities.
The government stressed it “is committed to confronting online harms while respecting freedom of expression, privacy protections, and the open exchange of ideas and debate online.”
Turning back to the discussion guide and technical paper the government made available, in the first the government contended:
New legislation would apply to ‘online communication service providers’.
The concept of online communication service provider is intended to capture major platforms, (e.g., Facebook, Instagram, Twitter, YouTube, TikTok, Pornhub), and exclude products and services that would not qualify as online communication services, such as fitness applications or travel review websites.
The legislation would not cover private communications, nor telecommunications service providers or certain technical operators. There would be specific exemptions for these services.
The legislation would also authorize the Government to include or exclude categories of online communication service providers from the application of the legislation within certain parameters.
The legislation would target five categories of harmful content:
- terrorist content;
- content that incites violence;
- hate speech;
- non-consensual sharing of intimate images; and
- child sexual exploitation content.
While all of the definitions would draw upon existing law, including current offences and definitions in the Criminal Code, they would be modified in order to tailor them to a regulatory – as opposed to criminal – context.
These categories were selected because they are the most egregious kinds of harmful content. The Government recognizes that there are other online harms that could also be examined and possibly addressed through future programming activities or legislative action.
The Liberal Party government continued:
In addition to the legislative amendments proposed under Bill C-36, further modifications to Canada’s existing legal framework to address harmful content online could include:
- modernizing An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service (referred to as the Mandatory Reporting Act) to improve its effectiveness; and
- amending the Canadian Security and Intelligence Service Act to streamline the process for obtaining judicial authority to acquire basic subscriber information of online threat actors.
New Zealand is also in the midst of considering an online harms bill, the “Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill.” This bill was driven by the livestreaming of the attacks on Christchurch mosques, as the bill’s new sponsor explained in February 2021:
This bill addresses specific legislative and regulatory gaps in our current online content regulation. These were highlighted in the tragic events of the Christchurch mosque attacks on 15 March 2019. The terrorist of the Christchurch attacks sought to exploit online platforms to promote his acts of hate-based violence. Following the original livestream broadcast, footage of the attacks spread across the internet through social media, and in the days that followed we saw thousands of links appear across our social media platforms. Unfortunately, some of these links were able to autoplay. Viewing this type of content can be extremely harmful and distressing, and I’m sure that members of this House, like me, know of people—particularly young children—who saw those links and viewed that content, and it was extremely harmful to them. I can only imagine how hard that must be to know that that content was available to the families of the victims and the survivors.
As mentioned, on 13 September the Governance and Administration Committee published its final report on the bill, recommending that it be passed with the committee’s proposed amendments. The committee stated:

The committee stated that the bill, as introduced, proposes:

The committee explained the changes it made to the bill, including the removal of the electronic filtering provisions, which the committee noted were opposed by most people and entities that commented on the bill:


Moreover, regarding the electronic filtering systems, the committee added:
The bill as introduced did not specify the design of an electronic filter or how exactly it would operate. The lack of detail about the filter’s design, scope, and operation was a significant concern for us and for submitters.
Finally, the report also includes a helpful blackline version of the proposed changes, showing where and how the committee is advising the government to revise the bill.
© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.
Photo by Bruno Thethe from Pexels