UK Unveils Online Safety Bill

First, subscribe to my newsletter, The Wavelength, if you want all the content on my blog delivered to your inbox four times a week.

The United Kingdom (UK) has followed Australia and the European Union with legislation to address “online harms.”

Twitter

The British are coming. Does that make Mark Zuckerberg Paul Revere? Or would he rather be George Washington?

Cocktail Party

Another major bill to regulate online harm debuts. The British bill would cover both online platforms where users post content and search engines, and it would impose similar duties of care on each. London is taking an expansive view of who counts as a child: everyone 17 and under is placed in this class of users, which will receive greater protection than adults. Moreover, the bill has provisions to protect freedom of expression, privacy, political content, and “recognised” news outlets. Failure to meet the new duties could result in fines as high as 10% of a company’s annual worldwide turnover.

Meeting

The UK’s Department for Digital, Culture, Media & Sport (DCMS) published its long-awaited online harms bill that sets out the framework by which the UK proposes to regulate harmful and illegal online content. The UK follows Australia and the European Union in proposing legislation to regulate the online world. The Australian Parliament is currently considering the “Online Safety Bill 2021” and the “Online Safety (Transitional Provisions and Consequential Amendments) Bill 2021” (see here for more detail and analysis). The European Commission (EC) rolled out its Digital Services Act in December 2020 and is currently negotiating a final bill with other EU stakeholders (see here for more detail and analysis). And, of course, in the United States (U.S.), there have been calls from both political parties and many stakeholders to revise 47 U.S.C. 230 (aka Section 230), the liability shield many technology companies have to protect them from litigation arising from content they allow others to post. However, to date, no such legislation has advanced beyond mere introduction.

The British bill kicks many details into the future, where the regulator and the government will have to sort out key parts of the law. And so, implementation will prove crucial and will likely be another front on which online platforms can make their cases.

Geek Out

Prime Minister Boris Johnson’s government has released its proposed “Online Safety Bill.” In a press release, DCMS explained:

The Online Safety Bill follows the publication of the Online Harms White Paper in April 2019. An initial Government response to the consultation was published in February 2020, and a full Government response in December 2020. The full government response set out in detail the regulatory framework, which will be taken forward through this bill.

The opposition party, as is to be expected, did not laud the legislation. Jo Stevens MP, Labour’s Shadow Secretary of State for Digital, Culture, Media and Sport, asserted:

  • Over two years ago the Conservatives promised ‘world leading’ legislation in their White Paper. Instead we have watered down and incomplete proposals which lag behind the rest of the world. Even the Government’s press release admits that it’s proposals will only tackle “some of the worst abuses on social media.”
  • Labour backs criminal sanctions for senior tech executives to bring about a change of culture in these companies who for too long have been given a completely free rein.
  • As the NSPCC has identified these proposals do very little to ensure children are safe online. There is little to incentivise companies to prevent their platforms from being used for harmful practices.
  • The Bill, which will have taken the Government more than five years from its first promise to act to be published, is a wasted opportunity to put into place future proofed legislation to provide an effective and all-encompassing regulatory framework to keep people safe online.

In the accompanying Explanatory Notes, the DCMS provided an overview of the bill:

  1. The Online Safety Bill establishes a new regulatory regime to address illegal and harmful content online, with the aim of preventing harm to individuals in the United Kingdom. It imposes duties of care in relation to illegal content and content that is harmful to children on providers of internet services which allow users to upload and share user-generated content (“user-to-user services”) and on providers of search engines which enable users to search multiple websites and databases (“search services”).
  2. The Bill also imposes duties on such providers in relation to the protection of users’ rights to freedom of expression and privacy. Providers of user-to-user services which meet specified thresholds (“Category 1 services”) are subject to additional duties in relation to content that is harmful to adults, content of democratic importance and journalistic content.
  3. The Bill confers powers on the Office of Communications (OFCOM) to oversee and enforce the new regulatory regime (including dedicated powers in relation to terrorism content and child sexual exploitation and abuse (CSEA) content), and requires OFCOM to prepare codes of practice to assist providers in complying with their duties of care. The Bill also expands OFCOM’s existing duties in relation to promoting the media literacy of members of the public.

The DCMS explained “[t]he Bill is divided into seven parts:

  1. Part 1 contains definitions of the services to which the Bill applies.
  2. Part 2 sets out the duties of care that apply to providers of user-to-user and search services. These are duties to undertake risk assessments, and also duties with regards to content on their services that is illegal, harmful to children and harmful to adults.
  3. Part 3 sets out further obligations on services in relation to transparency reporting and the payment of fees.
  4. Part 4 sets out OFCOM’s powers and duties as the online safety regulator. There are specific provisions on OFCOM’s duties to carry out risk assessments and to maintain a register of categories of services. Part 4 also establishes OFCOM’s functions and powers with respect to the use of technology in relation to terrorism content and child sexual exploitation and abuse (CSEA) content, information-gathering, enforcement, research, and media literacy.
  5. Part 5 provides for the grounds and avenues for appeals against decisions by OFCOM, and for designated bodies to make super-complaints to the regulator.
  6. Part 6 provides for the powers of the Secretary of State to issue a statement of strategic priorities and guidance to OFCOM, and to review the regulatory framework established by the Bill.
  7. Part 7 contains miscellaneous and general provisions. In particular, it defines key concepts such as providers of regulated services, users, and internet services.”

The bill explains that a “user-to-user service” means “an internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service.” Moreover, a “regulated service” means—

(a) a regulated user-to-user service, or

(b) a regulated search service.

And so, such services would obviously include Facebook, YouTube, Twitter, and other platforms on which people can post, while Google and other search engines would be covered as search services. These platforms would be subject to the requirements and enforcement mechanisms of the bill. However, a number of services are exempted. Schedule 1 identifies those entities exempted from the bill, such as email providers, voice communications providers, and others.
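
To make the bill’s taxonomy concrete, here is a minimal, purely illustrative Python sketch of how a service might be sorted into the bill’s broad categories (regulated user-to-user service, regulated search service, or Schedule 1 exemption). The field names and the exemption shorthand are my own simplifications, not terms from the bill, and the real tests turn on the statutory definitions rather than simple flags.

```python
from dataclasses import dataclass

# Hypothetical shorthand for a few Schedule 1 exemptions; the actual schedule
# is more detailed and turns on statutory definitions, not labels like these.
SCHEDULE_1_EXEMPT_TYPES = {"email", "sms", "one_to_one_voice"}

@dataclass
class Service:
    name: str
    service_type: str                    # e.g., "social_media", "search", "email"
    hosts_user_generated_content: bool   # users can post content others may encounter
    is_search_engine: bool               # enables searches of multiple websites/databases

def classify(service: Service) -> str:
    """Rough, illustrative sort into the bill's broad categories."""
    if service.service_type in SCHEDULE_1_EXEMPT_TYPES:
        return "exempt (Schedule 1)"
    if service.hosts_user_generated_content:
        return "regulated user-to-user service"
    if service.is_search_engine:
        return "regulated search service"
    return "outside the bill's scope"

print(classify(Service("ExampleSocial", "social_media", True, False)))
# -> regulated user-to-user service
print(classify(Service("ExampleSearch", "search", False, True)))
# -> regulated search service
```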

The bill would establish one set of duties of care for user-to-user services and another for search services. The overview of Part 2 nicely summarizes the operative sections of the bill:

  • This Part imposes duties of care on providers of regulated services and requires OFCOM to issue codes of practice relating to those duties.
  • Chapter 2 imposes duties of care on providers of regulated user-to-user services.
  • Chapter 3 imposes duties of care on providers of regulated search services.
  • Chapter 4 imposes duties on providers of regulated services to assess whether a service is likely to be accessed by children.
  • Chapter 5 requires OFCOM to issue codes of practice relating to particular duties and explains what effects the codes of practice have.

Chapter 2 of the bill details the duties regulated user-to-user services must meet:

  • The illegal content risk assessment duty (see section 7(1)),
  • Each of the illegal content duties (see section 9),
  • The duty about rights to freedom of expression and privacy set out in section 12(2),
  • The duties about reporting and redress set out in section 15(2)(a), and section 15(3) and (5)
  • Each of the record-keeping and review duties (see section 16).

Those regulated user-to-user services that children are likely to access (e.g., TikTok, Snapchat, Instagram) must meet additional duties:

  • Each of the children’s risk assessment duties (see section 7(3) and (4)),
  • Each of the duties to protect children’s online safety (see section 10),

Section 7 explains the illegal content risk assessment duty, a term defined as, in relevant part:

an assessment to identify, assess and understand such of the following as appear to be appropriate, taking into account the risk profile that relates to services of that kind—

(a)  the user base;

(b)  the level of risk of individuals who are users of the service encountering the following by means of the service—

(i)  terrorism content,

(ii)  Child Sexual Exploitation and Abuse (CSEA) content,

(iii)  priority illegal content, and

(iv)  other illegal content,

Under the bill, regulated user-to-user services must perform these assessments. When they will have to do so hinges on when OFCOM completes its “risk assessment to identify, assess and understand the risks of harm to individuals presented by regulated services” and its “guidance for providers of regulated services to assist them in complying with their duties to carry out risk assessments.” Once OFCOM completes both of these tasks, regulated user-to-user services already operating in the UK have three months to do an illegal content risk assessment. Those regulated user-to-user services that want to begin operating must perform an illegal content risk assessment before they can.
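
As a rough illustration of this timing rule, consider the sketch below; the function and parameter names are my own, and the three-month window is approximated as 91 days rather than calculated to the bill’s exact specification.

```python
from datetime import date, timedelta
from typing import Optional

def illegal_content_risk_assessment_deadline(
    ofcom_tasks_completed: Optional[date],
    already_operating_in_uk: bool,
) -> str:
    """Sketch of when a regulated user-to-user service must complete its assessment."""
    if ofcom_tasks_completed is None:
        # OFCOM has not yet finished its own risk assessment and guidance.
        return "No deadline yet: OFCOM's risk assessment and guidance are still pending."
    if already_operating_in_uk:
        # Existing services get three months from the day OFCOM completes both tasks.
        deadline = ofcom_tasks_completed + timedelta(days=91)  # ~3 months, approximate
        return f"Assessment due by {deadline.isoformat()}"
    # New services must complete the assessment before they begin operating.
    return "Assessment must be completed before the service launches in the UK."

print(illegal_content_risk_assessment_deadline(date(2022, 1, 1), already_operating_in_uk=True))
```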

Moreover, there are also definitions of “children’s risk assessments” and “adults’ risk assessments.”

Regulated user-to-user services will also have safety duties regarding illegal content that will require proactive action in some cases and reactive action in others. These platforms will have

A duty, in relation to a service, to take proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service.

Additionally, these platforms will also have

A duty to operate a service using proportionate systems and processes designed to—

(a)  minimise the presence of priority illegal content;

(b)  minimise the length of time for which priority illegal content is present;

(c)  minimise the dissemination of priority illegal content;

(d)  where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

The British government is not requiring that all illegal content be taken down immediately. Instead, it is imposing a general duty to minimize such content, along with a duty to act “swiftly” to take down illegal content once a person has alerted the provider to it or the provider otherwise becomes aware of it.

Under the bill, regulated user-to-user services likely to be accessed by children have “duties to protect children’s online safety.” These platforms have a duty to “mitigate and effectively manage the risks of harm” as identified in their children’s risk assessment, and also generally according to different age groups of children. Likewise, these platforms have “[a] duty to operate a service using proportionate systems and processes designed to—

(a) prevent children of any age from encountering, by means of the service, primary priority content that is harmful to children;

(b) protect children in age groups judged to be at risk of harm from other content that is harmful to children (or from a particular kind of such content) from encountering it by means of the service.

The Online Safety Bill defines children as those 17 and younger.

Some regulated user-to-user services have additional duties. The so-called Category 1 services also have “duties to protect adults’ online safety,” including but not limited to informing users how harmful priority content is dealt with and how harmful content identified during the risk assessment is managed. OFCOM will have to set the criteria by which regulated user-to-user services are split into Category 1, Category 2A, and Category 2B.

Regulated user-to-user services also have duties regarding the freedom of expression and privacy. All such entities will have

A duty to have regard to the importance of—

(a)  protecting users’ right to freedom of expression within the law, and

(b)  protecting users from unwarranted infringements of privacy, when deciding on, and implementing, safety policies and procedures.

Category 1 services will have additional duties. In “deciding on safety policies and procedures,” these platforms must assess the impact the policies and procedures might have on the rights to freedom of expression and privacy.

This class of entities also has “duties to protect content of democratic importance.” Category 1 platforms have “[a] duty to operate a service using systems and processes designed to ensure that the importance of the free expression of content of democratic importance is taken into account when making decisions about—

(a) how to treat such content (especially decisions about whether to take it down or restrict users’ access to it), and

(b) whether to take action against a user generating, uploading or sharing such content.

Category 1 services also have a duty to protect journalistic content that comes into play when they moderate this sort of content, especially content posted by users. This section of the bill spells out a more defined process for handling complaints about content moderation decisions, in terms of the duties this class of regulated user-to-user services must meet.

Regulated user-to-user services will need to meet their new duties of reporting and redress. The platforms must have processes that allow users to report content that is illegal, harmful to children, or harmful to adults.

Providers of search services would have similar but distinct duties and “must comply with the following duties in relation to each such service—

(a) the illegal content risk assessment duty (see section 19(1)),

(b) each of the illegal content duties (see section 21),

(c) the duty to protect rights to freedom of expression and privacy (see section 23),

(d) the duties about reporting and redress set out in—

(i) section 24(2)(a), and

(ii) section 24(3) and (5) so far as relating to subsection (4)(a)(i), (b) or (c)(i) of that section, and

(e) each of the record-keeping and review duties (see section 25).

There would also be additional duties for any provider of search services that children are likely to access.

However, the new duties providers of search services must heed do not extend to the content of “recognised news publisher[s].”

The illegal content risk assessments for providers of search services are similar to, but less extensive than, the one for regulated user-to-user services. In the same vein, the duty to conduct children’s risk assessments tracks closely with that for regulated user-to-user services. The point at which a regulated search service must conduct an illegal content risk assessment is the same as for regulated user-to-user services: three months after OFCOM completes its risk assessment or issues its guidance about risk assessments.

Regulated search providers will have “illegal content duties,” including

A duty, in relation to a service, to take proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service.

These entities will also have “[a] duty to operate a service using proportionate systems and processes designed to minimise the risk of individuals encountering the following in or via search results—

(a) priority illegal content;

(b) other illegal content that the provider knows about (having been alerted to it by another person or become aware of it in any other way).

Regulated search providers will also have “duties to protect children’s online safety” that entail taking “proportionate steps to—

(a) mitigate and effectively manage the risks of harm to children in different age groups, as identified in the most recent children’s risk assessment of the service, and

(b) mitigate the impact of harm arising to children in different age groups from content that is harmful to children encountered in or via search results of the service.”

These platforms would also have “A duty to operate a service using proportionate systems and processes designed to—

(a) minimise the risk of children of any age encountering primary priority content that is harmful to children in or via search results;

(b) minimise the risk of children in age groups judged to be at risk of harm from other content that is harmful to children (or from a particular kind of such content) encountering it in or via search results.”

Regulated search services would have to meet the same sorts of duties relating to freedom of expression, privacy, reporting, and redress that regulated user-to-user services would need to heed.

All regulated services (user-to-user services and providers of search services) must assess whether children are likely to access their services, “a key requirement of the safety duties to be imposed on all providers of regulated services in relation to children.” An assessment must be performed for each different regulated service an entity offers in the UK (e.g., Facebook would need to perform assessments for both Facebook and Instagram). The bill also provides “OFCOM must prepare guidance for providers of regulated services to assist them in complying with their duties to carry out assessments.”

Moreover, OFCOM must prepare a code of practice for providers of regulated services “describing recommended steps for the purposes of compliance with duties set out in section 9 or 21 (safety duties about illegal content) so far as relating to terrorism content” and another relating to CSEA content. OFCOM must also prepare codes of practice for the other duties regulated services must meet. OFCOM must submit these codes of practice to the DCMS, which may approve them, but if the Department opts not to, it may present them to Parliament for approval. The codes of practice are important for regulated services because following them will functionally mean compliance with their new duties under the Online Safety Bill.

The bill specifies that “Illegal content” means—

(a) in relation to a regulated user-to-user service, content—

(i) that is regulated content in relation to that service, and

(ii) that amounts to a relevant offence;

(b) in relation to a regulated search service, content that amounts to a relevant offence.”

The DCMS will draft and issue regulations to specify which other offences shall be deemed relevant offences. The bill makes clear a “[r]elevant offence” means—

(a) a terrorism offence (see section 42),

(b) a CSEA offence (see section 43),

(c) an offence that is specified in, or is of a description specified in, regulations made by the Secretary of State (see section 44), or

(d) an offence, not within paragraph (a), (b) or (c), of which the victim or intended victim is an individual (or individuals).

Certain platforms would have to pay annual fees to the British government. OFCOM would annually set a threshold above which platforms would need to pay a certain fee. The DCMS explained that “providers with qualifying worldwide revenue at or above a specified threshold will have an obligation to notify OFCOM and pay an annual fee” and “[w]here providers whose qualifying worldwide revenue is at or above the threshold do not notify or pay a fee, then enforcement action may be taken against them.”
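
A minimal sketch of that notification rule follows, assuming a hypothetical threshold figure (the bill leaves the actual number to be set later):

```python
def must_notify_and_pay_fee(qualifying_worldwide_revenue: float, threshold: float) -> bool:
    """Providers at or above the threshold must notify OFCOM and pay an annual fee."""
    return qualifying_worldwide_revenue >= threshold

# Hypothetical figures purely for illustration; no threshold has actually been set.
print(must_notify_and_pay_fee(qualifying_worldwide_revenue=500_000_000, threshold=250_000_000))
# -> True, so this provider would have to notify OFCOM and pay the fee
```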

OFCOM would receive a suite of new enforcement powers, including:

  • The authority to issue provisional notices of enforcement action that trigger a period of representations (i.e., the service may argue why it has not violated the law)
  • The power to impose confirmation decisions directing action after the period for representations has expired
  • The power to levy financial penalties for violations of “the greater of £18 million and 10% of the person’s qualifying worldwide revenue” (see the sketch after this list)
  • The ability to ask a court for a service restriction order that would stop other entities from doing business with a regulated service in the event of violations in some cases, or to ask a court for an access restriction order “to impede access to a non-compliant regulated service”
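
As a quick arithmetic check on that penalty cap, here is a minimal sketch; the revenue figures are hypothetical and the function name is mine.

```python
def maximum_penalty(qualifying_worldwide_revenue: float) -> float:
    """Cap on a financial penalty: the greater of £18 million and 10% of
    the person's qualifying worldwide revenue."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

print(f"£{maximum_penalty(100_000_000):,.0f}")    # £18,000,000 (the £18m floor applies)
print(f"£{maximum_penalty(1_000_000_000):,.0f}")  # £100,000,000 (10% of £1bn revenue)
```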

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Evren Ozdemir from Pixabay
