Another Democratic Section 230 Bill

The SAFE TECH Act addresses Section 230 issues that bills introduced in the last Congress largely did not.

Three Democratic Senators have introduced a new bill to reform 47 USC 230 (Section 230), among the first major bills this Congress on the liability protection social media platforms and other technology companies enjoy. Senate Intelligence Committee Chair Mark Warner (D-VA), Senator Mazie Hirono (D-HI), and Senator Amy Klobuchar (D-MN) released the “Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act” (S.299) “to reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination on their platforms,” their press release explains. Warner, Hirono, and Klobuchar made available bill text, a three-page summary, frequently asked questions, and a redline.

The bill reads like a Section 230 wish list for the left, which generally wants to see the immunity of technology companies narrowed in order to create incentives for them to better police certain types of harmful speech.

Of course, this bill and a number of the Republican bills introduced last year come at Section 230 from different, perhaps even conflicting, directions, leaving observers and experts to wonder how, and if, compromise is possible. One must also wonder what sort of Frankenstein bill would emerge from a compromise, whether the incentive structure for technology companies would be distorted, and whether the second- and third-order effects would be ones no one foresees or wants.

Incidentally, a note for Congress. Warner’s office made available a redline version of the legislation (i.e., a version showing changes to existing law in red) that makes understanding the bill much easier. I often make my own redlines, but I humbly suggest that the House and Senate change their rules to require that all bills provide redlines. Anyway, back to the bill.

Section 230, of course, gives the Facebooks, Twitters, Googles, YouTubes, etc. of the world very broad liability protection in that they cannot be sued for the vast majority of content third parties post to their platforms. Consequently, all sorts of harassing and, quite frankly, defamatory material can be posted, and the platforms can decline to remove said content without fear they will face a lawsuit. For example, The New York Times recently published an article about a woman who accused others of criminal and unethical conduct without evidence, and platforms such as Google did nothing for years even though this slander had real-world effects on the objects of her scorn.

The other side of the Section 230 coin, as some have argued, is that significantly narrowing or removing the liability protection would lead platforms to immediately remove any content that might incur litigation, a change that would likely fall hardest on those out of power and without resources.

Warner, Hirono, and Klobuchar asserted:

These changes to Section 230 do not guarantee that platforms will be held liable in all, or even most, cases. Proposed changes do not subject platforms to strict liability; and the current legal standards for plaintiffs still present steep obstacles. Rather, these reforms ensure that victims have an opportunity to raise claims without Section 230 serving as a categorical bar to their efforts to seek legal redress for harms they suffer – even when directly enabled by a platform’s actions or design.

The SAFE TECH Act would change Section 230 in a number of notable ways. First, in a nod to First Amendment issues, the crucial language in current law would be changed from “information” to “speech,” setting the stage for a world in which speech protected under the First Amendment would continue to be protected under Section 230. Hence, Twitter could not be sued if someone claims President Joe Biden is an idiot or has implemented the wrong policy on an issue.

Moreover, language appended to the last clause in Section 230(c) would also move certain speech outside the current legal shield. Any speech that the provider or user has been paid to make available could lead to litigation, for the provider would no longer be immune for this class of speech. And so, if the Proud Boys paid a troll farm to slur Senator Ron Wyden (D-OR) on the basis of his Jewish heritage, say by claiming his allegiance is to Israel and not the United States, any platform hosting this content could be sued by Wyden for defamation, among other possible grounds.

Likewise, platforms would no longer have liability protection for advertisements others pay for and place.

The SAFE TECH Act makes clear that platforms can seek to fend off lawsuits through an affirmative defense by proving they are not the entity that created or disseminated the offensive information in question. However, the bill would place the burden on platforms to prove this defense by a preponderance of the evidence (i.e., more likely than not), the standard used in most civil actions. Placing the burden on platforms in this way suggests the intent is to give them an incentive to better record who posts what. A likely side effect, or second-order effect, is that it may become easier to track down those posting abusive or illegal content if it becomes much harder to post anonymously.

At present, platforms have so-called Good Samaritan liability protection that bars lawsuits against them for moderating and even taking down content. The SAFE TECH Act would pare back that protection in cases where a court has ordered the platform to remove or make unavailable content through an injunction issued on the basis of irreparable harm. Moreover, a platform’s compliance with such an injunctive order could not give rise to a lawsuit, and so platforms would be shielded from retaliatory litigation by the party who posted the content.

Like Representative Yvette Clarke’s (D-NY) discussion draft, the “Civil Rights Modernization Act of 2021,” (see here for more analysis), the SAFE TECH Act removes Section 230 liability in lawsuits alleging the content posted on a platform violates a federal or state civil rights law. This provision is short and worth quoting in full:

Nothing in this section shall be construed to limit, impair, or prevent any action alleging discrimination on the basis of any protected class, or conduct that has the effect or consequence of discriminating on the basis of any protected class, under any Federal or state law.

The language reaching conduct that has “the effect or consequence” of discriminating will almost certainly be a non-starter for Republicans, most of whom object to making conduct illegal when it lacks discriminatory intent but results in de facto discrimination. Keeping in mind, as always, that at least 10 Republican votes would be needed to pass such a bill, it seems likely this language would be left on the cutting room floor. Still, this is the sort of language left-wing and Democratic advocates would like to see, and the sponsors may have included it knowing it would probably not survive Republican objections. Giving the other party a victory in removing language like this may allow the primary parts of the bill to be enacted.

There are other carve outs from the Section 230 liability shield. First, platforms could be sued under federal or state laws barring “stalking, cyberstalking, harassment, cyberharassment, or intimidation based in whole or in part on sex (including sexual orientation and gender identity), race, color, religion, ancestry, national origin, or physical or mental disability.” Right now, Section 230 stops people from suing, say, Reddit for harassing material. In the aforementioned Times horror story, Pinterest and WordPress removed slanderous and libelous content only after a reporter contacted them, while Google ultimately decided not to do so. If this provision of the SAFE TECH Act becomes law, such platforms would face lawsuits for failing to take down such material. I wonder, however, whether the terms used in this provision would cover child pornography, non-consensual pornography, revenge pornography, and similar content. Perhaps those types of content would be considered harassment or cyberharassment.

Another carve out would allow non-U.S. nationals to sue in federal courts alleging injuries on the basis of content posted on a platform under the Alien Tort Claims Act (ATCA) (28 USC 1350), which is usually used to allege violations of human rights. Warner, Hirono, and Klobuchar cite “the survivors of the Rohingya genocide” in Myanmar as people who would be able to sue platforms over the inflammatory material some refused to take down that fed the genocidal activities of the Burmese Army and government. Facebook, in particular, was flagged for being unresponsive to requests to take down this sort of content.

Finally, if a person brings a civil suit for a wrongful death, Twitter, Facebook, Parler, Reddit, and others could be sued for actions they took, or did not take, that may have contributed to or led to the death in question.

Another provision would address the use of Section 230 as a defense against antitrust actions, a novel deployment of a provision meant to protect platforms from lawsuits about the content others post:

Nothing in this section shall be construed to prevent, impair, or limit any action brought under State or Federal antitrust laws.

In the FAQ, Warner, Hirono, and Klobuchar explained the rationale for this language:

Internet platforms and other tech companies have pushed the bounds of Section 230 in an effort to immunize themselves from all manner of activity. Just last year, a leading cyber-security firm claimed Section 230 immunized it against a claim it had engaged in anticompetitive conduct to harm a competitor and pursued its claim all the way to the Supreme Court.

This may well be an issue on which Democrats and Republicans can agree, as evidenced by the Trump Administration’s Department of Justice recommendations on reforming Section 230, which state:

A fourth category of reform is to make clear that federal antitrust claims are not, and were never intended to be, covered by Section 230 immunity.  Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players.  It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

Finally, another approach put forth by a key Democratic stakeholder may prove preferable to the SAFE TECH Act, for it homes in on the process by which platforms moderate content. These platforms would need to publish clear and fair processes and then live by them. Moreover, this bill would require platforms to take down content as ordered by courts.

Last summer, Senator Brian Schatz (D-HI) and then Senate Majority Whip John Thune (R-SD) introduced the “Platform Accountability and Consumer Transparency (PACT) Act” (S.4066) (see here for more analysis). According to Schatz and Thune’s press release, the PACT Act would strengthen transparency in the process online platforms use to moderate content and hold those companies accountable for content that violates their own policies or is illegal. Schatz and Thune claimed the “PACT Act creates more transparency by:

  • Requiring online platforms to explain their content moderation practices in an acceptable use policy that is easily accessible to consumers;
  • Implementing a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized; and
  • Promoting open collaboration and sharing of industry best practices and guidelines through a National Institute of Standards and Technology-led voluntary framework.

They asserted “[t]he PACT Act will hold platforms accountable by:

  • Requiring large online platforms to provide process protections to consumers by having a defined complaint system that processes reports and notifies users of moderation decisions within 14 days, and allows consumers to appeal online platforms’ content moderation decisions within the relevant company;
  • Amending Section 230 to require large online platforms to remove court-determined illegal content and activity within 24 hours; and
  • Allowing small online platforms to have more flexibility in responding to user complaints, removing illegal content, and acting on illegal activity, based on their size and capacity.

Schatz and Thune stated that “[t]he PACT Act will protect consumers by:

  • Exempting the enforcement of federal civil laws from Section 230 so that online platforms cannot use it as a defense when federal regulators, like the Department of Justice and Federal Trade Commission, pursue civil actions for online activity;
  • Allowing state attorneys general to enforce federal civil laws against online platforms that have the same substantive elements of the laws and regulations of that state; and
  • Requiring the Government Accountability Office to study and report on the viability of an FTC-administered whistleblower program for employees or contractors of online platforms.

© Michael Kans, Michael Kans Blog and, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and with appropriate and specific direction to the original content.

