The Administration sends its Section 230 reform bill to Congress as a committee is gearing up to consider a second such bill in a matter of months.
There has been increased activity among Republicans in the Administration and Congress regarding the liability shield provided to technology companies in 47 U.S.C. 230 (Section 230 of the Communications Act of 1934). The flurry of bills and events on Section 230 seems designed to appeal to the Republican base, to drive turnout for the election, and to set the narrative in the event of a presidential election too close to call on Election Day. Many Republicans would look to deploy a narrative on Facebook, Twitter, and other platforms that Democrats and former Vice President Joe Biden are trying to steal the election by insisting all ballots be counted, or some variation thereof. Many Republicans have long been laying the groundwork for the claim that these platforms are biased against conservatives and their viewpoints, as supposedly evidenced by the liberal politics of those in technology and by content moderation that has resulted in the removal of lies and hate speech posted by those on the far right. Of course, a few of President Donald Trump’s Tweets and Facebook posts have been called out for being untrue or for violating terms of service by inciting violence. Moreover, these platforms have put in place measures anticipating a presidential election too close to call, and Facebook has dedicated ad space to encouraging Americans to register to vote, something Trump has objected to. Finally, another reason the Section 230 activity may be for show is that any legislative changes must be passed by the House, which Democrats of course control, and they are leaning toward different changes to Section 230 to combat hate speech and the tendency of social media to speed the spread of disinformation, misinformation, and lies throughout cyberspace.
Last week, in concert with state attorneys general being in Washington to meet with the United States (U.S.) Department of Justice (DOJ) regarding the Administration’s plans on the Google antitrust suit, the White House held an event titled a “Discussion with State Attorneys General on Protecting Consumers from Social Media Abuses.” On the same day, the DOJ transmitted legislative language to Congress on how Section 230 should be reformed. In attendance at the event with Trump were the following:
U.S. Attorney General Bill Barr; Senator Josh Hawley (R-MO); and State Attorneys General Ken Paxton of Texas, Mark Brnovich of Arizona, Jeff Landry of Louisiana, Lynn Fitch of Mississippi, Eric Schmitt of Missouri, Alan Wilson of South Carolina, Sean Reyes of Utah, Leslie Rutledge of Arkansas, and Patrick Morrisey of West Virginia.
In a sign of the partisan tenor of the event, it bears note that all the aforementioned officials are Republicans. It is not clear whether Democrats were not invited or none chose to attend the event. Given Trump’s desire to avoid anything approaching criticism to his face, it is more likely the former is the case.
In any event, Trump made the following remarks:
- In recent years, a small group of powerful technology platforms have tightened their grip over commerce and communications in America. They’ve used this power to engage in unscrupulous business practices while simultaneously waging war on free enterprise and free expression.
- At the urging of the radical left, these platforms have become intolerant of diverse political views and abusive toward their own users. And I think we could say as abusive as you could possibly be, in some cases. Right, Josh? You’ve seen that.
- For example, Twitter routinely restricts posts expressing conservative views, even from a President of the United States, while at the same time it allows Iran’s Supreme Leader to freely spew vile, anti-Semitic hate and even death threats.
- Every year, countless Americans are banned, blacklisted, and silenced through arbitrary or malicious enforcement of ever-shifting rules. Some platforms exploit their power, acquire vast sums of personal data without consent, or rig their terms of service to coerce, mislead, or defraud. And we’ve seen it so many times.
- In May, I directed Attorney General Barr to work with the state attorneys general as they enforce the state laws against deceptive business practices.
- Today’s discussion will focus on concrete legal steps to protect an open Internet and a free society, including steps to ensure the social media companies cannot deceive their users with hidden efforts to manipulate the spread of information. This is a very big subject. We’re going to be discussing it; we’ve been discussing it. And over a fairly short period of time, I suspect, we’ll come to a conclusion.
It bears note that Trump and other Republicans continue to make the argument that conservative viewpoints are disadvantaged by social media platforms without furnishing evidence beyond anecdotes. In fact, the opposite may be true, as media accounts have reported that Facebook, for example, has gone out of its way not to take down conservative content that may well violate the platform’s terms of service. This seems to be in response to political pressure applied to its lobbyists in Washington by Republicans. Moreover, one study shows that political content is a minority of the content on Twitter and that the platform hosts a diversity of opinions. Nonetheless, polling released by the Pew Research Center shows the Republicans’ message dovetails with public perception of how social media companies manage their sites. Whether the message is driving public perception is another question these data do not address.
In its press release, the DOJ claimed:
The draft legislative text implements reforms that the Department of Justice deemed necessary in its June Recommendations and follows a yearlong review of the outdated statute. The legislation also executes President Trump’s directive from the Executive Order on Preventing Online Censorship.
In its cover letter to the House and Senate, the DOJ asserted:
The beneficial role Section 230 played in building today’s internet, by enabling innovations and new business models, is undisputed. It is equally undisputed, however, that the internet has drastically changed since 1996. Many of today’s online platforms are no longer nascent companies but have become titans of industry. Platforms have also changed how they operate. They no longer function as simple forums for posting third-party content, but use sophisticated algorithms to suggest and promote content and connect users. Platforms can use this power for good to promote free speech and the exchange of ideas, or platforms can abuse this power by censoring lawful speech and promoting certain ideas over others.
The DOJ provided both a red-line showing how it proposes to amend Section 230 relative to the statute as currently enacted, and a section-by-section analysis explaining these changes.
First, the legislation would change the standard in Section 230(c)(2) under which platforms may remove or take down content, from material the provider broadly “considers” objectionable to material about which the provider has an objectively reasonable belief. The latter standard will need to be fleshed out by courts, but it seems a much higher bar for a Twitter or Facebook to clear in order to legally police the content on its site. At present, such platforms need only act in “good faith” regarding material they consider “to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The DOJ’s language would also eliminate “objectionable” material as a category, presumably because that term gives platforms too much leeway. Instead, the revised Section 230 language would be:
any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user has an objectively reasonable belief is obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful, whether or not such material is constitutionally protected
Note the DOJ has explicitly added the terms “promoting terrorism or violent extremism,” “promoting self-harm,” and “unlawful.”
It is also important to note the DOJ is proposing an actual definition of “good faith.” The Trump Administration and other Republicans have argued the lack of such a definition has allowed the platforms to abuse their right to moderate content, particularly to the detriment of conservatives and their viewpoints. In fact, part of the EO is to get the Federal Communications Commission (FCC) to create a definition through the regulatory process.
The DOJ is proposing the following:
To restrict access to or availability of specific material “in good faith,” an interactive computer service provider must—
- have publicly available terms of service or use that state plainly and with particularity the criteria the service provider employs in its content-moderation practices;
- restrict access to or availability of material consistent with those terms of service or use and with any official representations or disclosures regarding the service provider’s content-moderation practices;
- not restrict access to or availability of material on deceptive or pretextual grounds, or apply its terms of service or use to restrict access to or availability of material that is similarly situated to material that the provider intentionally declines to restrict; and
- supply the provider of the material with timely notice describing with particularity the provider’s reasonable factual basis for the restriction of access and a meaningful opportunity to respond, unless a law enforcement agency has asked that such notice not be made, or a provider reasonably believes that the material relates to terrorism or other criminal activity, or that such notice would risk imminent harm to others.
This definition would impose significant compliance burdens on platforms and may even have the unintended consequence of cementing the current large players as the dominant players, for smaller entrants may not be able to easily devote the resources needed to meet them. The DOJ is likely operating on the theory that forcing platforms to live by their terms of service would end discrimination against conservatives, but it may instead prompt platforms to rewrite their terms of service to be far more stringent than those in use today, stringent enough to screen out untrue or inflammatory content. For example, terrorism and violent extremism are categories of content platforms may take down consistent with their terms of service without fear of being sued. What if such platforms start labeling content calling for violence against minorities and the left wing as terrorism or extremism?
The revisions to Section 230(c)(1) would considerably narrow the circumstances under which a platform enjoys liability protection for content on its platform. The new language would end the blanket liability protection provided to providers and would subsume it under the liability protection provided in revised (c)(2). Therefore, liability protection would only be granted to Facebook, Twitter, and others when they moderate content on their platforms according to the new, higher standards.
New language would add a “Bad Samaritan” provision making clear that platforms could not assert any Section 230 liability protection for hosting content they know, or should know, is illegal. It seems child pornography was on the DOJ’s mind, among other possible crimes, for media reports have argued platforms are not doing all they can to find and remove known images of exploited children. Revenge porn may have also inspired this language. Be that as it may, this change is limited to federal and state civil actions and state criminal actions. Federal criminal violations are not protected by the Section 230 liability shield and may be prosecuted right now.
There are also new provisions removing liability protection for platforms that are aware of illegal content or actions and fail to remove or restrict access to it, report it to law enforcement agencies, and preserve evidence. Again, this is limited to federal and state civil actions and state criminal actions.
The DOJ is also taking aim at defamation online. A new section would mandate the provision of a mechanism by which defamatory content could be reported, presumably to pressure platforms to remove such content.
Interestingly, some of the proposed changes dovetail with the rationale put forth by the DOJ and other Trump Administration agencies as to why hardware, software, and app developers should stop offering end-to-end encryption and instead develop technological means to secure data in transit and at rest while allowing law enforcement agencies to access potentially illegal content. The DOJ has beaten the drum that encryption is allowing terrorists, exploiters of children, and illegal pornographers to elude detection and apprehension. So, too, the proposed changes to Section 230 are being framed as addressing the problems of “going dark” and the “dark web.”
In advance of a markup this week, Senate Judiciary Committee Chair Lindsey Graham (R-SC) released legislation that would further alter Section 230. In late July, the Senate Judiciary Committee met, amended, and reported out the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398), a bill that would change Section 230 by narrowing the liability shield and potentially making online platforms liable to criminal and civil actions for having child sexual abuse materials on their platforms. The bill as introduced in March was changed significantly when a manager’s amendment was released and then further changed at the markup. The Committee reported out the bill unanimously, sending it to the full Senate, perhaps signaling the breadth of support for the legislation. It is possible this could come before the full Senate this year. If passed, the EARN IT Act of 2020 would represent the second piece of legislation to change Section 230 in the last two years, following enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164). There is, at present, no House companion bill.
However, the “Online Content Policy Modernization Act” tacks additional Section 230 reform onto a bill seeking to reform copyright disputes. The new language would similarly reform Section 230 regarding “Good Samaritan” blocking, and, like the DOJ proposal, would limit liability protection for content moderation and removal. This provision would also change the “considers to be” language to an objectively reasonable standard. Likewise, it would remove “objectionable” as grounds a platform could use to take down material without fear of being sued, but would add “promoting self-harm, promoting terrorism, or unlawful” materials as those that can be taken down without threat of a suit. Unlike the DOJ bill, this one makes clear that when platforms edit, modify, or editorialize about content posted by others, they are acting as “information content providers” and are not protected from litigation.
Of course, there have been a number of other Section 230 reform bills and other Trump Administration actions, including a May 2020 executive order (see here for more analysis).
© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.