Online Disinformation Hearing

A House committee continues its inquiry into Section 230 liability protection and how it may be fueling the flood of online misinformation and disinformation. However, the preferred policy solutions of Republicans and Democrats differ significantly.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

The House Energy and Commerce Committee’s Communications and Technology and Consumer Protection and Commerce Subcommittees held a joint virtual hearing titled “A Country in Crisis: How Disinformation Online is Dividing the Nation” to “examine the role of social media platforms in disseminating disinformation relating to the coronavirus disease of 2019 (COVID-19) pandemic and racial injustice” according to Chair Frank Pallone Jr’s (D-NJ) background memorandum. There was wide-ranging discussion of Section 230 of the Communications Act of 1934 and how the liability shield for technology companies has been used, misused, and abused. While a number of both Republican and Democratic Members agreed that Congress should revisit this statute, there was little agreement on the ultimate goal for amending the 1996 law or the means of doing so.

Communications and Technology Subcommittee Chair Mike Doyle (D-PA) said that “[t]he matter before the Committee today is one of pressing importance, the flood of disinformation online – principally distributed by social media companies – and the dangerous and divisive impact it is having on our nation as we endure the COVID-19 epidemic.” He added that “[i]n the midst of this historic crisis, we are also facing a historic opportunity…[and] [t]ens of millions of Americans are calling for racial justice and systemic changes to end racism and police brutality in the wake of the horrific murders of George Floyd, Breonna Taylor, and countless other Black Americans at the hands of law enforcement.”

Doyle stated

  • But as we march for progress and grapple with this deadly disease, the divisions in our country are growing. While our nation has long been divided, today we see that much of this division is driven by misinformation distributed and amplified by social media companies – the largest among them being Facebook, YouTube, and Twitter.
  • These platforms have become central to the daily lives of many around the globe – and to the way that people get their news, interact with each other, and engage in political discourse.
  • Our nation and the world are facing an unprecedented tsunami of disinformation that threatens to devastate our country and the world. It has been driven by hostile foreign powers seeking to weaken our democracy and divide our people, by those in our country who seek to divide us for their own political gain, and by the social media companies themselves – who have put profits before people as their platforms have become awash in disinformation and their business models have come to depend on the engaging and enraging nature of these false truths.
  • When Congress enacted Section 230 of the Communications Decency Act in 1996, this provision provided online companies with a sword and a shield to address concerns about content moderation and a website’s liability for hosting third party content. And while a number of websites have used 230 for years to remove sexually explicit and overly violent content, they have failed to act to curtail the spread of disinformation. Instead they have built systems to spread it at scale and to monetize the way it confirms our implicit biases.
  • Everyone likes to hear and to read things that confirm what they think is true, and these companies have made trillions of dollars by feeding people what they want to hear. As a result, these platforms have peddled lies about COVID-19, Black Lives Matter, voting by mail, and much, much more.
  • When companies have done the right thing and stepped up to take down disinformation, they have been attacked by those who have benefited from it. Recently, Twitter labelled a number of tweets by President Trump as inaccurate, abusive, and glorifying violence. In response, President Trump issued an Executive Order threatening all social media companies. The Department of Justice has issued similarly thuggish proposals as well. The intent of these actions is clear: to bully social media companies into inaction.

Doyle asserted “[s]ocial media companies need to step up to protect our civil rights, our human rights, and human lives – NOT sit on the sidelines as our nation drowns in a sea of disinformation.” He argued “the future of our democracy is at stake and the status quo is unacceptable.” Doyle said “[w]hile Section 230 has long provided online companies the flexibility and liability protections they need to innovate and to connect people from around the world, it has become clear that reform is necessary if we want to stem the tide of disinformation rolling over our country.”

Communications and Technology Subcommittee Ranking Member Bob Latta (R-OH) said that having access to accurate information on COVID-19 can be a life or death matter, and more and more Americans are turning to internet sources for this sort of information. He claimed that everyone knows that not everything read online can be taken as true due to inaccuracies or outright lies. Latta said that to date companies have worked to police their platforms to remove harmful or inaccurate information online. He said that Congress enacted Section 230 to allow internet companies to do just that. Latta recounted that the law was crafted to allow online platforms such as CompuServe and America Online to proactively take down offensive content without fear of liability for having done the right thing. He declared hateful speech has no place on the internet or online platforms, and Section 230 is the means to ensure this does not happen.

Latta said that many companies use Section 230 for its intended purposes, but it is concerning to see other companies use this statutory language after being pressured by activists or advertisers to make Good Samaritan edits intended to fit their political agenda. He claimed many tech companies have grown and benefitted from Section 230 protection, allowing them to become the true gatekeepers to the internet. Latta said it is often the case these companies do not want to take responsibility for the content within their gates. He stressed that he is neither calling for a repeal of Section 230 nor niche carveouts that would make a patchwork of the statute. Latta said it is unfortunate courts have broadly read the statute to give technology companies wide liability protection without “doing everything possible.” He claimed that Section 230 allows “bad Samaritans to skate by without accountability.”

Latta asserted freedom of speech is a fundamental right upon which the United States is built. He said it is critical that technology companies do not unfairly police political speech online. Latta claimed many problems would be addressed if these companies consistently enforced their terms of service. He argued if online platforms can moderate conservative political speech then surely they can identify and take down racist and hateful speech, too. He stated his hope that reports of political bias at large platforms are not an indication of their resource prioritization, and if so, then Congressional scrutiny should be considered as to how Section 230 is being used in the marketplace. Latta reiterated that repealing Section 230 is not the answer, and he said these companies could use better oversight in how they implement their content moderation practices. He argued Congress should do everything possible to encourage technology companies to use the “sword” of Section 230 to address offensive and lewd speech without censoring political speech. Latta stressed terms of service can and should be enforced equitably and fairly.

Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) noted that last fall she and Doyle held a joint hearing on Section 230, and that subsequently her subcommittee held a hearing on unsafe products and fake reviews found online. She said “[a]t both hearings, industry representatives from Big Tech testified, and we heard that content moderation and consumer protection were really hard, and that industry could always do better.” Schakowsky said “[t]hey made promises and discouraged Congressional action.” She added “[f]ast forward 6 months, add a global health crisis and nationwide protests against police brutality and racial inequality” and asserted “[a]s we will hear today, it’s an understatement to say that industry could still be doing better.”

Schakowsky stated “[t]he harms associated with misinformation and disinformation continue to fall disproportionately on communities of color, who already suffer worse outcomes from COVID-19.” She claimed “[a]ll the while, the President himself is continually spreading dangerous disinformation that Big Tech is all too eager to profit from.” Schakowsky asserted “[n]o matter what the absolutists say, Section 230 is not only about free speech and content moderation…[and] [i]f it were, our conversation today would be different.” She argued “Big Tech uses it as a shield to protect itself from liability when it fails to protect consumers or harms public health, and uses it as a sword to intimidate cities and states when they consider legislation, as Airbnb did in 2016 when New York City was considering regulating its online rental market for private homes.” Schakowsky stated “[t]he truth is, Section 230 protects business models that generate profits off scams, fake news, fake reviews, and unsafe, counterfeit, and stolen products….[and] [t]his was never the intent, and since both courts and industry refuse to change, Congress must act.”

Schakowsky emphasized that “we must do so responsibly.” She asserted “[t]he President’s recent actions are designed to kneecap platforms that fact check him or engage in what he claims is bias against conservative views.” Schakowsky declared “the President is using his position to chill speech and that is wrong.” She stated that “[w]e must encourage content moderation that fosters a safer and healthy online world…[a]nd don’t be fooled by made up claims of bias against conservatives.”

Schakowsky contended “[t]oday, it seems there is less of a bias against conservatives and rather a bias for conservatives.” She asserted “[o]n June 19th, 9 of the 10 top-performing political pages on Facebook were conservative pages, including Donald J. Trump, Donald Trump for President, Ben Shapiro, Breitbart and Sean Hannity.” Schakowsky said that “as the New York Times reported over the weekend, Facebook in particular seems to enjoy a cozy relationship with the Trump Administration, aided by Facebook’s own loyal Trump supporters, Joel Kaplan and Peter Thiel.” She stated “I hope Mr. Kaplan and Mr. Thiel will soon make themselves available to Congress to answer questions about what role they play in information dissemination, and how they balance this incredible responsibility with their extreme partisan ties and views.”

Schakowsky stated “[r]egardless, as the testimony today demonstrates, something needs to be done…[and] [t]he American people are dying and suffering as a result of online disinformation.” She said “I look forward to working with my colleagues to modernize Section 230 and put platforms on a path that helps all Americans.”

Color of Change Senior Campaign Director Brandi Collins-Dexter stated

It is important to ensure free speech is a fundamental right. But in these times we have been too willing to conflate free speech with disinformation, allowing disinformation to be about politics when it should be about the truth and facts that exist above the political fray. It is important that we recognize the need to still set boundaries. We have said that you can’t yell fire in a crowded theater. We have said that freedom of speech is not absolute, with walls that include defamation, incitement, right to dignity and the right to privacy. And we have said both that private companies are not obligated to let any sort of speech reign free in their domain and that freedom of speech does not mean freedom from the consequences of speech.

Collins-Dexter said “[i]n this vein, I urge you to move quickly to fix our democracy before it’s irretrievably broken” and “Congress should:

1.   Convene a series of civil rights-focused hearings with high-level executives and CEOs from all of the major companies, with particular focus on those trafficking in disinformation. It is important that we hear from those making decisions on how policies and practices get implemented and operationalized, and how they not only set the rules but enforce them. Congress should ask these companies how their governance structures ensure they are protecting civil rights, including through board committees and senior personnel with the authority and resources to protect civil rights proactively.

2.   Restore funding for the Office of Technology Assessment in order to help Congress tackle issues such as data privacy and tech election protection, and set up infrastructure that can facilitate deeper investment in US-based innovation and entrepreneurship to combat disinformation and other data-hostile practices. Often, members of Congress and their staff do not have the resources, time, technology background, or expertise to carry out in-depth assessments of the dangers and opportunities of new technologies. This is one meaningful way to generate at scale the congressional knowledge required for key decisions.

3.   Ensure that regulators have every power at their disposal to ensure the safety of consumers and users on tech platforms. The Senate must affirmatively ensure civil rights are protected online. There are a number of ways this could occur: Congress should make robust civil rights protections an essential element of any privacy legislation. Congress could also pass Senator [Kirsten] Gillibrand’s [“Data Protection Act of 2020” (S.3300)], which would create a consumer watchdog agency that is resourced to ensure we all are able to have control and protection of our data and that there is a competitive digital marketplace. The United States remains one of the only democracies in the world operating without a Data Protection Agency, despite the fact that the U.S. is second only to China as the country creating, replicating, and storing the most data.

4.   Most important, Congress should affirmatively empower and resource the Federal Trade Commission to enforce antitrust laws against technology oligarchs. It is clear that the sheer amount of data and information amassed by tech companies, the inability of companies like Facebook and Google to be regulated at scale, and the stakes online, in the voting booth, and on our streets require a serious conversation about, and actionable steps towards, breaking up companies. Last year the Federal Trade Commission levied the largest fine on a company in its history when it fined Facebook $5 billion. Yet not even a year later we are in no better position than we were then—many would say we are worse off.

University of California, Berkeley Professor Hany Farid stated

  • The internet, and social media in particular, is failing us on an individual, societal, and democratic level. Online content providers have prioritized growth, profit, and market dominance over creating a safe and healthy online ecosystem. These providers have taken the position that they are simply in the business of hosting user-generated content and are not and should not be asked to be “the arbiters of truth.” This, however, defies the reality of social media today, where the vast majority of delivered content is actively promoted by content providers based on algorithms that are designed in large part to maximize engagement and revenue. These algorithms have access to highly detailed profiles of billions of users, allowing them to micro-target content in unprecedented ways. These algorithms have learned that divisive, hateful, and conspiratorial content engages users, and so this type of content is prioritized, leading to rampant misinformation and conspiracies and, in turn, increased anger, hate, and intolerance, both online and offline.
  • Many want to frame the issue of content moderation as an issue of freedom of speech. It is not. First, private companies have the right to regulate content on their services without running afoul of the First Amendment, as many routinely do when they ban legal adult pornography. Second, the issue of content moderation should focus not on content removal but on the underlying algorithms that determine what is relevant and what we see, read, and hear online. It is these algorithms that are at the core of the misinformation amplification. Snap’s CEO, Evan Spiegel, for example, recently announced, “We will make it clear with our actions that there is no gray area when it comes to racism, violence and injustice. We simply cannot promote accounts in America that are linked to people who incite racial violence, whether they do so on or off our platform.” But Mr. Spiegel also announced that the company would not necessarily remove specific content or accounts. It is precisely this amplification and promotion that should be at the heart of the discussion of online content moderation.
  • If online content providers prioritized their algorithms to value trusted information over untrusted information, respectful over hateful, and unifying over divisive, we could move from the divisiveness-fueling and misinformation-distributing machine that is social media today to a healthier and more respectful online ecosystem. If advertisers, who are the fuel behind social media, took a stand against online abuses, they could withhold their advertising dollars to insist on real change. Standing in the way of this much-needed change is a lack of corporate leadership, a lack of competition, a lack of regulatory oversight, and a lack of education among the general public. Responsibility, therefore, falls on the private sector, government regulators, and we the general public.

George Washington University Professor Spencer Overton asserted

  • Disinformation on social media presents a real danger to racial equity, voting rights, and democracy. Social media companies currently have the authority in the United States to moderate content to prevent disinformation, civil rights violations, and voter suppression. They should use this authority.
  • While President Trump’s executive order is misguided and constitutionally problematic, a serious debate on possible regulation of social media platforms is warranted in Congress. There are serious questions about whether social media platforms should be required to engage in reasonable content moderation to prevent disinformation that results in online civil rights and other legal violations.
  • While I am loath to open up Section 230 to debate because the provision serves such an important purpose, the status quo is not working. Even after the killing of George Floyd, there is a real question about whether social media companies will address their own systemic shortcomings and embrace civil rights principles and racial equity. Absent a clear declaration accompanied by action, those interested in racial equity and voting rights may have no choice but to seek to amend Section 230.
  • Various platforms—including Facebook, Twitter, and YouTube—have been very effective at preventing other objectionable content, such as obscenity. Similarly, social media companies have been very effective in ensuring truthful information on COVID-19 because they perceived that disinformation on the coronavirus posed a public health risk. Unfortunately, some social media companies do not seem to have internalized the threat disinformation poses to the health of our democracy. The comparative lack of effectiveness in protecting racial equity and the voting rights of all Americans seems to reflect not a lack of capacity, but a lack of will.

DigitalFrontiers Advocacy Principal Neil Fried stated

  • Section 230 was created almost twenty-five years ago in a world of dial-up bulletin boards. In today’s always-on, broadband and mobile environment, it is having harmful, unintended consequences. One way to preserve the benefits of subsection (c)(2)’s content moderation safe harbor, while fixing subsection (c)(1)’s harmful misincentive, would be to restore a duty of care. This could be achieved by requiring platforms to take reasonable, good faith steps to curb illicit activity over their services as a condition of receiving liability protection. Doing so would mean platforms do not enjoy protection from liability when they negligently, recklessly, or knowingly facilitate illicit activity by their users. I would be happy to discuss with the Committee a variety of possible language changes to Section 230 to accomplish that objective.
  • Altering Section 230 in this way would better realize Congress’ goal of encouraging platforms to moderate content, so that we get the best out of the internet and mitigate the worst. It would help combat illicit activity online. And it would ameliorate competition concerns arising from the fact that while many of the platforms’ rivals appropriately have a duty of care regarding third-party conduct, the platforms themselves do not. Such a solution also avoids harms that critics typically ascribe to Section 230 reform:
    • First, it preserves the content moderation safe harbor the platforms say they need to be willing to continue carrying user-generated content, so this approach does not jeopardize online expression.
    • Second, it requires no new regulation of the internet. Regulation typically involves advance, industry-wide restrictions on permissible business models, usually promulgated by an agency. Under this proposal, however, platforms would still have discretion over their business models on the front end. They just would appropriately be held accountable on the back end if they use that discretion poorly. That potential back-end accountability would prompt more responsibility from the start. In essence, it would encourage more “responsibility by design.”
    • Third, it does not rely on the creation of proscriptive content requirements by Congress or an agency, avoiding First Amendment claims.
    • Fourth, it reduces the need to develop issue-specific provisions or to pass separate bills for each and every current or future problem online, thereby minimizing the risk of creating a patchwork of inconsistent requirements.
    • Fifth, the effort needed to meet the duty of care will inherently be proportional to platform size, ensuring smaller platforms are not unreasonably burdened as they try to grow. Indeed, any evaluation of the reasonableness of content moderation efforts will factor in the resources available to a platform. Moreover, smaller platforms and platforms that focus less on user-generated content will have fewer users and uses to moderate.

© Michael Kans, Michael Kans Blog and, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and with appropriate and specific direction to the original content.

Image by Gordon Johnson from Pixabay
