More of the Same

Twitter, Google, and Facebook’s CEOs are questioned about disinformation in a contentious hearing that did not necessarily make the Section 230 reform landscape much clearer.

First, subscribe to my new newsletter, the Wavelength, to get all the content you have come to enjoy on my blog. I’m a former lobbyist and Congressional staffer who understands the politics, policy, and law of technology. I cover the U.S. and much of the rest of the world, across much of the breadth of the technology world.

The House Energy and Commerce Committee’s Communications and Technology and Consumer Protection and Commerce Subcommittees held a much-awaited hearing with the chief executive officers of Google, Facebook, and Twitter about online disinformation and the role of social media platforms.

Twitter

Pick your metaphor. Almost all heat, very little light. Go Shakespearean: sound and fury signifying next to nothing. The parable of the blind men and the elephant. One man’s trash is another’s treasure. A stampede at the watering hole, if the animals are Members and the water is the cameras. Please answer yes or no.

Cocktail Party

Everyone agrees there are problems with disinformation online, but the political parties are sharply split over what constitutes problematic disinformation. Accordingly, the remedies differ. And, as shocking as this may be, there are Members who are not acting in good faith and are posturing for the cameras. I think it would be an excellent inquiry to cross-check Member statements against political donations from “Big Tech.” Having said that, the incentives for these companies do not lend themselves to vigorous policing of content that violates their terms of service, for much of that content drives engagement and hence revenue. Political pressure does create an incentive to do something, which may well be why there has been an increased focus by these platforms of late.

Meeting

Congress is thinking about changing the liability shield for technology companies again, but none of the key Members of the committee have tipped their hands by introducing legislation. Ostensibly, the change made two years ago was intended to stop the sexual exploitation of children online, with the unintended effect of pushing legitimate, often legal sex work off of websites like Craigslist and others. In this Congress, a handful of bills have been introduced to reform 47 U.S.C. 230 (Section 230), and last year, in remarks to The New York Times editorial board, now President Joe Biden called for a complete repeal of Section 230. Former President Donald Trump vetoed the FY 2021 National Defense Authorization Act (P.L. 116-283) because it did not repeal Section 230, but this reason was likely incidental to Trump’s actual purpose, as he was engaged in a spree of granting pardons to former associates. Democrats generally are alarmed about the extremist content that egged on Trump supporters to storm the Capitol on 6 January 2021, targets women and minorities, spreads disinformation about politics, and casts doubt on public health measures such as mask wearing, distancing, and vaccination. The title of the hearing summarizes the Democratic position: “Disinformation Nation: Social Media’s Role in Promoting Extremism and Disinformation.” On the other hand, Republicans decry the anti-conservative bias allegedly afflicting their content and viewpoints (despite evidence to the contrary) and so-called “cancel culture.” They also take issue with the removal of former President Donald Trump from a number of platforms while Iran’s Ayatollah Ali Khamenei can engage in conduct much worse than Trump’s. However, many Republicans worked from the same script in focusing on the mental health effects on children of using social media and being online. This represents a new emphasis, as Republicans have often stressed the alleged censorship of conservatives.

Geek Out

The Democratic staff drafted and issued a memorandum on online disinformation before the hearing that lays out the Democratic position on the lies, misinformation, and disinformation available online.

Chair Frank Pallone Jr. (D-NJ) contended (watch his opening statement or read his full statement):

  • It is now painfully clear that neither the market nor public pressure will force these social media companies to take the aggressive action they need to take to eliminate disinformation and extremism from their platforms. And, therefore, it is time for Congress and this Committee to legislate and realign these companies’ incentives to effectively deal with disinformation and extremism.
  • Today, our laws give these companies, and their leaders, a blank check to do nothing. Rather than limit the spread of disinformation, Facebook, Google, and Twitter have created business models that exploit the human brain’s preference for divisive content to get Americans hooked on their platform, at the expense of the public interest. It isn’t just that social media companies are allowing disinformation to spread – it’s that, in many cases, they are actively amplifying and spreading it themselves. Fines, to the extent they are levied at all, have simply become the cost of doing business.
  • The dirty truth is that they are relying on algorithms to purposefully promote conspiratorial, divisive, or extremist content so they can rake in the ad dollars. This is because the more outrageous and extremist the content, the more engagement and views these companies get from their users. More views equal more money.
  • It’s crucial to understand that these companies aren’t just mere bystanders – they are playing an active role in the meteoric rise of disinformation and extremism.
  • So when a company is actually promoting this harmful content, I question whether existing liability protections should apply.
  • Members on this Committee have suggested legislative solutions and introduced bills. The Committee is going to consider all these options so that we can finally align the interests of these companies with the interests of the public and hold the platforms, and their CEOs, accountable when they stray.

Ranking Member Cathy McMorris Rodgers (R-WA) claimed (watch her opening statement and read her full written statement):

  • Big Tech needs to be exposed and completely transparent for what you are doing to our children so parents like me can make informed decisions. We also expect Big Tech to do more to protect children because you haven’t done enough. Big Tech has failed to be good stewards of your platforms. I have two daughters and a son with a disability.
  • Let me be clear, I do not want you defining what is true for them. I do not want their future manipulated by your algorithms. I do not want their self-worth defined by the engagement tools you’ve built to own their attention. I do not want them to be in danger from what you’ve created. I do not want their emotions and vulnerabilities taken advantage of so you can make more money and have more power.
  • I’m sure most of my colleagues on this committee—who are also parents and grandparents—feel the same way. Over 20 years ago, before we knew what Big Tech would become, Congress gave you liability protections.
  • I want to know why do you think you still deserve those protections today? What will it take for your business model to stop harming children? I know I speak for millions of moms when I say we need these answers and we will not rest until we get them.

Subcommittee Chair Jan Schakowsky (D-IL) stated (watch her opening statement and read her full written statement):

  • What our witnesses need to take away from this hearing is that self-regulation has come to the end of its road, and that this democratically elected body is prepared to move forward with legislation and regulation.
  • The regulation we seek should not attempt to limit Constitutionally protected free speech, but it must hold platforms accountable when they are used to incite violence and hatred – or, as in the case of the Covid pandemic, spread misinformation that costs thousands of lives.
  • The witnesses here today have demonstrated time and again that promises to self-regulate don’t work. They must be held accountable for allowing disinformation and misinformation to spread across their platforms, infect our public discourse, and threaten our democracy.
  • That’s why I’ll be introducing the Online Consumer Protection Act, which I hope will earn bipartisan support.

Subcommittee Ranking Member Gus Bilirakis (R-FL) stated (watch his opening statement or read his full statement):

  • People want to use your services, but they suspect your coders are designing what they think we should see and hear, by keeping us online longer than ever, and all with the purpose to polarize and monetize us, disregarding any consequences for the assault on our inherent freedoms.
  • So I don’t want to hear about how changing current law is going to hurt start-ups, because I’ve heard directly from them accusing you of anti-competitive tactics. None of us want to damage entrepreneurs.
  • What I do want to hear is what you will do to bring our country back from the fringes and stop the poisonous practices that drive depression, isolation, and suicide, and instead cooperate with law enforcement to protect our citizens. Our kids are being lost while you say you will “try to do better” as we’ve heard countless times already. We need true transparency and real change, not empty promises.
  • The fear you should have coming into this hearing today isn’t that you’re going to get yelled at by a Member of Congress, it’s that our committee knows how to get things done when we come together. We can do this with you or without you. And we will.

Subcommittee Chair Mike Doyle (D-PA) stated (watch his opening statement or read his full statement):

  • You can take down this content, you can reduce division, you can fix this – but you choose not to. We saw your platforms remove ISIS terrorist content; we saw you tamp down on COVID misinformation at the beginning of the pandemic; we have seen disinformation drop when you have promoted reliable news sources and removed serial disinformation super spreaders from your platforms.
  • You have the means, but time after time, you are picking engagement and profit over the health and safety of your users, our nation, and our democracy. These are serious issues, and to be honest – it seems like you all just shrug off billion-dollar fines.
  • Your companies need to be held accountable – we need rules, regulations, technical experts in government, and audit authority of your technologies. Ours is the committee of jurisdiction, and we will legislate to stop this. The stakes are simply too high.

Subcommittee Ranking Member Bob Latta (R-OH) asserted (watch his opening statement or read his full statement):

  • As Ranking Member on the Subcommittee for Communications and Technology, we have oversight over any change made to Section 230 of the Communications Decency Act. Section 230 provides you with liability protection for content moderation decisions made in “good faith”. Based on recent actions, however, it is clear that your definition of “good faith” moderation includes censoring viewpoints you disagree with and establishing a faux independent appeals process that does not make its content moderation decisions based on American principles of free expression. I find that highly concerning.
  • I look at today’s hearing as an important step in reconsidering the extent to which Big Tech deserves to retain their significant liability protection.

Facebook CEO Mark Zuckerberg stated (watch his opening statement or read his full statement):

  • In my testimony above, I laid out many of the steps we have taken to balance important values including safety and free expression in democratic societies. We invest significant time and resources in thinking through these issues, but we also support updated Internet regulation to set the rules of the road. One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.
  • Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing—sometimes for contradictory reasons—that the law is doing more harm than good.
  • Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.
  • We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.
  • Definitions of an adequate system could be proportionate to platform size and set by a third-party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.
  • In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

Google CEO Sundar Pichai stated (watch his opening statement or read his full statement):

  • These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.
  • Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.
  • Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.
  • Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230—including calls to repeal it altogether—would not serve that objective well. In fact, they would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.
  • We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.
  • I look forward to sharing more about our approach with you today, and working together to create a path forward for the web’s next three decades.

Twitter CEO Jack Dorsey stated (watch his opening statement or read his full statement):

  • We also recognize that addressing harms associated with misinformation requires innovative solutions. Content moderation in isolation is not scalable, and simply removing content fails to meet the challenges of the modern Internet. This is why we are investing in two experiments — Birdwatch and Bluesky. Both are aimed at improving our efforts to counter harmful misinformation.
  • In January, we launched the “Birdwatch” pilot, a community-based approach to misinformation. Birdwatch is expected to broaden the range of voices involved in tackling misinformation, and streamline the real-time feedback people already add to Tweets. We hope that engaging diverse communities here will help address current deficits in trust for all. More information on Birdwatch can be found here. We expect data related to Birdwatch will be publicly available at Birdwatch Guide, including the algorithm codes that power it.
  • Twitter is also funding Bluesky, an independent team of open source architects, engineers, and designers, to develop open and decentralized standards for social media. This team has already created an initial review of the ecosystem around protocols for social media to aid this effort. Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice. These standards will support innovation, making it easier for startups to address issues like abuse and hate speech at a lower cost. Since these standards will be open and transparent, our hope is that they will contribute to greater trust on the part of the individuals who use our service. This effort is emergent, complex, and unprecedented, and therefore it will take time. However, we are excited by its potential and will continue to provide the necessary exploratory resources to push this project forward.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Alberto De quevedo on Unsplash
