The Section 230 hearing was largely political theater.
The Senate Commerce, Science, and Transportation Committee held its long-awaited hearing ostensibly on 47 U.S.C. 230 (Section 230) with the CEOs of Facebook, Google, and Twitter. I suppose the title of the hearing should have told us all we need to know about the approach of the Republican majority: “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” And, oddly enough, there are likely areas where Republicans can agree with Democrats in terms of less desirable outcomes flowing perhaps from Section 230 immunity. For example, The New York Times and other outlets have highlighted how poorly technology platforms perform at identifying and taking down child pornography or non-consensual pornography, and I would think tech’s staunchest supporters would concede there is room for improvement. However, this hearing seemed conceived and executed to perpetuate the Republican narrative that technology companies are biased against them and their content. And, to amplify this message, Republican Senators crafted novel arguments (e.g. Senator Mike Lee (R-UT) claiming that a platform labeling a false or misleading statement is censorship) or all but yelled at the CEOs (e.g. Senator Ted Cruz (R-TX) positively shouting at Twitter head Jack Dorsey).
Chair Roger Wicker (R-MS) again propounded the position that technology companies should not be able to moderate, correct, label, or block political content, especially conservative material. In essence, Republicans seem to be making the case that Twitter, Facebook, Google, and others have become the de facto public square for 21st Century America, and just as a person marching with a sign in an actual town square cannot be stopped, so, too, should it be online. This argument conveniently ignores the long-established fact that the First Amendment constrains government regulation or suppression of speech, not private regulation or suppression, which is largely beyond its reach. Also, Republicans are taking the paradoxical position that the government should be able to dictate to or bully private companies into complying with their desired policy outcome even as they purport to favor free market economics. It is also telling that Wicker only wants to change Section 230 and not do away with it entirely. A cynic might observe that so long as the social media platforms are giving conservatives the treatment they want, the many other extensively documented abuses and harassment women and people of color face online do not seem to be important enough to address. Moreover, Wicker had little to say about the tide of lies, misinformation, and disinformation flooding the online world. Finally, Wicker relied only on anecdotal evidence that conservatives and Republicans are somehow being muted or silenced at a greater rate than liberals and Democrats, for the very good reason that no evidence from reputable research supports this argument. The data we have show conservative material flourishing online.
In his opening statement, Wicker claimed:
- We have convened this morning to continue the work of this Committee to ensure that the internet remains a free and open space, and that the laws that govern it are sufficiently up to date. The internet is a great American success story, thanks in large part to the regulatory and legal structure our government put in place. But we cannot take that success for granted. The openness and freedom of the internet are under attack.
- For almost 25 years, the preservation of internet freedom has been the hallmark of a thriving digital economy in the United States. This success has largely been attributed to a light-touch regulatory framework and to Section 230 of the Communications Decency Act – often referred to as the “26 words that created the internet.”
- There is little dispute that Section 230 played a critical role in the early development and growth of online platforms. Section 230 gave content providers protection from liability to remove and moderate content that they or their users consider to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This liability shield has been pivotal in protecting online platforms from endless and potentially ruinous lawsuits. But it has also given these internet platforms the ability to control, stifle, and even censor content in whatever manner meets their respective “standards.” The time has come for that free pass to end.
- After 24 years of Section 230 being the law of the land, much has changed. The internet is no longer an emerging technology. The companies before us today are no longer scrappy startups operating out of a garage or a dorm room. They are now among the world’s largest corporations, wielding immense power in our economy, culture, and public discourse – immense power. The applications they have created are connecting the world in unprecedented ways, far beyond what lawmakers could have imagined three decades ago. These companies are controlling the overwhelming flow of news and information that the public can share and access.
- One noteworthy example occurred just two weeks ago after our subpoenas were unanimously approved; the New York Post – the country’s fourth largest newspaper – ran a story revealing communications between Hunter Biden and a Ukrainian official. The report alleged that Hunter Biden facilitated a meeting with his father, Joe Biden, who was then the Vice President of the United States. Almost immediately, both Twitter and Facebook took steps to block or limit access to the story. Facebook, according to its Policy Communications Manager, began “reducing its distribution on [the] platform” pending a third-party fact check. Twitter went beyond that, blocking all users — including the House Judiciary Committee — from sharing the article on feeds and through direct messages. Twitter even locked the New York Post’s account entirely, claiming the story included “hacked materials” and was “potentially harmful.”
- It is worth noting that both Twitter and Facebook’s aversion to hacked materials has not always been so stringent. For example, when the President’s tax returns were illegally leaked, neither company acted to restrict access to that information. Similarly, the now-discredited Steele dossier was widely shared without fact checking or disclaimers. This apparent double standard would be appalling under normal circumstances. But the fact that selective censorship is occurring in the midst of the 2020 election cycle dramatically amplifies the power wielded by Facebook and Twitter.
- Google recently generated its own controversy when it was revealed that the company threatened to cut off several conservative websites, including the Federalist, from their ad platform. Make no mistake, for sites that rely heavily on advertising revenue for their bottom line, being blocked from Google’s services – or “demonetized” – can be a death sentence.
- According to Google, the offense of these websites was hosting user-submitted comment sections that included objectionable content. But Google’s own platform, YouTube, hosts user-submitted comment sections for every video uploaded. It seems that Google is far more zealous in policing conservative sites than its own YouTube platform for the same types of offensive and outrageous language.
- It is ironic that, when the subject is net neutrality, technology companies, including Facebook, Google, and Twitter, have warned about the grave threat of blocking or throttling the flow of information on the internet. Meanwhile, these same companies are actively blocking and throttling the distribution of content on their own platforms and are using protections under Section 230 to do it. Is it any surprise that voices on the right are complaining about hypocrisy or, even worse, anti-democratic election interference?
- These recent incidents are only the latest in a long trail of censorship and suppression of conservative voices on the internet. Reasonable observers are left to wonder whether big tech firms are obstructing the flow of information to benefit one political ideology or agenda.
- My concern is that these platforms have become powerful arbiters of what is true and what content users can access. The American public gets little insight into the decision-making process when content is moderated, and users have little recourse when they are censored or restricted. I hope we can all agree that the issues the Committee will discuss today are ripe for thorough examination and action.
- I have introduced legislation to clarify the intent of Section 230’s liability protections and increase the accountability of companies who engage in content moderation. The “Online Freedom and Viewpoint Diversity Act” would make important changes to “right-size” the liability shield and make clear what type of content moderation is protected. This legislation would address the challenges we have discussed while still leaving fundamentals of Section 230 in place.
- Although some of my colleagues on the other side of the aisle have characterized this as a purely partisan exercise, there is strong bipartisan support for reviewing Section 230. In fact, both presidential candidates Trump and Biden have proposed repealing Section 230 in its entirety – a position I have not yet embraced. I hope we can focus today’s discussion on the issues that affect all Americans. Protecting a true diversity of viewpoints and free discourse is central to our way of life. I look forward to hearing from today’s witnesses about what they are doing to promote transparency, accountability, and fairness in their content moderation processes. And I thank each of them for cooperating with us in the scheduling of this testimony.
Ranking Member Maria Cantwell (D-WA) stayed largely in the mainstream of Democratic thought and policy on Section 230. She opened the aperture on technology issues and spotlighted the problems she sees, including the effect that declining advertising revenue has had on the U.S. media and the growing dominance Facebook and Google have in online advertising. This is not surprising since she released a report on this very subject the day before. Cantwell discussed at some length Russian election interference, a subject tangentially related to Section 230. Perhaps she was hinting that technology companies should be charged with finding and removing the types of misinformation foreign governments and malign actors are spreading to wreak havoc in the U.S. If so, she did not hit this point too hard. Rather, her recitation of election interference was intended to put Republicans on their back foot, for if the subject of the hearing turned to Russian disinformation and related efforts, they might have to break ranks with the White House and President Donald Trump on the threat posed by Russia. Cantwell also went off topic a bit by obliquely discussing statements made by Trump and others about the security and legality of mail-in voting. She suggested, without being specific, that there may be means of bolstering Section 230 to drive platforms to take down disinformation and harmful material more expeditiously. Cantwell also poked Wicker by noting that the print media was not being subpoenaed to testify on why they largely ignored the New York Post’s questionable Hunter Biden article.
Cantwell asserted:
- So these issues about how we harness the information age to work for us, and not against us, is something that we deal with every day of the week, and we want to have discussion and discourse. I believe that discussion and discourse today should be broader than just 230. There are issues of privacy that our committee has addressed and issues of how to make sure there is a free and competitive news market.
- I noticed today we’re not calling in the NAB or the Publishers Association asking them why they haven’t printed or reprinted information that you alluded to in your testimony that you wish was more broadly distributed. To have the competition in the news market is to have a diversity of voices and diversity of opinion, and in my report, just recently released, we show that true competition really does help perfect information, both for our economy, and for the health of our democracy. So I do look forward to discussing these issues today. What I do not want today’s hearing to be is a chilling effect on the very important aspects of making sure that hate speech or misinformation related to health and public safety, are allowed to remain on the internet.
- We all know what happened in 2016, and we had reports from the FBI, our intelligence agencies, and a bipartisan Senate committee that concluded in 2016, that Russian operatives did, masquerading as Americans, use targeted advertisements, intentionally falsified news articles, self generated content and social media platform tools to interact and attempt to deceive tens of millions of social media users in the United States. Director of National Intelligence, then Republican Senator–former Senator–Dan Coats said in July 2018, “The warning lights are blinking red that the digital infrastructure that serves our country is literally under attack.”
- So I take this issue very seriously and have had for many years, that is, making sure, as the Mueller–Special Counsel Mueller indicated, 12 Russian intelligence officers hacked the DNC, and various information detailing phishing attacks into our state election boards, online personas, and stealing documents. So, when we had a subcommittee hearing and former Bush Homeland Security Director Michael Chertoff testified, I asked him point blank, because there were some of our colleagues who were saying, “you know what? Everybody does election interference.” So I asked him if election interference was something that we did, or should be encouraging? He responded that he agreed: “Interfering with infrastructure or elections is completely off limits and unacceptable.”
- That is why I believe that we should be working aggressively internationally to sanction anybody that interferes in our elections. So I hope today that we will get a report from the witnesses on exactly what they have been doing to clamp down on election interference. I hope that they will tell us what kind of hate speech and misinformation that they have taken off the books. It is no secret that there are various state actors who are doing all they can to take a whack at democracy, to try to say that our way of government, that our way of life, that our way of freedom of speech and information, is somehow not as good as we have made it, being the beacon of democracy around the globe.
- I am not going to let or tolerate people to continue to whack at our election process, our vote by mail system, or the ability of tech platforms, security companies, our law enforcement entities, and the collective community to speak against misinformation and hate speech. We have to show that the United States of America stands behind our principles and that our principles do also transfer to the responsibility of communication online. As my colleagues will note, we’ve all been through this in the past. That is why you, Mr. Chairman, and I, and Senators Rosen and Thune, sponsored the Hack Act that is to help increase the security and cyber security of our nation and create a workforce that can fight against that. That is why I joined with Van Hollen and Rubio on the Deter Act, especially in establishing sanctions against Russian election interference, and to continue to make sure that we build the infrastructure of tomorrow.
- So, I know that some people think that these issues are out of sight and out of mind. I guarantee you, they’re not. There are actors who have been at this for a long time. They wanted to destabilize Eastern Europe, and we became the second act when they tried to destabilize our democracy here by sowing disinformation. I want to show them that we in the United States do have fair elections. We do have a fair process. We are going to be that beacon of democracy.
- So, I hope that as we talk about 230 today and we hear from the witnesses on the progress that they have made in making sure that disinformation is not allowed online, that we will also consider ways to help build and strengthen that. That is to say, as some of those who are testifying today, what can we do on transparency, on reporting, on analysis, and yes, I think you’re going to hear a lot about algorithms today, and the kinds of oversight that we all want to make sure that we can continue to have the diversity of voices in the United States of America, both online and offline.
- I do want to say though, Mr. Chairman, I am concerned about the vertical nature of news and information. Today I expect to ask the witnesses about the fact that I believe they create a choke point for local news. The local news media have lost 70% of their revenue over the last decade, and we have lost thousands, thousands of journalistic jobs that are important. It was even amazing to me that the sequence of events yesterday had me being interviewed by someone at a newspaper who was funded by a joint group of the Knight Foundation, and probably Facebook funds, to interview me about the fact that the news media and broadcast has fallen on such a decline because of loss of revenue as they’ve made the transition to the digital age.
- Somehow, somehow, we have to come together to show that the diversity of voices that local news represent need to be dealt with fairly when it comes to the advertising market. And that too much control in the advertising market puts a foot on their ability to continue to move forward and grow in the digital age. Just as other forms of media have made the transition, and yes still making the transition, we want to have a very healthy and dynamic news media across the United States of America. So, I plan to ask the witnesses today about that.
- I wish we had time to go into depth on privacy and privacy issues but Mr. Chairman, you know, and so does Senator Thune and other colleagues of the Committee on my side, how important it is that we protect American consumers on privacy issues. That we’re not done with this work, that there is much to do to bring consensus in the United States on this important issue. And I hope that as we do have time or in the follow up to these questions, that we can ask the witnesses about that today.
- But make no mistake, gentlemen, thank you for joining us, but this is probably one of many, many, many conversations that we will have about all of these issues. But again, let’s harness the information age, as you are doing, but let’s also make sure that consumers are fairly treated and that we are making it work for all of us to guarantee our privacy, our diversity of voices, and upholding our democratic principles and the fact that we, the United States of America, stand for freedom of information and freedom of the press.
Twitter CEO Jack Dorsey’s written testimony seeks to distinguish his platform’s good practices (e.g. transparency and no kowtowing to the political powers that be) from Facebook’s bad practices. Regarding algorithms, the secret sauce of how users see what they see and why some content gets amplified, Dorsey seems to make the case that a platform should make multiple algorithms available to users and let them choose. A couple of troubling implications follow from such an approach. First, if a user is seeing content that is objectionable, well, he bears some of the blame because he chose it. Second, allowing people to pick their own algorithms seems very similar to a platform using different algorithms for different people, in that the net effect will still be filter bubbles. The difference is that with choice, there will be the illusion of control. Finally, on privacy, Dorsey sidesteps the issue of whether people should be allowed to stop platforms from collecting personal data by pledging his loyalty to giving people choice and control over its collection, use, and distribution.
In terms of Section 230, here are Dorsey’s thoughts:
- As you consider next steps, we urge your thoughtfulness and restraint when it comes to broad regulatory solutions to address content moderation issues. We must optimize for new startups and independent developers. In some circumstances, sweeping regulations can further entrench companies that have large market shares and can easily afford to scale up additional resources to comply. We are sensitive to these types of competition concerns because Twitter does not have the same breadth of interwoven products or market size as compared to our industry peers. We want to ensure that new and small companies, like we were in 2006, can still succeed today. Doing so ensures a level playing field that increases the probability of competing ideas to help solve problems going forward. We must not entrench the largest companies further.
- I believe the best way to address our mutually-held concerns is to require the publication of moderation processes and practices, a straightforward process to appeal decisions, and best efforts around algorithmic choice. These are achievable in short order. We also encourage Congress to enact a robust federal privacy framework that protects consumers while fostering competition and innovation.
Facebook CEO Mark Zuckerberg framed Section 230 as allowing free speech to thrive online because, without it, platforms would avoid legal liability by declining to host any material that could result in a lawsuit. He also praised the provisions that allow for content moderation, such as “basic moderation” for “removing hate speech and harassment that impacts the safety and security of their communities.” Zuckerberg avoids moderation of political content where the leaders of nations post material that is patently untrue or inflammatory. He then claimed Facebook supports giving people a voice, but this is contrary to media accounts of the company doing the bidding of authoritarian regimes to take down the posts of, and shut down the accounts of, dissidents and critics. Moreover, Zuckerberg argued that Section 230’s liability shield permits the company to police and remove material that creates risk through “harm by trying to organize violence, undermine elections, or otherwise hurt people.” Some have argued the opposite is the case, and that if Facebook faced regulatory or legal jeopardy for hosting such material or not taking it down in a timely fashion, it would act much more quickly and expend more resources to do so.
Zuckerberg then detailed his company’s efforts to ensure the social media giant is providing Americans with accurate information about voting, much of which would please Democrats and displease Republicans, the latter of which have inveighed against the appending of fact checking to assertions made by Trump and others about the election.
Zuckerberg also pushed back on Cantwell’s assertions regarding the effect his platform and Google have had on journalism. He claimed Facebook is another venue by which media outlets can make money and touted the Facebook Journalism Project, in which Facebook has “invested more than $425 million in this effort, including developing news products; providing grants, training, and tools for journalists; and working with publishers and educators to increase media literacy.”
As for Zuckerberg’s position on Section 230 legislation, he argued:
- However, the debate about Section 230 shows that people of all political persuasions are unhappy with the status quo. People want to know that companies are taking responsibility for combatting harmful content—especially illegal activity—on their platforms. They want to know that when platforms remove content, they are doing so fairly and transparently. And they want to make sure that platforms are held accountable.
- Section 230 made it possible for every major internet service to be built and ensured important values like free expression and openness were part of how platforms operate. Changing it is a significant decision. However, I believe Congress should update the law to make sure it’s working as intended. We support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals, and I look forward to a meaningful dialogue about how we might update the law to deal with the problems we face today.
- At Facebook, we don’t think tech companies should be making so many decisions about these important issues alone. I believe we need a more active role for governments and regulators, which is why in March last year I called for regulation on harmful content, privacy, elections, and data portability. We stand ready to work with Congress on what regulation could look like in these areas. By updating the rules for the internet, we can preserve what’s best about it—the freedom for people to express themselves and for entrepreneurs to build new things—while also protecting society from broader harms. I would encourage this Committee and other stakeholders to make sure that any changes do not have unintended consequences that stifle expression or impede innovation.
Alphabet CEO Sundar Pichai framed Google’s many products as bringing the world information for free. He voiced support for amorphous privacy legislation and highlighted Google’s $1 billion commitment to supporting some journalism outlets. He asserted Google, YouTube, and related properties exercise their content moderation without political bias. Pichai offered these sentiments on Section 230:
As you think about how to shape policy in this important area, I would urge the Committee to be very thoughtful about any changes to Section 230 and to be very aware of the consequences those changes might have on businesses and consumers. At the end of the day, we all share the same goal: free access to information for everyone and responsible protections for people and their data. We support legal frameworks that achieve these goals…
© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.