Despite the minority abstaining from the hearing, one of the committees in the House with jurisdiction over online matters continued its inquiry into influence campaigns in the context of the coming election.
On 18 June, the House Intelligence Committee held a hearing titled “Emerging Trends in Online Foreign Influence Operations: Social Media, COVID-19, and Election Security” with witnesses from Facebook, Twitter, and Google. The hearing was a follow-up to a November 2017 hearing examining the three social media giants’ roles in amplifying or preventing online influence campaigns. The Committee Republicans opted against participating altogether, making this the second hearing they have boycotted this year.
By way of explanation for the boycott, Committee Member Representative Brad Wenstrup (R-OH) claimed in a 19 June interview on Fox News that the hearing was “just one more step in…Schiff’s playbook to politicize things, to split us further apart, and to use it for some type of political gain.” Wenstrup added:
Although this may not be classified material, it is also a chance for our adversaries to understand what steps we are taking to try and stop them from foreign influence. You know, on the Intelligence Committee, we deal with sensitive secrets. We should be operating in a secure facility. And, we should not be in an environment where we are online.
In his opening statement at the hearing, Chair Adam Schiff (D-CA) stated:
Today’s important conversation is essential to our oversight of how the Intelligence Community and Nation are working to keep our elections and political discourse from foreign interference. I had hoped it would be a bipartisan discussion. Unfortunately, and without reason or justification, our Republican colleagues once again have decided to absent themselves from the work of the committee. I repeat my hope that they will reconsider this ill-considered path and join us for future hearings.
- This is the second hearing of the House Intelligence Committee held with witnesses from Google, Facebook, and Twitter. The first was in November 2017, where we continued to piece together the full breadth of the Russian attack on our democracy one year earlier and inform the public about what we had found. It was a breathtaking and audacious attack that took place on several fronts, including social media platforms used daily by millions of Americans. Through subsequent disclosures by the technology companies, Department of Justice, and this committee, the world learned that Russia’s Internet Research Agency undertook a determined effort to use social media to divide Americans in advance of the 2016 election. These IRA trolls took to a broad array of platforms to launch a sophisticated and pernicious campaign that exploited wedge issues already challenging our Nation, such as immigration, the Second Amendment, race relations, and other issues. Today’s hearing is not intended to look back at 2016 as much as it is to look forward.
- Election day is a mere five months away, and malicious actors, including Russia but also others, persist in attempts to interfere in our political system in order to gain an advantage against our country and to undermine our most precious right: that to a free and fair vote.
- We are holding this hearing and we engage regularly with tech and social media companies because they are arguably best positioned to sound the alarm if and when another external actor attempts to interfere in our democratic discourse, first, because their technical capacity and security acumen allows them to detect malicious activity on their platforms and make attributions through technical indicators that are available only to the companies themselves, and, second, because we cannot have complete confidence that the White House will allow the Intelligence Community to look fully into foreign interference and promptly inform Congress if it detects it, especially if that interference appears to assist the President’s reelection.
- That is a dangerous and unprecedented state of affairs, but, nonetheless, it reflects the reality and why this hearing is so important.
- To the witnesses: As you describe in your respective written statements, a lot has changed since 2016. In many ways, we are better prepared today than we were four years ago. Each of your companies have taken significant steps and invested resources to detect coordinated inauthentic behavior and foreign interference, and, while there cannot be a guarantee, it would be far more difficult for Russia or another foreign adversary to run the same 2016 playbook undetected.
- Both Facebook and Twitter now regularly update the public, the committee, and Congress on their findings as they identify and disrupt coordinated inauthentic behavior and foreign interference targeting the United States and other nations globally. U.S. Government agencies with a responsibility to unearth and fight foreign interference coordinate and meet regularly with technology companies and with us.
- The companies themselves have established mechanisms to share threat information and indicators, both among themselves and with smaller industry peers. Independent researchers have taken up the mantle in cooperation with platforms to apply their skills and knowledge to detecting and analyzing malicious networks and producing comprehensive public reports.
- These are positive developments, but, as I look across the landscape, I can’t say that I am confident that the 2020 election will be free of interference by malicious actors, foreign or domestic, who aspire to weaponize your platforms to divide Americans, pit us against one another, and weaken our democracy.
- We are learning, but our adversaries are learning as well, and not only Russia. Modest investments in the IRA and the hacking-and-dumping campaign aimed at the Clinton campaign paid off in spades, helping to elect the Kremlin’s favorite candidate and widening fissures between Americans, the lesson being: Influence operations on social media are cheap and effective, and attribution to specific threat actors isn’t always straightforward.
- While each of your platforms has begun to adopt policies around deepfakes and manipulated media, it remains to be seen whether they are sufficient to detect and remove sinister manipulated media at speed. For once a visceral first impression has been made, even if proven false later, it is nearly impossible to repair the damage.
- I am also concerned because the nature of your platforms, all of them, is to embrace and monetize virality. The more sensational, the more divisive, the more shocking or emotionally charged, the faster it circulates. A tweet or Instagram photo or a YouTube video can be viewed by millions of Americans in the span of hours.
- A policy that only identifies and acts upon misinformation, whether from a foreign or domestic source, after millions of people have seen it is only a partial response at best. I recognize that, at scale, the challenge of moderation is daunting.
- As we get closer to November, the stakes will only grow. And make no mistake: Foreign actors and Presidents alike are testing the limits of manipulated media right now. And, finally, I am concerned because of an issue that I raised back in 2017 and repeatedly since.
- I am concerned about whether social media platforms like YouTube, Facebook, Instagram, and others wittingly or otherwise optimize for extreme content. These technologies are designed to engage users and keep them coming back, which is pushing us further apart and isolating Americans into information silos.
- Ultimately, the best and only corrective measure to address the pernicious problem of misinformation and foreign interference is ensuring that credible, verified, factual information rises above the polluting disinformation and falsehoods, whether about the location of polling places or about the medical consensus surrounding COVID-19.
Facebook Head of Security Policy Nathaniel Gleicher stated:
- Over the past three years, we have worked to protect more than 200 elections around the world. We have learned lessons from each of these, and we are applying these lessons to protect the 2020 election in November.
- We have taken a variety of steps to support the integrity and security of the electoral process, including: launching Facebook Protect, a program that helps secure the accounts of elected officials, candidates, and their staff; increasing political and issue ad transparency; investigating and stopping coordinated inauthentic behavior (we have removed more than 50 deceptive networks in 2019 alone); and labeling posts by state-controlled media outlets so that people understand where their news is coming from.
- Yesterday, we began blocking ads in the United States from these state-controlled outlets to provide an extra layer of protection against foreign influence in the public debate ahead of the 2020 election in November.
- In addition, we know that misinformation and influence operations are at their most virulent in information vacuums.
- So we combine our enforcement efforts with ensuring that people can access authentic, accurate information about major civic moments, like this global pandemic or voting.
- This is why we are creating a new Voter Information Center to fight misinformation, to encourage people to vote, and to make sure voters have accurate and up-to-date information from their local, State, and Federal election authorities.
Twitter Director of Global Public Policy Strategy and Development Nick Pickles stated:
- The threat of interference in elections by foreign and domestic actors is real and evolving.
- Since 2016, we have made a number of significant investments to address these challenges and prepare against bad actors, taking lessons from the 2018 midterms and elections around the world. I am grateful for the opportunity to discuss our approach today, and I will begin by focusing on the policies, product changes, and partnerships Twitter now has in place.
- The Twitter rules directly address a number of potential threats to the integrity of elections. Under our civic integrity policy, individuals may not use Twitter for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.
- We recently expanded this policy to cover civic events (for example, the Census) in addition to elections. We prohibit the use of Twitter services in a manner that intends to artificially amplify or suppress the conversation. Our rules prohibit fake accounts and those impersonating others. We do not permit the distribution of hacked materials that contain private information, trade secrets, or could put people in harm’s way.
- In addition to these new rules, Twitter’s advertising policies also play an important part in protecting the public conversation.
- Firstly, Twitter does not allow political advertising. Online political advertising represents entirely new challenges to civic discourse that today’s democratic infrastructure may not be prepared to handle, particularly the machine-learning-based optimization of messaging and microtargeting.
- Secondly, Twitter does not allow news media entities controlled by state authorities to advertise. This decision was initially taken with regard to Russia Today and Sputnik based on the Russian activities during the 2016 election. Last year, we expanded this policy to cover all state-controlled media entities globally, in addition to individuals who were affiliated with those organizations. While our policies are vital to protect the conversation, we also want to be proactive in helping people on Twitter find credible information by providing them with additional context.
- We prioritize interventions regarding misinformation based on the highest potential for harm and are currently focused on three main areas of content: synthetic and manipulated media, elections and civic integrity, and COVID-19.
- In these three areas, where content does not break our rules and warrant removal, we may label tweets to help people come to their own views by providing additional context. These labels may link to a curated set of tweets posted by people on Twitter that include factual statements, counterpoint opinions and perspectives, and ongoing public conversation around the issue.
Google Director for Law Enforcement and Information Security Richard Salgado focused “on three main areas: first, our efforts to combat election-related interference; second, how we are empowering people with authoritative information; and, third, how we are improving transparency and accountability in advertising.” Salgado stated:
- As we previously reported to the committee, our investigation into the 2016 elections found relatively little violative foreign-government activity on our platform. Entering the 2018 midterms, we continued to improve our ability to detect and prevent election-related threats and engaged in information-sharing with others in the private sector and the government. While we saw limited misconduct linked to state-sponsored activity in the 2018 midterms, we continue to keep the public informed. We recently launched a quarterly bulletin to provide additional information about our findings concerning coordinated influence operations. This joins other public reporting across products as we shed light on what it is that we are seeing. Looking ahead to the November elections, we know that the COVID-19 pandemic, widespread protests, and other significant events can provide fodder for nation-state-sponsored disinformation campaigns. We remain steadfast in our commitment to protect our users.
- Second, we have continued to improve the integrity of our products. Our approach is built on a framework of three strategies: making quality count in our ranking systems, giving users more context, and counteracting malicious actors. In Search, ranking algorithms are an important tool in our fight against disinformation. Ranking elevates information that our algorithms determine is the most authoritative above information that may be less reliable. Similarly, our work on YouTube focuses on identifying and removing content that violates our policies and elevating authoritative content when users search for breaking news. At the same time, we find and limit the spread of borderline content that comes close but just stops short of violating our policies. The work to protect Google products and our users is no small job, but it is important. We invest heavily in automated tools to tackle a broad set of malicious behaviors and in people who review content and help improve these tools. We applied many of these strategies in response to the COVID-19 pandemic and developed new ways to connect users to authoritative government information. Similarly, we worked to remove misinformation that poses harm to people and undermines efforts to reduce infection rates. On YouTube, we have clear policies prohibiting content that promotes medically unsubstantiated treatments or disputes the existence of COVID-19. We also reduce recommendations of borderline content.
- Third, Google has made election advertising far more transparent. We now require advertisers purchasing U.S. election ads to verify who they are and disclose who paid for the ad in the ad itself. We launched a transparency report with a searchable ad library as well. Microtargeting of election ads was never allowed on Google systems, but targeting of election ads in the U.S. is now further limited to general geographic location, age, gender, and context where the ad would appear. This aligns with long-established practices in media such as TV, radio, and print. Finally, this April, we announced that we will extend identity verification to all advertisers on our platform, with a roll-out beginning this summer.
© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.