A Senate Subcommittee Tackles Algorithms

The Senate Judiciary Committee’s Privacy, Technology, and the Law Subcommittee held a hearing titled “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds.”


A hearing tries to get beyond the usual tech talking points to examine the role algorithms play online for good or ill.

Cocktail Party

Another Congressional committee is digging into “Big Tech” at a hearing with representatives from Facebook, Twitter, and YouTube. However, in what may be one of the first such hearings, the topic was algorithms, a frequently invoked feature of online life. Algorithms are the secret sauce of online platforms, able to push content to users or bury it. Experts have said these algorithms are designed to tempt users and keep them engaged online through a steady stream of content that enrages or frightens. Some critics of the platforms have made the case, through research, that social media algorithms are a key ingredient in radicalizing people, especially white nationalists and other extremists in the United States (U.S.).

Meeting

There have been a few bills introduced to regulate algorithms, but they are mostly aimed at the use of algorithms to violate existing federal law or at extending civil rights laws to cover algorithms. The First Amendment to the U.S. Constitution may be an insuperable obstacle to regulating algorithms, as they may be considered the speech of the platforms, which the U.S. government generally may not abridge. Nonetheless, policymakers in Washington will continue to exhibit an interest in how platforms’ algorithms work and their wider effects on society.

Geek Out

In the last Congress, Chair Chris Coons (D-DE) introduced the “Algorithmic Fairness Act” (S.5052) that would, according to his December 2020 press release:

  • Require the Federal Trade Commission (FTC) to conduct a study about the ways companies are developing and implementing algorithmic eligibility determinations
  • Direct the FTC to use its Section 5 authority to prevent companies from acting on algorithmic eligibility determinations that are deemed unfair under the FTC’s unfairness standard
  • Require companies to create an audit trail for each algorithmic eligibility determination it makes about a consumer, preserving records about the data and methodology used to make the determination, how the algorithm was created and trained, and the ultimate decision rendered
  • Require companies to notify consumers when they have been the subject of an algorithmic eligibility determination and provide the consumer with the information the company used to make such a determination and an opportunity to correct the data that the company used
  • Establish research funding and a leadership program rewarding fair, accountable, and transparent examples of data analytics
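
To make the audit-trail requirement more concrete, a record a company might preserve for each algorithmic eligibility determination could look something like the minimal sketch below. The field names and structure are illustrative assumptions drawn from the bullet points above, not language from S.5052.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class EligibilityAuditRecord:
        """Hypothetical audit-trail entry for one algorithmic eligibility determination."""
        consumer_id: str                    # whom the determination was about
        determination: str                  # the ultimate decision rendered
        data_inputs: dict                   # the data used to make the determination
        methodology: str                    # the methodology or model applied
        model_version: str                  # how the algorithm was created and trained
        training_data_summary: str          # provenance of the training data
        consumer_notified: bool = False     # the bill's notice requirement
        correction_requested: bool = False  # consumer's opportunity to correct the data
        decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Example: preserving a record and noting that the consumer was notified.
    record = EligibilityAuditRecord(
        consumer_id="c-123",
        determination="housing_application_screened_out",
        data_inputs={"income": 52000, "rental_history_years": 4},
        methodology="gradient-boosted scoring model, cutoff 0.7",
        model_version="screening-model-v3",
        training_data_summary="2018-2020 application outcomes",
    )
    record.consumer_notified = True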

However, Coons’ bill largely does not address the use of algorithms to amplify content on social media platforms. Others have introduced legislation aimed at this aspect of algorithms.

In 2019, Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY) introduced the “Algorithmic Accountability Act” (S.1108/H.R.2231) that would:

  • Authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to conduct impact assessments of highly sensitive automated decision systems. This requirement would apply both to new and existing systems.
  • Require companies to assess their use of automated decision systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy and security.
  • Require companies to evaluate how their information systems protect the privacy and security of consumers’ personal information.
  • Require companies to correct any issues they discover during the impact assessments.

Like Coons’ legislation, this bill only glancingly addresses the algorithmic amplification and escalation the hearing is intended to examine.

Last year, Representatives Anna G. Eshoo (D-CA) and Tom Malinowski (D-NJ) introduced the “Protecting Americans from Dangerous Algorithms Act,” (H.R.8636) “legislation to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.” They asserted:

  • The bill narrowly amends Section 230 of the Communications Decency Act to remove liability immunity for a platform if its algorithm is used to amplify or recommend content directly relevant to a case involving interference with civil rights (42 U.S.C. 1985); neglect to prevent interference with civil rights (42 U.S.C. 1986); and in cases involving acts of international terrorism (18 U.S.C. 2333). 42 U.S.C. 1985 and 1986 are Reconstruction-era statutes originally designed to reach Ku Klux Klan conspirators, and are central to a recent suit alleging Facebook facilitated militia violence in Kenosha, WI. 18 U.S.C. 2333 is implicated in several lawsuits, including an earlier suit against Facebook, alleging its algorithm connected terrorists with one another and enabled physical violence against Americans. The bill only applies to platform companies with 50 million or more users.
  • Section 230 of the Communications Decency Act (47 U.S.C. 230) immunizes “interactive computer services” from legal liability for user-generated content (with some exceptions, such as for federal crimes). The largest internet platforms use sophisticated, opaque algorithms to determine the content their users see, leveraging users’ personal and behavioral data to deliver content designed to maximize engagement and the amount of time spent on the platforms. The Protecting Americans from Dangerous Algorithms Act establishes the principle that platforms should be accountable for content they proactively promote for business reasons, if doing so leads to specific offline harms.
  • The bill is distinct from other legislative proposals to reform Section 230 in that it preserves the core elements of the law that protect the speech of users, is narrowly targeted at the algorithmic promotion of content that leads to the worst types of offline harms, and does not seek to mandate political “neutrality” as a condition for Section 230 protections.

Eshoo and Malinowski approach algorithms and whatever harms they cause from the perspective of paring back Section 230 legal protection as a means of driving platforms to reform their algorithms and avert such harms.

Chair Chris Coons (D-DE) (watch his opening statement) stressed that the hearing would be informational in nature so the subcommittee can learn more about algorithms and how they are deployed on social media and online platforms. He said there is nothing inherently wrong with using algorithms to help platforms determine which content is most appealing to users. Coons noted the criticisms that algorithms can harm attention spans, degrade public discourse, affect children, and hurt public health and democracy. He asked what happens when algorithms are so powerful that people end up staring at screens all day or become cocooned in a world that only confirms their biases and views. Coons wondered how society is affected when platforms amplify popular but possibly hateful and harmful content. He stressed that he and Ranking Member Ben Sasse (R-NE) do not see these questions as partisan in nature and do not have specific legislative or regulatory agendas.

Coons declared this is an area that requires urgent attention. He quoted Facebook CEO Mark Zuckerberg, who warned of the dangers of online life. Coons cautioned that increasing partisanship fueled by platforms is hurting democracy in the U.S. He said he and Sasse are interested in what platforms are doing and what steps might be taken so they can consider a path forward, be it voluntary, regulatory, or legislative.

Ranking Member Ben Sasse (R-NE) (watch his opening statement or read his full written remarks) decried the tendency in Washington to boil complicated issues down into heroes and villains and to prescribe a solution before fully learning about the problem. He stated algorithms have costs and benefits that can make the world a better or worse place. Sasse cited the adage that in the digital or attention economy, if one is not paying for a service, one is the product. He remarked that Americans have access to powerful and amazing technology at no financial cost, but there may be other costs. Sasse stressed that technology can be misused, and Congress needs to be thoughtful about this.

Sasse stated there must be pushback on the idea that complicated qualitative problems have easy quantitative solutions. He asserted the committee has been told supercomputers would solve the nettlesome problems posed by these new technologies. Sasse asserted he and Coons are aligned in thinking that prudence, humility, and transparency are the best ways to begin.

Committee Chair Dick Durbin (D-IL) (watch his opening statement) claimed the U.S. stands at a crossroads regarding social media. He said the right to privacy, especially for children, is one of his foremost concerns in light of the immense quantity of data collected online. Durbin remarked he would be reintroducing the “Clean Slate for Kids Online Act,” which would create an enforceable legal right to have websites delete all personal information collected or obtained before a child turns 13. He added that the right to privacy and access to these data could keep the subcommittee busy.

Durbin cited a 2020 independent civil rights audit that found Facebook is not sufficiently attuned to how its algorithms fuel extreme and polarizing content and can drive people into self-enclosed echo chambers of extremism. He referenced a letter Coons wrote to Facebook calling on the company to address anti-Muslim bigotry on the platform. Durbin discussed the plot to kidnap Michigan Governor Gretchen Whitmer and the 17-year-old who shot and killed two protestors in Kenosha, Wisconsin, and how Facebook content played a role in both. He asserted that widespread online conspiracy theories fueled the 6 January “coup attempt.” Durbin declared Congress must address the extreme content on platforms and the role algorithms play in amplifying it.

Facebook Content Policy Vice President Monika Bickert (watch her opening statement and read her full written statement) stated:

  • Of course, News Feed ranking isn’t the only factor that goes into what a person might see on Facebook. There are certain types of content we simply don’t allow on our services. Our content policies, which we call our Community Standards, have been developed over many years with ongoing input from experts and researchers all over the world. We work hard to enforce those standards to help keep our community safe and secure, and we employ both technology and human review teams to do so. We publish quarterly reports on our work, and we’ve made significant progress identifying and removing content that violates our standards.
  • We recognize that not everyone agrees with every line in our Community Standards. In fact, there is no perfect way to draw the lines on what is acceptable speech; people simply do not agree on what is appropriate for discourse. We also recognize that many people think private companies shouldn’t be making so many big decisions about what content is acceptable. We agree that it would be better if these decisions were made according to frameworks agreed to by democratically accountable lawmakers. But in the absence of such laws, there are decisions that need to be made in real time.
  • Last year, Facebook established the Oversight Board to make an independent, final call on some of these difficult decisions. It is an external body of experts, and its decisions are binding—they can’t be overruled by Mark Zuckerberg or anyone else at Facebook. Indeed, the Board has already overturned a number of Facebook’s decisions, and we have adhered to the Board’s determinations. The Board itself is made up of experts and civic leaders from around the world with a wide range of backgrounds and perspectives, and they began issuing decisions and recommendations earlier this year.
  • If content is removed for violating our Community Standards, it does not appear in News Feed at all. Separately, there are types of content that might not violate Facebook’s Community Standards and are unlikely to contribute to a risk of actual harm but are still unwelcome to users, and so the ranking process reduces their distribution. For example, our algorithms actively reduce the distribution of things like clickbait (headlines that are misleading or exaggerated), highly sensational health claims (like those promoting “miracle cures”), and engagement bait (posts that explicitly seek to get users to engage with them). Facebook also reduces distribution for posts deemed false by one of the more than 80 independent fact-checking organizations that evaluate the accuracy of content on Facebook and Instagram. So overall, how likely a post is to be relevant and meaningful to you acts as a positive in the ranking process, and indicators that the post may be unwelcome (although non-violating) act as a negative. The posts with the highest scores after that are placed closest to the top of your Feed.
  • Facebook’s approach goes beyond addressing sensational and misleading content post by post. When Pages and Groups repeatedly post misinformation, Facebook reduces their overall distribution. If Groups or Pages repeatedly violate our Community Standards, we restrict or remove them.
  • The reality is that it’s not in Facebook’s interest—financially or reputationally—to push users towards increasingly extreme content. The company’s long-term growth will be best served if people continue to use and value its products for years to come. If we prioritized trying to keep a person online for a few extra minutes, but in doing so made that person unhappy or angry and less likely to return in the future, it would be self-defeating. Furthermore, the vast majority of Facebook’s revenue comes from advertising. Advertisers don’t want their brands and products displayed next to extreme or hateful content—they’ve always been very clear about that. Even though troubling content is a very small proportion of the total content people see on our services (hate speech is viewed 7 or 8 times for every 10,000 views of content on Facebook), Facebook’s long-term financial self-interest is to continue to reduce it so that advertisers and users have a good experience and continue to use our services.
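
The ranking process Bickert describes above (relevance signals counting in a post’s favor, “unwelcome but non-violating” signals counting against it, and the highest-scoring posts placed closest to the top of the Feed) can be sketched very roughly in code. The signal names and weights below are illustrative assumptions, not Facebook’s actual system:

    # Illustrative feed-ranking sketch: relevance signals add to a post's score, while
    # "unwelcome but non-violating" signals (clickbait, sensational health claims,
    # engagement bait, fact-checked falsehoods) subtract from it. All weights are assumed.

    RELEVANCE_WEIGHTS = {"from_close_friend": 3.0, "topic_interest": 2.0, "recent": 1.0}
    REDUCTION_WEIGHTS = {"clickbait": 4.0, "sensational_health_claim": 5.0,
                         "engagement_bait": 3.0, "rated_false_by_fact_checker": 6.0}

    def score_post(post: dict) -> float:
        score = 0.0
        for signal, weight in RELEVANCE_WEIGHTS.items():
            if post.get(signal):
                score += weight
        for signal, weight in REDUCTION_WEIGHTS.items():
            if post.get(signal):
                score -= weight
        return score

    def rank_feed(posts: list[dict]) -> list[dict]:
        # Content that violates the Community Standards is removed before ranking;
        # everything else is ordered by score, highest first.
        eligible = [p for p in posts if not p.get("violates_community_standards")]
        return sorted(eligible, key=score_post, reverse=True)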

Twitter U.S. Public Policy Head Lauren Culbertson (watch her opening statement and read her full written statement) stated:

  • Expanded Algorithmic Choice
    • At Twitter, we want to provide a useful, relevant experience to all people using our service. With hundreds of millions of Tweets every day on the service, we have invested heavily in building systems that organize content to show individuals the most relevant information for that individual first. With over 192 million people using Twitter each day in dozens of languages and countless cultural contexts, we rely upon machine learning algorithms to help us organize content by relevance.
    • We believe that people should have meaningful control over key algorithms that affect their experience online. In 2018, we redesigned the home Timeline, the main feature of our service, to allow people to control whether they see a ranked timeline, or a reverse chronological order ranking of the Tweets from accounts or topics they follow. This “sparkle icon” improvement has allowed people using our service to directly experience how algorithms shape what they see and has allowed for greater transparency into the technology we use to rank Tweets. This is a good start. And, we believe this points to an exciting, market-driven approach that provides individuals greater control over the algorithms that affect their experience on our service.
  • Responsible Machine Learning Initiative
    • We are committed to gaining and sharing a deeper understanding of the practical implications of our algorithms. Earlier this month, we launched our “Responsible Machine Learning” initiative, a multi-pronged effort designed to research the impact of our machine learning decisions, promote equity, and address potential unintentional harms. Responsible use of technology includes studying the effects that the technology can have over time. Sometimes, a system designed to improve people’s online experiences could begin to behave differently than was intended in the real world. We want to make sure we are studying such developments and using them to build better products.
    • This initiative is industry-leading and the very first step and investment into a journey of evaluating our algorithms and working through ways we can apply those findings to make Twitter and our entire industry better. We will apply what we learn to our work going forward, and we plan to share our findings and solicit feedback from the public. While we are hopeful about the ways this may improve our service, our overarching goal is increasing transparency and contributing positively to the field of technology ethics at large.
  • Birdwatch
    • We’re exploring the power of decentralization to combat misinformation across the board through Birdwatch — a pilot program that allows people who use our service to apply crowdsourced annotations to Tweets that are possibly false or misleading. We know that when it comes to adding context, not everyone trusts tech companies — or any singular institution — to determine what context to add and when. Our hope is that Birdwatch will expand the range of voices involved in tackling misinformation as well as streamline the real-time feedback people already add to Tweets. We are working to ensure that a broad range of voices participate in the Birdwatch pilot so we can build a better product that meets the needs of diverse communities. We hope that engaging the broader community through initiatives like Birdwatch will help mitigate current deficits in trust.
    • We are committed to making the Birdwatch site as transparent as possible. All data contributed to Birdwatch will be publicly available and downloadable. As we develop algorithms that power Birdwatch — such as reputation and consensus systems — we intend to publish that code publicly in the Birdwatch Guide.
  • Bluesky
    • Twitter is funding Bluesky, an independent team of open source architects, engineers, and designers, to develop open and decentralized standards for social media. It is our hope that Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice. These standards could support innovation, making it easier for startups to address issues like abuse and hate speech at a lower cost. We recognize that this effort is complex, unprecedented, and will take time but we currently plan to provide the necessary exploratory resources to push this project forward.
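
Culbertson says Twitter intends to publish the reputation and consensus algorithms that will power Birdwatch. As a rough illustration of how a consensus system over crowdsourced annotations could work (an assumed, simplified model, not Twitter’s published code), a note might be surfaced only when enough raters, weighted by reputation, agree it is helpful:

    # Hypothetical consensus score for a crowdsourced, Birdwatch-style note: each rating
    # is weighted by the rater's reputation, and the note is surfaced only when enough
    # raters agree it is helpful. Field names, weights, and thresholds are assumed.

    def note_consensus_score(ratings: list[dict], reputations: dict) -> float:
        """ratings: [{"rater": "u1", "helpful": True}, ...]; reputations: rater -> weight."""
        if not ratings:
            return 0.0
        total_weight = sum(reputations.get(r["rater"], 1.0) for r in ratings)
        weighted_votes = sum(reputations.get(r["rater"], 1.0) * (1.0 if r["helpful"] else -1.0)
                             for r in ratings)
        return weighted_votes / total_weight

    def should_surface(ratings: list[dict], reputations: dict,
                       threshold: float = 0.4, min_raters: int = 5) -> bool:
        # Require both a minimum number of raters and reputation-weighted agreement.
        return len(ratings) >= min_raters and note_consensus_score(ratings, reputations) >= threshold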

YouTube Government Affairs and Public Policy for the Americas and Emerging Markets Director Alexandra Veitch (watch her opening statement and read her full written statement) said:

  • Because of the importance of algorithms in the YouTube user experience, we welcome the opportunity to clarify our approach to this topic. In computer science terms, an algorithm is a set of instructions that direct a computer to carry out a specific task. An algorithm can be simple—asking a computer to calculate the sum of two numbers—or extremely complex, such as machine learning algorithms that consistently refine their ability to accomplish the goal for which they were programmed. An algorithm can manage a few inputs or nearly limitless inputs, and they can do one thing or perform a number of functions at once. Nearly everything that people do today on their devices is made possible by algorithms.
  • YouTube uses machine learning techniques to manage and moderate content on YouTube. YouTube’s machine learning systems sort through the massive volume of content to find the most relevant and useful results for a user’s search query, to identify opportunities to elevate authoritative news, and to provide a user with additional context via an information panel if appropriate. We also rely on machine learning technology to help identify patterns in content that may violate our Community Guidelines or videos that may contain borderline content—content that comes close to violating our Community Guidelines but doesn’t quite cross the line. These systems scan content on our platform 24/7, enabling us to review hundreds of thousands of hours of video in a fraction of the time it would take a person to do the same. For example, more than 94% of the content we removed between October and December of 2020 was first flagged by our technology. This underscores just how critical machine learning is for content moderation.
  • Another area where we use machine learning is for recommendations. Recommendations on YouTube help users discover videos they may enjoy, and they help creator content reach new viewers and grow their audience across the platform. We share recommendations on YouTube’s homepage and in the “Up next” section to suggest videos a user may want to watch after they finish their current video. Our recommendation systems take into account many signals, including a user’s YouTube watch and search history (subject to a user’s privacy settings) and channels to which a user has subscribed. We also consider a user’s context—such as country and time of day—which, for example, helps our systems show locally relevant news, consistent with our effort to raise authoritative voices. Our systems also take into account engagement signals about the video itself—for example, whether others who clicked on the same video watched it to completion or clicked away shortly after starting to view the video. It is important to note that, where applicable, these signals are overruled by the other signals relating to our efforts to raise up content from authoritative sources and reduce recommendations of borderline content and harmful misinformation—even if it decreases engagement.
  • We also empower our users by giving them significant control over personalized recommendations, both in terms of individual videos as well as the way that watch and search history may inform recommendations. Users control what data is used to personalize recommendations by deleting or pausing activity history controls. Signed out users can pause and clear their watch history, while signed in users can also view, pause, and edit watch history at any time through the YouTube history settings. Clearing watch history means that a user will not be recommended videos based on content they previously viewed. Users can also clear their search history, remove individual search entries from search suggestions, or pause search history using the YouTube History settings.
  • In-product controls enable users to remove recommended content—including videos and channels—from their Home pages and Watch Next. Signed in users can also delete YouTube search and watch history through the Google My Account settings, set parameters to automatically delete activity data in specified time intervals, and stop saving activity data entirely. We also ask users directly about their experiences with videos using surveys that appear on the YouTube homepage and elsewhere throughout the app, and we use this direct feedback to fine-tune and improve our systems for all users.
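
Veitch’s description of YouTube’s recommendation system (many personalization and engagement signals, with authoritative sources raised and borderline content or harmful misinformation demoted even at the cost of engagement) can be caricatured in a few lines. The signal names and weights below are assumptions for illustration, not YouTube’s actual model:

    # Illustrative recommendation scoring: personalization and engagement signals are
    # combined, but authoritative sources are boosted and borderline content is demoted
    # regardless of how engaging it is. All signal names and weights are assumed.

    def recommendation_score(video: dict, user: dict) -> float:
        score = 0.0
        # Personalization signals (subject to the user's privacy/history settings).
        if user.get("history_enabled"):
            score += 2.0 * len(set(video.get("topics", [])) & set(user.get("watched_topics", [])))
        if video.get("channel") in user.get("subscriptions", []):
            score += 3.0
        # Engagement signals about the video itself.
        score += 4.0 * video.get("avg_completion_rate", 0.0)
        # Overrides: raise authoritative sources and demote borderline content or
        # harmful misinformation, even if that lowers an engagement-driven score.
        if video.get("authoritative_source"):
            score += 10.0
        if video.get("borderline") or video.get("harmful_misinformation"):
            score -= 100.0
        return score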

Center for Humane Technology Co-Founder and President Tristan Harris (watch his opening statement and read his full written statement) claimed:

  • My fellow panelists from technology companies will say:
    • We catch XX% more hate speech, self-harm and harmful content using A.I.
    • We took down XX billions of fake accounts, up from YY% last year.
    • We have Content Oversight Boards and Trust & Safety Councils.
    • We spend $X million more on Trust & Safety in 2021 than we made in revenue in an entire year.
  • But none of this is adequate to the challenge stated above, when the entire model is predicated on dividing society. It’s like Exxon talking about the number of trees they have planted, while their extractive business model hasn’t changed.
  • As The Social Dilemma explains, the problem is their attention-harvesting business model. The narrower and more personalized our feeds, the fatter their bank accounts, and the more degraded the capacity of the American brain. The more money they make, the less capacity America has to define itself as America, reversing the United States’ inspiring and unifying motto of E Pluribus Unum or “out of many, one” into its opposite, “out of one, many.”
  • We are raising entire generations of young people who will have come up under these exaggerated prejudices, division, mental health problems, and an inability to determine what’s true. They walk around as a bag of cues and triggers that can be ignited. If this continues, we will see more shootings, more destabilization, more children with ADHD, more suicides and depression— deficits that are cultivated and exploited by these platforms.
  • We should aim for nothing less than a comprehensive shift to a humane, clean “Western digital infrastructure” worth wanting. We are collectively in the middle of a major transition from 20th century analog societies to 21st century “digitized” societies. Today we are offered two dystopian choices: either to install a Chinese “Orwellian” brain implant into society with authoritarian controls, censorship and mass behavior modification. Or we can install the U.S./Western “Huxleyan” societal brain implant that saturates us in distractions, outrage, trivia and amusing ourselves to death.
  • Let’s use today’s hearing to encourage a 3rd way, to have the government’s help in incentivizing Digital Open Societies worth wanting, that outcompete Digital Closed Societies.

Shorenstein Center on Media, Politics, and Public Policy Research Director Dr. Joan Donovan (watch her opening statement and read her full written statement) contended:

  • My last point is about the past five years of social media shaping our public discourse. Social media provides a different opportunity for the enemies of democracy to sow chaos and plan violent attacks. It’s fourth generation warfare, where it is difficult to tell the difference between citizens and combatants. The reason why Russia impersonated US social movements in 2016 was expressly because movements elicit lots of engagement, where participants see sharing content and network-making as political acts. That kind of political participation was challenging for city governance during the 2011 Occupy Movement, but that moment—a decade ago—should have taught Facebook, YouTube, and Twitter more about the range of effects their products could have on society. Now we see these products used by authoritarians who leverage a mix of authentic political participation paired with false accounts and fake engagement to win elections.
  • Cobbled together across products, our new media ecosystem is the networked terrain for a hybrid information war that ultimately enables dangerous groups to organize violent events—like the nationalists, militias, white supremacists, conspiracists, anti-vaccination groups, and others who collaborated under the banner of Stop The Steal in order to breach the Capitol. Last week, a Buzzfeed article included a leaked internal Facebook memo on the exponential growth of “Stop the Steal” groups on their platform. The report clearly illustrated that groups espousing violent and hateful content can grow very fast across the product. Even when Facebook removes groups, it does not stop the individuals running them from trying again. Adaptation by media manipulators is a core focus of our research at the Shorenstein Center. Facebook found that their own tools allowed Stop the Steal organizers to leverage openness and scale to grow faster than Facebook’s own internal teams could counter.
  • In short, even when aware of the risks of their product to democracy, Facebook’s interventions do little to contain exposure of misinformation-at-scale to the general public. When determined to stop the spread of misinformation, Facebook could not counter it with their internal policies. Misinformation-at-scale is a feature of Facebook’s own design and is not easily rooted out. Because Facebook defines the problem of misinformation-at-scale as one of coordinated inauthentic behavior, they were woefully unprepared to handle the threats posed by their own products. They were unprepared in 2016 and have since then been unable to handle the new ways that motivated misinformers use their products.
  • What began in 2016 with false accounts and fake engagement inflaming and amplifying societal wedge issues slowly transformed over time into a coordinated attack on US democracy and public health. The biggest problem facing our nation is misinformation-at-scale, where technology companies must put community safety and privacy at the core of their business model, ensure that advertising technology is utilized responsibly, and quickly act on groups coordinating disinformation, hate, harassment, and incitement across the media ecosystem. A problem this big will require Federal oversight.
  • But I am hopeful that another future is possible, if tech companies, regulators, researchers, and advocacy groups begin to work together to build a public interest internet modeled on the principle that the public has a right to access accurate information on demand. The cost of doing nothing is democracy’s end.


