Another Process-Focused Online Content Bill

Subscribe to my newsletter, The Wavelength, if you want the content on my blog delivered to your inbox four times a week.

It is my current position that if the United States (U.S.) Congress and the President are to reach agreement on legislation to revamp how the U.S. regulates online content, it will be through requiring greater transparency and regularizing the rights users have vis-à-vis the platforms. The “Platform Accountability and Consumer Transparency (PACT) Act” (S.797), a bill recently reintroduced by Senator Brian Schatz (D-HI) and Senate Minority Whip John Thune (R-SD), takes the kind of process-based approach that may make the grade. Instead of targeting certain content, the PACT Act targets all illegal content and lays out the process platforms must use to weigh complaints about content that may violate terms of service. (See here for more detail and analysis.)

Thune, Schatz, and a few other cosponsors have reintroduced a different process-focused bill that sums up its purpose in its title: the “Filter Bubble Transparency Act” (S.2024). They asserted this bill “would require large-scale internet platforms that collect data from more than 1 million users and gross more than $50 million per year to provide greater transparency to consumers and allow users to view content that has not been curated as a result of a secret algorithm.” Clearly, they are seeking to address the problem posed by algorithms that feed people content confirming and reinforcing their worldview. And so, if their bill were to work as advertised, a flat-earther would not be fed only material confirming their thesis about Earth. Likewise for those who think the U.S. faked the Apollo Moon landings, and so on.

As mentioned, Thune and Schatz have assembled a group of cosponsors from across the ideological spectrum in Congress, which suggests such an approach may prove palatable to stakeholders. Senators Richard Blumenthal (D-CT), Jerry Moran (R-KS), Marsha Blackburn (R-TN), and Mark Warner (D-VA) joined them in cosponsoring this bill. All but Schatz cosponsored the bill introduced last October, legislation that went nowhere given it was an election year and there was a general logjam on any wide-gauge technology or privacy legislation. And there may still be one. And so, despite this bill’s bipartisan pedigree and relatively modest scope, it may also fall victim to the impasse over U.S. national privacy legislation and what to do about 47 U.S.C. 230 and social media content moderation.

As always, let’s start with the definitions. There are three classes of entities to which the bill will pertain: large internet platforms and two types of search engine providers.

The largest internet platforms will be regulated under the legislation. A covered internet platform is:

any public-facing website, internet application, or mobile application, including a social network site, video sharing service, search engine, or content aggregation service.

But such a definition is too broad for the sponsors of the bill, and there is language making clear that it shall not apply to smaller and emerging companies. Hence, only platforms like Twitter, Facebook, TikTok, Instagram, and Google would fall under this definition. The definition excludes any platform that:

(i) is wholly owned, controlled, and operated by a person that—

(I) for the most recent 6-month period, did not employ more than 500 employees;

(II) for the most recent 3-year period, averaged less than $50,000,000 in annual gross receipts; and

(III) collects or processes on an annual basis the personal data of less than 1,000,000 individuals; or

(ii) is operated for the sole purpose of conducting research that is not made for profit either directly or indirectly.

So it would seem under this definition that smaller or new internet platforms would not qualify to be regulated under the bill. However, any platform with ambitions of massive growth would be on notice about how it would need to offer its services to the public in order to minimize encouraging or pushing people into filter bubbles, which is the whole point of the bill.

It bears stressing that covered internet platforms include more than social media companies. The definition also includes “any public-facing website, internet application, or mobile application.” There is a clause specifying that the definition includes any “social network site, video sharing service, search engine, or content aggregation service,” but this clause does not limit the term only to such entities. Presumably, any large internet platform would qualify. Hence, Amazon might find itself enmeshed in these requirements for the algorithms its platform uses to suggest items for people to buy. Might this definition be so broad as to also pull in large media sites like the New York Times, CNN, Fox, and others? On its face, this definition sweeps widely, and large platforms might qualify if they do not fall below the employee, revenue, and data processing thresholds (500 employees, $50 million, and 1,000,000 individuals, respectively).
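To make those thresholds concrete, the short sketch below expresses the exclusion test in Python. It is purely illustrative; the field and function names are my own invention, and the bill itself speaks of a 6-month employee count, a 3-year average of gross receipts, and annual data-processing volume rather than anything so tidy.

from dataclasses import dataclass

@dataclass
class Platform:
    max_employees_last_6_months: int          # headcount over the most recent 6-month period
    avg_annual_gross_receipts_3_years: float  # averaged over the most recent 3-year period
    individuals_processed_per_year: int       # people whose personal data is collected or processed annually
    research_only_nonprofit: bool             # operated solely for not-for-profit research

def is_covered_platform(p: Platform) -> bool:
    # A platform escapes the definition only if it stays under all three
    # thresholds, or exists solely for not-for-profit research.
    small_entity = (
        p.max_employees_last_6_months <= 500
        and p.avg_annual_gross_receipts_3_years < 50_000_000
        and p.individuals_processed_per_year < 1_000_000
    )
    return not (small_entity or p.research_only_nonprofit)

# Example: 200 employees, $10 million in receipts, 500,000 users -> not covered.
print(is_covered_platform(Platform(200, 10_000_000, 500_000, False)))  # False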

There are also definitions of upstream and downstream providers, the two other classes of companies the Filter Bubble Transparency Act would regulate, which will be relevant in the section of the bill that governs search results. An upstream provider is an entity that allows a downstream provider to access “an index of web pages” under a contract (termed a “search syndication contract” in the bill). The downstream provider is the entity on the other side of this contract that is presumably providing people with an interface they can use for searches. Given the common practice of search engines contracting with other search engines to expand the universe of results, it appears the sponsors want to be comprehensive in their approach to limiting filter bubbles.

Turning to what the bill does: one year after enactment, covered internet platforms could not use opaque algorithms unless people are provided notice about the type of algorithm the platform uses and the platform also makes available an input-transparent algorithm to which a person may easily switch. So, for Twitter users, think of the Sparkle Icon at the top right of your screen that you can tap to see tweets in chronological order instead of as promoted by Twitter’s algorithms. Nonetheless, spending some time on the definitions of opaque and input-transparent algorithms is worthwhile, especially since there is a data privacy angle the sponsors may be trying to camouflage.

An input-transparent algorithm is

an algorithmic ranking system that does not use the user-specific data of a user to determine the order or manner that information is furnished to such user on a covered internet platform, unless the user-specific data is expressly provided to the platform by the user for such purpose.

The Filter Bubble Transparency Act seems intended to drive large internet platforms toward offering users a “better” product, one in which artificial intelligence-driven algorithms are not suggesting content people would not normally seek out or find. Of course, operating in the background of such a policy goal is the increasingly partisan nature of the U.S., split between the two parties, a phenomenon blamed in part on people living in filter bubbles. There have also been claims that the algorithms of companies like Facebook are driving extremism in the U.S., predominantly through the promotion of more engaging content that allegedly pushes people toward white nationalist beliefs. And so, the bill seems to prod companies into providing an unfiltered experience on their platforms. Otherwise, under the opaque algorithm scenario, a company would have to give notice that the content it is providing is based on information harvested from the user. And before I examine the opaque algorithm definition, we need to look more closely at what is defined as “user-specific data” and what is not.

The bill qualifies “user-specific data” as being “provided by a user for the express purpose of determining the order or manner that information is furnished to a user on a covered internet platform.” This clause suggests that data a user shares not for the express purpose of determining how information shall be furnished is outside the definition, and large platforms may collect and use such data as they will. That issue aside for the moment, the definition gets very specific as to the type of data that will and will not qualify. The term includes:

user-supplied search terms, filters, speech patterns (if provided for the purpose of enabling the platform to accept spoken input or selecting the language in which the user interacts with the platform), saved preferences, and the user’s current geographical location;

data supplied to the platform by the user that expresses the user’s desire that information be furnished to them, such as the social media profiles the user follows, the video channels the user subscribes to, or other sources of content on the platform the user follows;

However, the term “user-specific data” does not include:

the history of the user’s connected device, including the user’s history of web searches and browsing, geographical locations, physical activity, device interaction, and financial transactions; and…inferences about the user or the user’s connected device, without regard to whether such inferences are based on data described in clause (i) (i.e., the above description of what is user-specific data).

Thus, from my vantage, there is a lot of sensitive user data that falls outside the definition, meaning that large internet platforms can use these data even when employing an input-transparent algorithm. All of the past locations of, and the web browsing history on, a device a person uses to connect to a large internet platform would be fair game.
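As a rough illustration of where that line falls, the groupings below restate the included and excluded categories in code. The labels are my own shorthand, not statutory terms, and the comments reflect my reading of the bill rather than anything its text states outright.

# Signals the bill treats as "user-specific data" -- data the user expressly
# provides for ranking purposes, which an input-transparent algorithm may use.
USER_SPECIFIC_DATA = {
    "search_terms",
    "filters",
    "speech_patterns",   # only if provided to enable spoken input or language selection
    "saved_preferences",
    "current_location",
    "followed_profiles",
    "subscribed_channels",
}

# Signals the bill carves out of the definition. On my reading, the restriction
# simply does not reach these, so an input-transparent algorithm could still
# draw on them.
OUTSIDE_THE_DEFINITION = {
    "web_search_history",
    "browsing_history",
    "past_geographical_locations",
    "physical_activity",
    "device_interaction",
    "financial_transactions",
    "inferences_about_the_user",
}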

Now the time has come to consider an opaque algorithm, which is defined as:

an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.

And so, platforms using opaque algorithms could indeed use all data outside those defined as user-specific data. As a reminder, under the ban on a large online platform’s use of opaque algorithms, the platform merely needs to provide notice to a person “that the platform uses an opaque algorithm that makes inferences based on user-specific data to select the content the user sees.” Additionally, this need be only a one-time notice, and people using these services would not be greeted with it every time they log on. However, platforms would need to have a prominently placed icon users could see and select to get to the input-transparent algorithm version. As mentioned, large internet platforms using an opaque algorithm will need to also offer an input-transparent version a person may use instead.

But once notice is provided and ignored or not acted upon, a person on a large internet platform would be subject to content provided per an opaque algorithm that uses all the user-specific data it cares to collect from the user and elsewhere.
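The user-facing mechanics, then, reduce to a one-time notice, a prominently placed toggle, and two ranking paths. The sketch below is a minimal illustration of that flow, not anything the bill prescribes in code; all of the names are hypothetical, and the engagement score stands in for whatever a real opaque algorithm optimizes.

from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    posted_at: float          # timestamp
    engagement_score: float   # stand-in for an opaque, personalized ranking signal

@dataclass
class User:
    seen_opaque_notice: bool = False         # one-time notice flag
    prefers_input_transparent: bool = False  # set when the user selects the toggle icon

def rank_feed(user: User, items: List[Item]) -> List[Item]:
    if not user.seen_opaque_notice:
        print("Notice: this platform uses an opaque algorithm that makes "
              "inferences based on user-specific data to select the content you see.")
        user.seen_opaque_notice = True       # shown once, not at every log-in

    if user.prefers_input_transparent:
        # Input-transparent path: no inferred or harvested signals,
        # e.g. simple reverse-chronological order.
        return sorted(items, key=lambda i: i.posted_at, reverse=True)

    # Default path once notice has been given: the opaque, personalized ranking.
    return sorted(items, key=lambda i: i.engagement_score, reverse=True)

The point of the sketch is simply that the opaque algorithm remains the default; nothing changes for the person who ignores the notice and never touches the toggle.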

On this subject, the bill does not address the thriving data sharing and brokering world in which virtually everyone’s personal data are spread far and wide, for the proscription on how one’s data may be used, meant to limit the fostering of filter bubbles, extends only to some of the data a person provides. To reiterate, large internet platforms would seem to be free to collect and process data gathered from other sources and continue using opaque algorithms.

As mentioned, there are similar algorithmic legal responsibilities for upstream providers of search engine services. Any contract between an upstream provider and downstream provider must include language requiring the former to give the latter the same input-transparent algorithms the former uses to meet its new legal responsibilities. Additionally, any upstream provider cannot

impose any additional costs, degraded quality, reduced speed, or other constraint on the functioning of such algorithm when used by the downstream provider to operate an internet search engine relative to the performance of such algorithm when used by the upstream provider to operate an internet search engine.

Apparently, the sponsors of the bill have concerns that upstream providers may use the occasion of providing the input-transparent algorithm to deliver inferior service as a means of hobbling competition from the downstream provider. And, speaking of this latter class of entities, downstream providers with fewer than 1,000 employees need not offer notice about the use of opaque algorithms, permitting them to use such algorithms without informing people. While compliance costs for smaller companies can be an issue, the notice seems like an easy lift even for small downstream providers.
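The syndication requirement can be pictured as an interface obligation: whatever input-transparent ranking the upstream provider runs for its own engine must be available, undegraded, to the downstream provider. The sketch below is only a schematic of that relationship; the class and method names are invented for illustration.

from typing import List, Protocol

class UpstreamIndex(Protocol):
    def input_transparent_results(self, query: str) -> List[str]:
        # Same algorithm, same quality and speed, whether invoked by the
        # upstream provider's own engine or by a downstream provider.
        ...

class DownstreamEngine:
    def __init__(self, upstream: UpstreamIndex, employee_count: int):
        self.upstream = upstream
        # Downstream providers with fewer than 1,000 employees are exempt
        # from the opaque-algorithm notice requirement.
        self.exempt_from_notice = employee_count < 1000

    def search(self, query: str) -> List[str]:
        # Pass the query through to the syndicated, input-transparent index.
        return self.upstream.input_transparent_results(query)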

The Federal Trade Commission (FTC) would enforce the new requirements and be allowed to treat violations as violations “of a rule defining an unfair or deceptive act or practice prescribed under section 18(a)(1)(B) of the Federal Trade Commission Act.” This authority allows the FTC to seek civil fines of more than $43,000 per violation, even for first offenses. Of course, the FTC has resource issues and cannot currently enforce the full breadth of the FTC Act, giving rise to concerns that the Filter Bubble Transparency Act would go largely unenforced.

In the final analysis, this is a modestly scoped bill that would provide a bit more in terms of consumer protection and rights through what is essentially an enhanced notice and consent regime. People using Facebook or Twitter would be able to opt into a less personalized, less filtered online experience, but only if they choose to do so. This bill may be a product of the art of the possible given the deeply entrenched lines on online content moderation law and policy. And, as such, this sort of legislation may stand the best chance of enactment in response to the many criticisms leveled against online platforms and the harms they wreak on users.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Drew Beamer on Unsplash
