Senate Judiciary Hearing On Google

A Senate subcommittee examines possibly anticompetitive conduct by Google in the adtech market.

The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release announcing the hearing, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:

Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Chair Mike Lee (R-UT) said the focus of the hearing is Google’s online advertising business and whether it is a monopolist or has engaged in any conduct that harms competition and consumers. He said he would discuss antitrust policy more broadly before discussing Google. Lee remarked he has served on the subcommittee for nine years, six of them as chair, and during this period antitrust policy has evolved and a gulf has widened between the two sides of the issue. He claimed there are those who would like to see no antitrust laws at all, while others are overly deferential to speculative efficiencies and quick to dismiss actual evidence of competitive harm when it might conflict with unproven economic theories. Lee argued this end of the spectrum fetishizes freedom even when concentrated private power might endanger that freedom. He claimed they forget that markets, like governments, do not keep themselves free, and that liberty is only secure when power is diffused.

Lee said at the other extreme is a movement that has pushed for years an agenda to transform antitrust law from a tool grounded in economic science for protecting and promoting competitive markets into a panacea for all of its adherents’ perceived social ills. He said that, building on the myopic economic premise that big is bad, which is, to them, the beginning and end of the question, some at this end of the spectrum would use antitrust policy to address labor, racial, and income disparities. Lee conceded these may be laudable goals, but they are not problems antitrust law is meant to solve, nor are they goals antitrust law is capable of achieving, at least not without creating a host of other problems. He argued that attempts to repurpose antitrust law into a social justice program would have scores of unintended consequences that would cripple the United States’ (U.S.) economy for generations. He noted there is hypocrisy in thinking big is bad only when applied to corporations and not to government bureaucracy, the type needed to dismantle large companies and regulate them.

Lee said he is on the side of the American people, the law, and vigorous enforcement of antitrust laws that have made the U.S. the most prosperous nation on earth. He asserted that already enacted laws are, for the most part, sufficient to meet the challenges of the day. Lee reiterated the maxim that liberty is only secure when power is diffused, a principle central to the U.S.’ Constitutional Republic. Lee claimed the concept of federalism, perhaps the greatest contribution of the founding generation, is what makes the U.S. unique among all other nations. He stated this principle applies to economic power as it does to political power. Lee contended that antitrust laws may be properly described as federalism for the economy.

Lee said the hearing is focused on what may prove to be the seminal antitrust case of the 21st century, one that may define the terms of competition and innovation in the U.S.’ dynamic economy for years and decades to come. He said unlike some of his House colleagues, he has no interest in staging a political spectacle to attack, condescend to, and talk over witnesses. Lee remarked that, naïve though it may be in 2020, his hope is that by looking at this specific question, the subcommittee can have a serious and frank conversation about the state of competition in digital markets. He declared that online advertising is an incredibly complex business, one that touches every single person on the internet.

Lee explained the technologies that connect publishers and advertisers have evolved rapidly over the last decade, and the expansion of online advertising has facilitated an explosion of online content by allowing even the smallest website owner to monetize the content they produce. He said small and local businesses have also benefitted from being able to quickly and easily promote their businesses without any of the capital investments that would have been required just a few decades ago. Lee admitted that at the same time, this growth and expansion has been largely consolidated onto a single platform, Google’s online ad business. He said that as that business has grown, so, too, have complaints that Google’s operation of both the ad selling and ad buying platforms, while also selling its own inventory through those platforms, has given rise to conflicts of interest, along with claims it has rigged online ad auction technology to favor its own interests and protect its own market share. Lee said whether this is true or not matters because so many businesses depend upon digital advertising to market their products or to monetize the content they produce. Web users in turn benefit from free online content and from being connected to relevant businesses in a way that helps them make optimal decisions. Lee said, simply put, markets function better when businesses thrive and consumers are informed. He asserted that, ideally, online advertising helps accomplish this, but if, on the other hand, online advertising has been monopolized and constrained by opaque pricing and exclusionary conditions, everyone loses to that degree. Lee added that Google and other big tech companies have been accused of other bad acts unrelated to antitrust or competition, and he said he has repeatedly expressed his concern about anti-conservative bias by these firms. He pledged to continue to pursue these concerns but added that while his concerns about anti-conservative bias may have implications for antitrust issues like market power, today’s hearing is not fundamentally about those concerns.

Ranking Member Amy Klobuchar (D-MN) explained:

  • We are not having this hearing because Google is successful. Google is successful. I just used it on my way here. Or because Google is big. That’s not why, from my perspective, we’re having this hearing. We are having it because even successful companies, even popular companies, and even innovative companies are subject to the laws of this country including our antitrust laws.
  • We are all successful when we make sure that our economy is strong and our economy is working better. But the law can’t be blinded by Google’s success or its past innovations if the company in its zeal to achieve greater success crosses a line into anticompetitive behavior. It’s our job to regulate it. It’s that simple. So we’re going to touch on issues, I hope, today of competition, technological innovation, the use of personal data. These are some of the defining issues, as the chair has said, defining issues of our time and I personally think, as we go into the months to come, this won’t just be about Google. This isn’t even just about the tech industry as much as I believe we need to change our laws and look at monopsonies and look at changing the burdens and making it so that our laws are as sophisticated as the companies that now occupy our economy.

Klobuchar asserted:

  • I think we need to do all that and I think it should be a huge priority going into the year. But right now as the chairman mentioned, we are focused on this issue today. Our society has never been more dependent on this technology than we are now in the midst of this global pandemic. As I noted, not just Google, the pandemic has forced a bunch of small businesses to close their doors and the five largest tech companies continue to thrive to the point where they briefly accounted for nearly 25% of the value of the entire S&P 500 stock index just a few weeks ago.
  • Again, I don’t quarrel with their success, but we have to start looking at do our laws really match that situation. And even if the original intent when these companies started as start-ups was to be innovative, which they’ve been, at what point do you cross the line so you squelch innovation and competition from other companies? We start with this, the ownership and use of data.
  • The powerful companies that provide us with these technologies are also collecting personal information. We know that. They know who our friends are, they know the books we read, where we live, whether we’ve graduated from college, income levels, race, how many steps we took yesterday. The chairman and I share an interest in this. How long we’ve stayed where we are. Machine learning analyzes troves of personal data, allowing these firms to discern even more sensitive information about us, our medical conditions, political, religious views and even preferences that we don’t even know we have. And why would companies do all of this? Well, put simply, to target us with digital advertisements. There’s really no other reason. It is a capitalist society. That’s what they do.

Klobuchar stated:

  • Now, Google makes more money doing that than any company in the world, hands down, by leveraging its unmatched access to consumer data gained through its existing dominance in online and mobile search, mobile operating systems, Android, email, Gmail, online and mobile video, YouTube, browsers, Chrome, mobile mapping apps, Google maps and ad technology.
  • So, this ad technology ecosystem, known as the ad tech stack, consists of advertisers on one side and publishers on the other. So let’s look at these two sides. On the advertising side Google controls access to the huge number of advertisers that place ads on Google search which is nearly 90% of the search market and has unparalleled access to data as I described. On the publisher side, Google has privileged access to ad data to inform its bidding strategies. And then it also effectively controls the process, the ad auction process, that gets an advertiser’s ad to be put on a publisher’s site. Google dominates all the markets for services on both sides of the ad tech stack, the publisher side and the advertising side, and I hope that will be a lot of our focus today. Research has suggested that Google may be taking between 30 and 70 percent of every advertising dollar spent by advertisers using its services, depriving publishers of that revenue. Who are the publishers? They’re content producers, outlets like the Minneapolis Star Tribune, and they depend on that revenue to get by, as so many of our content producers and news producers do.
  • And to me, given that my dad was a journalist, this is one of the key elements here because if you have unfairness in how that ad ecosystem is working, then you’re depriving these news organizations, at a time when the First Amendment is already under assault, of the revenue that they need to keep going. So whether it’s happening, and we don’t know all of the details at the Department of Justice right now, this could be the beginning of a reckoning for our antitrust laws to start looking at how we’re going to grapple with the new kinds of markets that we see across the country. It would help answer the question whether our federal antitrust laws are able to restrain the business conduct of even the largest, most successful companies in the world. When you think of the breakup of AT&T, that was our last big thing that happened in the antitrust area. Really big thing. What did that lead to? Lower prices, more competition. It really worked. But we’re not able to do this right now.
  • And my hope is that we’re getting a start at the Justice Department, that things are going on at the FTC. But to really do that, they’re going to need resources to take on the legions of lawyers at the companies and that’s my first goal. What can we do for enforcement? My second, what do we have to do to make the laws work better, to look at some of the deals that have already been made? The third is what are the remedies? Do they make a difference in changing the behavior and allowing competition? I literally don’t have personal grudges against these companies like sometimes the president has expressed about various companies. I don’t. I just want our capitalist system to work. I want it to work. And to have it work you simply can’t have one company dominating areas of an industry. Our Founding Fathers started this country in part because they were rebelling against monopoly power.

Google Global Partnerships and Corporate Development President Donald Harrison stated:

  • Online advertising prices in the U.S. have fallen more than 40% since 2010. According to the Progressive Policy Institute, “for every $3 that an advertiser spends on digital advertising, they would have to spend $5 on print advertising to get the same impact.” As a result, the share of U.S. GDP going to advertising in media has declined roughly 25% in recent years. The benefits of these lower prices flow directly to American businesses and consumers.
  • We help businesses grow from advertising on (1) our own sites, and (2) other publishers’ sites.
    • Advertising on Google sites and apps
    • A wide range of businesses, including many small firms, advertise on our sites and apps like Google Search and YouTube. That’s where we earn the majority of our advertising revenue.
    • We show no ads — and make no money — on the vast majority of searches. We show ads only on a small fraction of searches, typically those with commercial intent, such as searches for “sneakers” or “toaster.” We face intense competition for these types of searches. An estimated 55 percent of Americans start product searches on Amazon, not Google. And many online shoppers use Walmart, eBay, and other sites. For travel searches, many go to Expedia, Kayak, Orbitz, and TripAdvisor. Facebook, Bing, Twitter, Snap, Pinterest, and many more compete with us for a range of commercial advertisements.
    • Advertising on non-Google sites and apps
    • In addition to ads on our own properties, Google also helps businesses advertise on a wide range of other websites and mobile applications, known as “publishers.” We offer technology that (1) helps advertisers buy ad space — known as the “buy side,” and (2) helps publishers sell their ad space — known as the “sell side.” This technology is often referred to as “ad tech.”
    • The ad tech portion of our business accounts for a small fraction of our advertising revenue. And we share the majority of that revenue with publishers. Publishers get paid for every impression — each time an ad is viewed — even if the ad is never clicked. Of the revenue we retain, a large portion goes to defray the costs of running this complex and evolving business.
  • A crowded and competitive ad tech ecosystem
    • The ad tech space is crowded and competitive. Thousands of companies, large and small, work together and in competition with each other, each with different specialties and technologies. We compete with Adobe, Amazon, AT&T, Comcast, Facebook, News Corporation, Oracle, and Verizon, as well as leaders like Index Exchange, Magnite, MediaMath, OpenX, The Trade Desk, and many more.
  • Google shares billions of dollars with publishers, more than the industry average.
    • Even as online ad prices and ad tech fees have fallen, benefiting businesses and consumers, Google has helped publishers make more money from ads. In 2018, we paid more than $14 billion to the publishing partners in our ad network — up from $10 billion in 2015.
    • In 2019, when both advertisers and publishers used our tools, publishers kept over 69 percent of the ad revenue — more than the industry average. And when publishers use our tools to sell directly to advertisers, they keep even more of the revenue.
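To make these fee and revenue-share figures concrete, the short sketch below shows how ad tech fees compound across the stack. It is a simplified illustration: the individual fee rates are hypothetical placeholders, since the figures above supply only combined outcomes (critics’ 30 to 70 percent estimate of the total take; Google’s statement that publishers keep over 69 percent when both sides use its tools).

    # A minimal sketch of how ad tech fees compound across the "stack."
    # The fee rates below are hypothetical placeholders, not Google's actual
    # rates; only the combined outcome (publishers keeping roughly 69-70% of
    # ad spend) is drawn from the figures cited above.

    def publisher_share(ad_spend: float, buy_side_fee: float, sell_side_fee: float) -> float:
        """Return what a publisher receives after each intermediary takes its cut."""
        after_buy_side = ad_spend * (1 - buy_side_fee)          # fee taken on the advertiser's side
        after_sell_side = after_buy_side * (1 - sell_side_fee)  # fee taken on the publisher's side
        return after_sell_side

    # Example: $1.00 of advertiser spend with hypothetical 15% and 18% fees
    payout = publisher_share(1.00, buy_side_fee=0.15, sell_side_fee=0.18)
    print(f"Publisher receives ${payout:.2f} of each $1.00 spent")  # ~$0.70, a 69.7% share

The arithmetic illustrates why control of multiple layers matters: even modest fees at each layer compound before the publisher is paid.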

Chalice Custom Algorithms Chief Executive Officer Adam Heimlich contended:

  • In 2016, Google combined search and display data, breaking a promise made to American regulators. Google also broke the industry’s privacy standard by linking consumers’ names, from Gmail, to the ID numbers assigned to browsers for exchange transactions.
  • Continuously, from 2016, Google came up with new ways to pollute the exchange ecosystem they’d previously seemed to embrace. Pollution came in the form of restrictions and exclusions that made the open web less efficient for buyers and sellers.
  • Google took YouTube, Google’s most valuable display property, off the exchanges, while making it available through an exclusive “pipe” from Google’s exchange bidder. Google excluded data providers from its websites and measurement partners from its platforms. Google’s selling platform denied publishers’ demand for a unified, exchange-vs-exchange auction. To keep publishers from getting rid of Google’s software, Google funnels exclusive display demand from its search platform through it. Google weaponized new privacy laws to restrict advertisers’ and publishers’ access to their own ad data in Google tools.
  • Google tightened ties among its products until the shady broker was no longer one among a set of competitors: Google became the only display company not hobbled by the exclusions and restrictions it’d placed on everyone else. The power to interoperate among buy-side, sell-side and measurement software went from being a feature of the exchange ecosystem to a capability exclusive to Google.
  • Now, progress on innovation is squeezed to the margins of the industry, and new adtech is rare. The majority of advertisers have stagnated or regressed.
  • There’s more at stake than most people realize. The more efficient the ad market, the more likely it is that superior new products will find customers and thrive. When the ad exchanges function properly, the size advantage from flooding the airwaves is offset by quieter voices speaking directly to whoever’s most open to any given improvement. It tilts the incentives of every business toward innovation.
  • Google is dominating display by breaking interoperability and subtracting the efficiencies of a symmetrical market. Pre-2016, under intense competitive pressure, ad exchanges were becoming more transparent and privacy-respectful as the ecosystem grew. Google could have coped with these developments without using its market power destructively: There was nothing to stop Google from exiting the arena or competing within its open standards. Whether or not Google competes with other big tech firms is irrelevant to the harms they’ve caused publishers, measurement companies, platforms and small businesses like mine in the ~$50B open web display market.
  • It was efficient when publishers, platforms, measurement tools and service providers all interoperated. Innovators of a great new product or service could access a global marketplace of thousands of buyers and sellers quickly at low cost. Small businesses with great ideas had a shorter ramp to success.
  • Now, funding for new adtech startups has been drying up and the pace of innovation has slowed. The number-one concern I hear from potential investors is Google’s domination of the market my company operates in. For years, they’ve been breaking existing efficiencies and preventing the development of new ones.
  • Many expect Google to successfully mislead regulators about its conduct in the open web, and its harmful effects. I’m grateful for the opportunity to help scrutinize Google’s claims. For the sake of competition, the innovation competition drives and the benefits innovation brings, Google should be forced to either exit the ad exchange market or compete within its open standards.

Omidyar Network Beneficial Technology Senior Advisor David Dinielli stated:

  • [U]nder current law, there is a strong case to be made that Google has illegally monopolized, or illegally maintained a monopoly in, the market for digital advertising on what is termed the “open web,” i.e., advertising that appears on websites as users traverse the internet.
  • Through a variety of conduct described herein, Google now occupies every layer of the “ad tech stack”—a term that describes the various functions that serve to match website publishers with the advertisers who seek to deliver targeted ads to consumers who are viewing those websites. In antitrust parlance, website publishers provide the “supply” of ad space, and advertisers create the “demand” for that space. The market for this sort of advertising is unique and appears on its face dysfunctional from an antitrust standpoint: Google—through its various ad tech tools—represents both the suppliers and the purchasers and also conducts the real-time auctions that match buyers and sellers and determine the price. Moreover, Google appears to have engaged in a multitude of anti-competitive acts, such as making the ad space on YouTube (which it owns) available exclusively through its own ad tech tools, that were designed to cement its lock on this market and exclude competitors. As my co-author and I said in a recent paper about the digital advertising market, “all roads lead through Google.”
  • Google has asserted that the digital advertising market is vibrant and competitive, and that publishers and advertisers have many options in buying and selling advertising space. Of course, it is not surprising that there are some other actors in this market, given the significant profits to be made. But a recent report from the United Kingdom’s Competition and Markets Authority (“CMA”) explained, based on an extensive factual investigation, that Google holds a dominant position—as high as 90%—in every layer of the ad tech stack. Moreover, a monopolization case in the U.S. does not require proof that the alleged monopolist hold 100% of a particular market—which would make it literally a monopolist—but rather that it has “monopoly power” and that it has engaged in anticompetitive conduct to obtain or maintain that power rather than competing on the merits. Google’s conduct as described herein surely fits that standard.
  • Digital advertising is complex and the tools and processes that allow for near-instantaneous placement of ads every time we open a web page can seem opaque. But the consequences of unchecked power in this market are significant. If advertisers are paying higher prices than would obtain in a well-functioning market, economic theory teaches that those higher advertising prices will be passed down to consumers in the form of increased prices for goods and services. If website publishers, such as local news outlets, are being paid less than they should for their supply of advertising space, they will invest less in content creation and news gathering. Google is the winner and the rest of us are the losers. This committee therefore is right in investigating whether current antitrust law is up to the task of ensuring competition in digital advertising and in exploring possible legislative fixes if it is not.
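For readers unfamiliar with the auction mechanics Dinielli describes, the sketch below shows a generic sealed-bid second-price auction, a format long used by programmatic ad exchanges. It is purely illustrative, not a representation of Google’s actual systems (which, among other changes, moved to a first-price format in 2019).

    # A generic sketch of a sealed-bid second-price ad auction, the format long
    # associated with programmatic exchanges. Purely illustrative; it does not
    # represent Google's actual auction logic.

    def run_auction(bids: dict, floor_price: float = 0.0):
        """Return (winner, clearing_price), or None if no bid meets the floor."""
        eligible = sorted(
            ((advertiser, bid) for advertiser, bid in bids.items() if bid >= floor_price),
            key=lambda pair: pair[1],
            reverse=True,
        )
        if not eligible:
            return None
        winner = eligible[0][0]
        # The winner pays the second-highest bid (or the floor), not its own bid.
        clearing_price = eligible[1][1] if len(eligible) > 1 else floor_price
        return winner, clearing_price

    bids = {"AdvertiserA": 2.50, "AdvertiserB": 1.80, "AdvertiserC": 0.90}
    print(run_auction(bids, floor_price=1.00))  # ('AdvertiserA', 1.8)

The alleged conflict of interest arises because, in the market described above, one firm can simultaneously submit bids, run the auction, and represent the publisher setting the floor, giving it visibility into, and influence over, every input to this calculation.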

NetChoice Vice President and General Counsel Carl Szabo stated:

  • Among the many Google products and services that consumers love are Google Search, YouTube, Gmail, and Google Drive—all amazingly useful, and all free. To many critics of “Big Tech,” however, when consumers enthusiastically choose these free-of-charge products, it amounts to proof that something must be wrong. Every successful new service or product that proves a winner with consumers is deemed by these critics to be just another antitrust violation.
  • But Google’s greatest successes are being won in markets with the greatest competition. In the digital ads market, for example, Google faces fierce competitive pressure. You would never know that listening to the critics.
  • For starters, Google is no monopoly. It’s wildly popular with consumers, yes. And true, it’s also very popular with investors. But the company faces competition from all corners, including from other tech platforms such as Facebook and Amazon (which are simultaneously and thus illogically also dubbed monopolies).
  • Far from being evidence of any unlawful conduct, Google’s success under these conditions offers abundant proof that it is meeting and exceeding the fundamental test that has been the bedrock of antitrust law for the last four decades: are consumers benefitting? There can be little doubt on this point, for Google’s users vote daily with their choices. In order to dismiss this as irrelevant, the critics are now arguing that antitrust enforcement should simply abandon the consumer welfare standard, enabling them to attack “bigness” per se. This would undermine the very purpose of antitrust law since its inception more than a century ago.


Pending Legislation In U.S. Congress, Part IV

There is an even chance that Congress further narrows the Section 230 liability shield given criticism of how tech companies have wielded this language.

This year, Congress increased its focus on Section 230 of the Communications Act of 1934 that gives companies like Facebook, Twitter, Google, and others blanket immunity from litigation based on the content others post. Additionally, these platforms cannot be sued for “good faith” actions to take down or restrict material considered “to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many Republicans are claiming both that these platforms are biased against conservative content (a claim not borne out by the evidence we have) and are not doing enough to find and remove material that exploits children. Many Democrats are arguing the platforms are not doing enough to remove right wing hate speech and agree, in some part, regarding material that exploits children.

Working in the background of any possible legislation to narrow Section 230 is an executive order issued by the President directing two agencies to investigate “online censorship” even though the Supreme Court of the United States has long held that a person or entity does not have First Amendment rights vis-à-vis private entities. Finally, the debate over encryption is also edging its way into Section 230 by a variety of means, as the Trump Administration, especially the United States Department of Justice (DOJ), has been pressuring tech companies to address end-to-end encryption on devices and apps. One means of pressure is threatening to remove Section 230 liability protection to garner compliance on encryption issues.

In late July, the Senate Judiciary Committee met, amended, and reported out the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398), a bill that would change 47 USC 230 by narrowing the liability shield and potentially exposing online platforms to criminal and civil liability for having child sexual abuse materials on their platforms. The bill as introduced in March was changed significantly when a manager’s amendment was released and then further changed at the markup. The Committee reported out the bill unanimously, sending it to the full Senate, perhaps signaling the breadth of support for the legislation. It is possible this could come before the full Senate this year. If passed, the EARN IT Act of 2020 would be the second piece of legislation in the last two years to change Section 230, after enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164). There is, at present, no House companion bill.

In advance of the markup, two of the sponsors, Judiciary Committee Chair Lindsey Graham (R-SC) and Senator Richard Blumenthal (D-CT) released a manager’s amendment to the EARN IT Act. The bill would still establish a National Commission on Online Child Sexual Exploitation Prevention (Commission) that would design and recommend voluntary “best practices” applicable to technology companies such as Google, Facebook, and many others to address “the online sexual exploitation of children.”

Moreover, instead of creating a process under which the DOJ, Department of Homeland Security (DHS), and the Federal Trade Commission (FTC) would accept or reject these standards, as in the original bill, the DOJ would merely have to publish them in the Federal Register. Likewise, the language establishing a fast track process for Congress to codify these best practices has been stricken, as have the provisions requiring certain technology companies to certify compliance with the best practices.

Moreover, the revised bill also lacks the original bill’s safe harbor, which would have shielded platforms that followed the Commission’s best practices from lawsuits based on having “child sexual abuse material” on their platforms. Instead of encouraging technology companies to use the best practices in exchange for continuing to enjoy liability protection, the manager’s amendment strikes liability protection under 47 USC 230 for these materials except where a platform is acting as a Good Samaritan in removing them. Consequently, should a Facebook or a Google fail to find and take down these materials in an expeditious fashion, it would face federal and state civil and criminal lawsuits.

However, the Committee adopted an amendment offered by Senator Patrick Leahy (D-VT) that would change 47 USC 230 by making clear that the use of end-to-end encryption does not make providers liable under child sexual exploitation laws for abuse material on their platforms. Specifically, no liability would attach because the provider

  • utilizes full end-to-end encrypted messaging services, device encryption, or other encryption services;
  • does not possess the information necessary to decrypt a communication; or
  • fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.

Moreover, in advance of the first hearing to mark up the EARN IT Act of 2020, key Republican stakeholders released a bill that would require device manufacturers, app developers, and online platforms to decrypt data if a federal court issues a warrant based on probable cause. Critics of the EARN IT Act of 2020 claimed the bill would force big technology companies to choose between weakening encryption and losing their liability protection under Section 230. They likely see this most recent bill as another shot across the bow of technology companies, many of which continue to support and use end-to-end encryption even though the United States government and close allies are pressuring them on the issue. However, unlike the EARN IT Act of 2020, this latest bill does not have any Democratic cosponsors.

Graham and Senators Tom Cotton (R-AR) and Marsha Blackburn (R-TN) introduced the “Lawful Access to Encrypted Data Act” (S.4051) that would require the manufacturers of devices such as smartphones, app makers, and platforms to decrypt a user’s data if a federal court issues a warrant to search a device, app, or operating system.

The assistance covered entities must provide includes:

  • isolating the information authorized to be searched;
  • decrypting or decoding information on the electronic device or remotely stored electronic information that is authorized to be searched, or otherwise providing such information in an intelligible format, unless the independent actions of an unaffiliated entity make it technically impossible to do so; and
  • providing technical support as necessary to ensure effective execution of the warrant for the electronic devices particularly described by the warrant.


The DOJ would be able to issue “assistance capability directives” that would require the recipient to prepare or maintain the ability to aid a law enforcement agency that obtained a warrant that needs technical assistance to access data. Recipients of such orders can file a petition in federal court in Washington, DC to modify or set aside the order on only three grounds: it is illegal, it does not meet the requirements of the new federal regulatory structure, or “it is technically impossible for the person to make any change to the way the hardware, software, or other property of the person behaves in order to comply with the directive.” If a court rules against the recipient of such an order, it must comply, and if any recipient of such an order does not comply, a court may find it in contempt of court, allowing for a range of punishments until the contempt is cured. The bill also amends the “Foreign Intelligence Surveillance Act” (FISA) to require the same decryption and assistance in FISA activities, which mostly involve surveillance of people outside the United States. The bill would focus on those device manufacturers that sell more than 1 million devices and those platforms and apps with more than 1 million users, which obviously encompasses companies like Apple, Facebook, and Google. The bill also tasks the DOJ with conducting a prize competition “to incentivize and encourage research and innovation into solutions providing law enforcement access to encrypted data pursuant to legal process.”

In response to the EARN IT Act, a bicameral group of Democrats released legislation to dramatically increase funding for the United States government to combat the online exploitation of children, offering it as an alternative to a bill critics claim would force technology companies to give way on encryption under pain of losing the Section 230 liability shield. The “Invest in Child Safety Act” (H.R.6752/S.3629) would provide $5 billion in funding outside the appropriations process to bolster current efforts to fight online exploitation and abuse. This bill was introduced roughly two months after the EARN IT Act of 2020, and in their press release, Senators Ron Wyden (D-OR), Kirsten Gillibrand (D-NY), Bob Casey (D-PA), and Sherrod Brown (D-OH) stated:

The Invest in Child Safety Act would direct $5 billion in mandatory funding to investigate and target the pedophiles and abusers who create and share child sexual abuse material online. And it would create a new White House office to coordinate efforts across federal agencies, after DOJ refused to comply with a 2008 law requiring coordination and reporting of those efforts. It also directs substantial new funding for community-based efforts to prevent children from becoming victims in the first place.  

Representatives Anna Eshoo (D-CA), Kathy Castor (D-FL), Ann M. Kuster (D-NH), Eleanor Holmes Norton (D-DC), Alcee L. Hastings (D-FL), and Deb Haaland (D-NM) introduced the companion bill in the House.

The bill would establish in the Executive Office of the President an Office to Enforce and Protect Against Child Sexual Exploitation, headed by a Senate-confirmed Director who would coordinate efforts across the U.S. government to fight child exploitation. Within six months of the appointment of the first Director, he or she would need to submit to Congress “an enforcement and protection strategy” and thereafter send an annual report as well. The DOJ and Federal Bureau of Investigation would receive additional funding to bolster and improve their efforts in this field.

In June, Senator Josh Hawley (R-MO) introduced the “Limiting Section 230 Immunity to Good Samaritans Act” (S.3983), which is cosponsored by Senators Marco Rubio (R-FL), Kelly Loeffler (R-GA), Mike Braun (R-IN), and Tom Cotton (R-AR). The bill would amend the liability shield in 47 U.S.C. 230 to require large social media platforms like Facebook and Twitter to update their terms of service so that they must operate in “good faith” or face litigation, with possible monetary damages, for violating these new terms of service. Hawley’s bill would add a definition of “good faith” to the statute, which echoes one of the recommendations made by the DOJ. In relevant part, the new terms of service would bar so-called “edge providers” from “intentional[] selective enforcement of the terms of service of the interactive computer service, including the intentionally selective enforcement of policies of the provider relating to restricting access to or availability of material.” If such “selective enforcement” were to occur, then edge providers could be sued, but the plaintiffs would have to show the edge provider actually knew it was breaching the terms of service by selectively enforcing its platform rules.

Allegations of such “selective enforcement” arise from claims that conservative material posted on Twitter and Facebook is being targeted in ways that liberal material is not, including being taken down. This claim has been leveled by many Republican stakeholders. And now they are proposing to provide affected people with the right to sue; however, it is not clear whether these Republicans have changed their minds on allowing private rights of action against technology companies as a means of enforcing laws. To date, many Republicans have opposed private rights of action for data breaches or violations of privacy.

In early July, Senator Brian Schatz (D-HI) and Senate Majority Whip John Thune (R-SD) introduced the “Platform Accountability and Consumer Transparency (PACT) Act” (S.4066) that would reform Section 230. Schatz and Thune are offering their bill as an alternative to the EARN IT Act of 2020. Schatz and Thune serve as the Ranking Member and Chair of the Communications, Technology, Innovation and the Internet Subcommittee of the Senate Commerce, Science, and Transportation Committee and are thus key stakeholders on any legislation changing Section 230.

Under the PACT Act, so-called “interactive computer services” (the term of art used in Section 230) would need to draft and publish “acceptable use polic[ies]” that inform users of what content may be posted, provide a breakdown of the process by which the online platform reviews content for compliance with the policy, and spell out the process people may use to report potentially policy-violating content, illegal content, and illegal activity. The PACT Act defines each of the three terms:

  • ‘‘illegal activity’’ means activity conducted by an information content provider that has been determined by a Federal or State court to violate Federal criminal or civil law.
  • ‘‘illegal content’’ means information provided by an information content provider that has been determined by a Federal or State court to violate—
    • Federal criminal or civil law; or
    • State defamation law.
  • “potentially policy-violating content’’ means content that may violate the acceptable use policy of the provider of an interactive computer service.

The first two definitions will pose problems in practice, for if one state court determines content is illegal but another does not, how must an online platform respond to comply with the reformed Section 230? The same would be true of illegal activity. Consequently, online platforms may be forced to monitor content state by state, hardly a practical system and one that would favor incumbents while raising a barrier to entry for new entrants. And are online platforms then, based on differing state or federal court rulings, to allow or take down content depending on where the person posting it lives?

In any event, after receiving notice, online platforms would have 24 hours to remove illegal content or activity, and two weeks to review notices of potentially policy-violating content and determine whether the content actually violates the platform’s policies (these timelines are sketched below). If the online platform decides to take down content because it violated the platform’s policies based on a user complaint, the platform would be required to notify the person that posted the content and allow them an appeal. There would be a different standard for small business providers, requiring them to act on the three categories of information within a reasonable period of time after receiving notice. And telecommunications networks, cloud networks, and other entities would be exempted from this reform to Section 230 altogether.
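As a rough illustration, the notice-and-takedown timelines above can be modeled as a simple lookup. This is a sketch under simplifying assumptions: the category labels are shorthand, and the “reasonable period of time” standard for small business providers is not modeled; the bill’s actual text would control.

    # Sketch of the PACT Act's notice-and-takedown deadlines as described above.
    # Category labels are simplified shorthand, and the "reasonable period of
    # time" standard for small business providers is not modeled.

    from datetime import datetime, timedelta

    DEADLINES = {
        "illegal_content": timedelta(hours=24),
        "illegal_activity": timedelta(hours=24),
        "potentially_policy_violating": timedelta(weeks=2),
    }

    def deadline_to_act(category: str, notice_received: datetime) -> datetime:
        """Return when a (non-small-business) platform must have acted on a notice."""
        return notice_received + DEADLINES[category]

    notice = datetime(2020, 9, 1, 9, 0)
    print(deadline_to_act("illegal_content", notice))               # 2020-09-02 09:00:00
    print(deadline_to_act("potentially_policy_violating", notice))  # 2020-09-15 09:00:00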

However, Section 230’s liability shield would be narrowed with respect to illegal content and activity. If a provider knows of the illegal content or activity but does not remove it within 24 hours, then it would lose the shield from lawsuits. So, if Facebook fails to take down a posting urging someone to assassinate the President, a federal crime, within 24 hours of being notified it was posted, it could be sued. Facebook and similar companies would not have an affirmative duty to locate and remove illegal content and activity, however, and could continue to enjoy Section 230 liability protection if either type of content is on their platforms so long as no notice is provided. And yet, Section 230 would be narrowed overall by a provision making clear that all federal criminal and civil laws and regulations are outside the liability protection; currently, this carve-out pertains only to federal criminal statutes. And state attorneys general would be able to enforce federal civil laws if the lawsuit could also be brought on the basis of a civil law in the attorney general’s state.

Interactive computer services must publish a quarterly transparency report including the total number of instances in which illegal content, illegal activity, or potentially policy-violating content was flagged and the number of times action was taken, among other data. Additionally, they would need to identify the number of times they demonetized or deprioritized content. These reports would be publicly available.

The FTC would be explicitly empowered to act under the bill. Any violations of the process by which an online platform reviews notices of potentially policy-violating content, handles appeals, and publishes transparency reports would be treated as violations of an FTC regulation defining an unfair or deceptive act or practice, allowing the agency to seek civil fines for first violations. But this authority is circumscribed by a provision barring the FTC from reviewing “any action or decision by a provider of an interactive computer service related to the application of the acceptable use policy of the provider.” This limitation would seem to allow an online platform to remove content on its own initiative for violating the platform’s policies without the FTC being able to review such decisions. This would provide ample incentive for Facebook, Twitter, Reddit, and others to police their platforms so that they could avoid FTC action. The FTC’s jurisdiction would also be widened to include non-profits, which would be subject to the same scrutiny as for-profit entities regarding how they handle removing content in response to user complaints.

The National Institute of Standards and Technology (NIST) would need to develop “a voluntary framework, with input from relevant experts, that consists of non-binding standards, guidelines, and best practices to manage risk and shared challenges related to, for the purposes of this Act, good faith moderation practices by interactive computer service providers.”

This week, Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Graham, and Blackburn introduced the latest Section 230 bill, the “Online Freedom and Viewpoint Diversity Act” (S.4534) that would essentially remove liability protection for social media platforms and others that choose to correct, label, or remove material, mainly political material. A platform’s discretion would be severely limited as to when and under what circumstances it could take down content. This bill would seem tailored to conservatives who believe Twitter, Facebook, etc. are biased against them and their viewpoints.

In May, after Twitter fact-checked two of his tweets making false claims about mail voting in California in response to the COVID-19 pandemic, President Donald Trump signed a long-rumored executive order (EO) seen by many as a means of cowing social media platforms: the “Executive Order on Preventing Online Censorship.” This EO directed federal agencies to act, and one has, asking the Federal Communications Commission (FCC) to start a rulemaking, which the FCC has since initiated. However, there is at least one lawsuit pending to enjoin action on the EO that could conceivably block implementation.

In the EO, the President claimed:

Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.  When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

Consequently, the EO directs that “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.”

With respect to specific actions, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) was directed to file a petition for rulemaking with the FCC to clarify the interplay between clauses of Section 230, notably whether the liability shield that protects companies like Twitter and Facebook for content posted on an online platform also extends to so-called “editorial decisions,” presumably actions like Twitter’s in fact-checking Trump regarding mail balloting. The NTIA was also to ask the FCC to better define when an online platform’s takedowns fall outside “good faith,” namely those that are “deceptive, pretextual, or inconsistent with a provider’s terms of service; or taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard.” The NTIA was further directed to ask the FCC to promulgate any other regulations necessary to effectuate the EO.

The FTC must consider whether online platforms are violating Section 5 of the FTC Act barring unfair or deceptive practices, which “may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.” As yet, the FTC has not done so, and in remarks before Congress, FTC Chair Joseph Simons has opined that doing so is outside the scope of the agency’s mission. Consequently, there has been talk in Washington that the Trump Administration is looking for a new FTC Chair.

Following the directive in the EO, on 27 July, the NTIA filed a petition with the FCC, asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself.

The NTIA asserted “[t]he FCC should use its authorities to clarify ambiguities in section 230 so as to make its interpretation appropriate to the current internet marketplace and provide clearer guidance to courts, platforms, and users…[and] urges the FCC to promulgate rules addressing the following points:

  1. Clarify the relationship between subsections (c)(1) and (c)(2), lest they be read and applied in a manner that renders (c)(2) superfluous as some courts appear to be doing.
  2. Specify that Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service.
  3. Provide clearer guidance to courts, platforms, and users, on what content falls within (c)(2) immunity, particularly section 230(c)(2)’s “otherwise objectionable” language and its requirement that all removals be done in “good faith.”
  4. Specify that “responsible, in whole or in part, for the creation or development of information” in the definition of “information content provider,” 47 U.S.C. § 230(f)(3), includes editorial decisions that modify or alter content, including but not limited to substantively contributing to, commenting upon, editorializing about, or presenting with a discernible viewpoint content provided by another information content provider.
  5. Mandate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers.

NTIA argued that

  • Section 230(c)(1) has a specific focus: it prohibits “treating” “interactive computer services,” i.e., internet platforms, such as Twitter or Facebook, as “publishers.” But, this provision only concerns “information” provided by third parties, i.e., “another internet content provider,” and does not cover a platform’s own content or editorial decisions.
  • Section (c)(2) also has a specific focus: it eliminates liability for interactive computer services that act in good faith “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

In early August, the FCC asked for comments on the NTIA petition, and comments were due by 2 September. Over 2500 comments have been filed, and a cursory search turned up numerous form letter comments drafted by a conservative organization that were then submitted by members and followers.

Finally, the House’s “FY 2021 Financial Services and General Government Appropriations Act” (H.R. 7668) has a provision that would bar either the FTC or FCC from taking certain actions related to Executive Order 13925, “Preventing Online Censorship.” It is very unlikely Senate Republicans, some of whom have publicly supported this Executive Order, will allow this language into the final bill funding the agencies.

There has been other executive branch action on Section 230. In mid-June, the DOJ released “a set of reform proposals to update the outdated immunity for online platforms under Section 230,” according to a department press release. While these proposals came two weeks after President Donald Trump’s “Executive Order on Preventing Online Censorship,” signed after Twitter fact-checked two of his untrue tweets (see here for more detail and analysis), the DOJ launched its review of 47 U.S.C. 230 in February 2020.

The DOJ explained “[t]he Section 230 reforms that the Department of Justice identified generally fall into four categories:

1) Incentivizing Online Platforms to Address Illicit Content. The first category of potential reforms is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation.

  1. Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230’s immunity provision—“Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material”—makes clear that Section 230 immunity is meant to incentivize and protect responsible online platforms. It therefore makes little sense to immunize from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.
  2. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.
  3. Case-Specific Carve-Outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.

2) Clarifying Federal Government Civil Enforcement Capabilities. A second category of reform would increase the ability of the government to protect citizens from illicit online conduct and activity by making clear that the immunity provided by Section 230 does not apply to civil enforcement by the federal government, which is an important complement to criminal prosecution.

3) Promoting Competition. A third reform proposal is to clarify that federal antitrust claims are not covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

4) Promoting Open Discourse and Greater Transparency. A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.

  1. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230 (c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform’s ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”
  2. Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of “good faith” should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
  3. Continue to Overrule Stratton Oakmont to Avoid the Moderator’s Dilemma. Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230 (c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.

While the DOJ did not release legislative language to effect these changes, it is possible to suss out the DOJ’s purposes in making these recommendations. The Department clearly believes that the Section 230 liability shield deprives companies like Facebook of a number of legal and financial incentives to locate, take down, or block material such as child pornography. The New York Times published articles last year (see here and here) about the shortcomings critics have found in a number of online platforms’ efforts to find and remove this material. If the companies faced civil liability for not taking down the images, the rationale seems to go, then they would devote much greater resources to doing so. Likewise, with respect to terrorist activities and cyber-bullying, the DOJ seems to think this policy change would have the same effect.

Some of the DOJ’s other recommendations seem aimed at solving an issue often alleged by Republicans and conservatives: that their speech is more heavily policed and censored than speech from others on the political spectrum. The recommendations call for removing the catch-all “otherwise objectionable” from the types of material a provider may remove or restrict in good faith and adding “unlawful” and “promotes terrorism.” The recommendations also call for a statutory definition of “good faith,” which dovetails with an action in the EO directing an Administration agency to petition the Federal Communications Commission (FCC) to conduct a rulemaking to better define this term.

Some consider the Department’s focus on Section 230 liability a proxy for its interest in having technology companies drop default end-to-end encryption and securing their assistance in accessing any communications on such platforms. If this were true, the calculation seems to be that technology companies would prefer being shielded from financial liability over ensuring users’ communications and transactions are secured via encryption.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (13 August)

Here are Further Reading, Other Developments, and Coming Events:

Coming Events

  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?.” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Other Developments

  • Senate Intelligence Committee Acting Chair Marco Rubio (R-FL) and Vice Chairman Mark Warner (D-VA) released a statement indicating the committee had voted to adopt the fifth and final volume of its investigation of the Russian Federation’s interference in the 2016 election. The committee had submitted the report to the Intelligence Community for vetting and has received it back with edits and redactions. The report could be released sometime over the next few weeks. Rubio and Warner stated “the Senate Intelligence Committee voted to adopt the classified version of the final volume of the Committee’s bipartisan Russia investigation. In the coming days, the Committee will work to incorporate any additional views, as well as work with the Intelligence Community to formalize a properly redacted, declassified, publicly releasable version of the Volume 5 report.” The Senate Intelligence Committee has released four previous volumes.
  • The National Institute of Standards and Technology (NIST) is accepting comments until 11 September on draft Special Publication 800-53B, “Control Baselines for Information Systems and Organizations,” a guidance document that will serve a key role in the United States government’s efforts to secure and protect the networks and systems it operates and those run by federal contractors. NIST explained:
    • This publication establishes security and privacy control baselines for federal information systems and organizations and provides tailoring guidance for those baselines. The use of the security control baselines is mandatory, in accordance with OMB Circular A-130 [OMB A-130] and the provisions of the Federal Information Security Modernization Act [FISMA], which requires the implementation of a set of minimum controls to protect federal information and information systems. Whereas use of the privacy control baseline is not mandated by law or [OMB A-130], SP 800-53B, along with other supporting NIST publications, is designed to help organizations identify the security and privacy controls needed to manage risk and satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974 [PRIVACT], selected OMB policies (e.g., [OMB A-130]), and designated Federal Information Processing Standards (FIPS), among others.
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released an “Election Vulnerability Reporting Guide” (a small illustrative sketch follows the list of steps below) to provide “election administrators with a step-by-step guide, list of resources, and a template for establishing a successful vulnerability disclosure program to address possible vulnerabilities in their state and local election systems…[and] [t]he six steps include:
    • Step 1: Identify Systems Where You Would Accept Security Testing, and those Off-Limits
    • Step 2: Draft an Easy-to-Read Vulnerability Disclosure Policy (See Appendix III)
    • Step 3: Establish a Way to Receive Reports/Conduct Follow-On Communication
    • Step 4: Assign Someone to Thank and Communicate with Researchers
    • Step 5: Assign Someone to Vet and Fix the Vulnerabilities
    • Step 6: Consider Sharing Information with Other Affected Parties
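By way of illustration only: Steps 2 and 3 presume a published reporting channel. One lightweight convention some organizations use to advertise such a channel is a security.txt file served from a well-known URL; whether CISA’s guide references this convention is an assumption on my part, and the guide’s own Appendix III template is the authoritative source. All values below are hypothetical:

```
# Illustrative security.txt for a county election office (hypothetical values)
# Typically served at https://elections.example.gov/.well-known/security.txt
Contact: mailto:security@elections.example.gov
Encryption: https://elections.example.gov/pgp-key.txt
Policy: https://elections.example.gov/vulnerability-disclosure-policy
Acknowledgments: https://elections.example.gov/security-thanks
Preferred-Languages: en
```

A file like this only points researchers in the right direction; the vulnerability disclosure policy it links to (Step 2) remains the governing document.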
  • The United Kingdom’s Information Commissioner’s Office (ICO) has issued “Guidance on AI and data protection” that “clarifies how you can assess the risks to rights and freedoms that AI can pose from a data protection perspective; and the appropriate measures you can implement to mitigate them.” The ICO explained “[w]hile data protection and ‘AI ethics’ overlap, this guidance does not provide generic ethical or design principles for your use of AI.” The ICO stated “[i]t corresponds to data protection principles, and is structured as follows:
    • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
    • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
    • part three addresses data minimisation and security; and
    • part four covers compliance with individual rights, including rights related to automated decision-making.
  • 20 state attorneys general wrote Facebook Chief Executive Officer Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg “to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination.” They also asked “that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.” The attorneys general said that “[b]ased on our collective experience, we believe that Facebook should take additional actions including the following steps—many of which are highlighted in Facebook’s recent Civil Rights Audit—to strengthen its commitment to civil rights and fighting disinformation and discrimination:
    • Aggressively enforce Facebook policies against hate speech and organized hate organizations: Although Facebook has developed policies against hate speech and organizations that peddle it, we remain concerned that Facebook’s policies on Dangerous Individuals and Organizations, including but not limited to its policies on white nationalist and white supremacist content, are not enforced quickly and comprehensively enough. Content that violates Facebook’s own policies too often escapes removal just because it comes as coded language, rather than specific magic words. And even where Facebook takes steps to address a particular violation, it often fails to proactively address the follow-on actions by replacement or splinter groups that quickly emerge.
    • Allow public, third-party audits of hate content and enforcement: To gauge the ongoing progress of Facebook’s enforcement efforts, independent experts should be permitted access to the data necessary to conduct regular, transparent third-party audits of hate and hate-related misinformation on the platform, including any information made available to the Global Oversight Board. As part of this effort, Facebook should capture data on the prevalence of different forms of hate content on the platform, whether or not covered by Facebook’s own community standards, thus allowing the public to determine whether enforcement of anti-hate policies differs based on the type of hate content at issue.
    • Commit to an ongoing, independent analysis of Facebook’s content population scheme and the prompt development of best practices guidance: By funneling users toward particular types of content, Facebook’s content population scheme, including its algorithms, can push users into extremist online communities that feature divisive and inflammatory messages, often directed at particular groups. Although Facebook has conducted research and considered programs to reduce this risk, there is still no mandatory guidance for coders and other teams involved in content population. Facebook should commit to an ongoing, independent analysis of its content population scheme, including its algorithms, and also continuously implement mandatory protocols as best practices are identified to curb bias and prevent recommendations of hate content and groups.
    • Expand policies limiting inflammatory advertisements that vilify minority groups: Although Facebook currently prohibits ads that claim that certain people, because of their membership in a protected group, pose a threat to the physical safety of communities or the nation, its policies still allow attacks that characterize such groups as threats to national culture or values. The current prohibition should be expanded to include such ads.
  • New Zealand’s Ministry of Statistics “launched the Algorithm Charter for Aotearoa New Zealand” that “signals that [the nation’s agencies] are committed to being consistent, transparent and accountable in their use of algorithms.”
    • The Ministry explained “[t]he Algorithm Charter is part of a wider ecosystem and works together with existing tools, networks and research, including:
      • Principles for the Safe and Effective Use of Data and Analytics (Privacy Commissioner and Government Chief Data Steward, 2018)
      • Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation and Otago University, 2019)
      • Trustworthy AI in Aotearoa – AI Principles (AI Forum New Zealand, 2020)
      • Open Government Partnership, an international agreement to increase transparency.
      • Data Protection and Use Policy (Social Wellbeing Agency, 2020)
      • Privacy, Human Rights and Ethics Framework (Ministry of Social Development).
  • The European Union (EU) imposed its first cyber sanctions under its Framework for a Joint EU Diplomatic Response to Malicious Cyber Activities (aka the cyber diplomacy toolbox) against six hackers and three entities from the Russian Federation, the People’s Republic of China (PRC), and the Democratic People’s Republic of Korea for the attempted attack against the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands, the malware attacks known as NotPetya and WannaCry, and Operation Cloud Hopper. The EU’s cyber sanctions follow sanctions the United States has placed on a number of people and entities from the same nations and also indictments the U.S. Department of Justice has announced over the years. The sanctions are part of the effort to levy costs on nations and actors that conduct cyber attacks. The EU explained:
    • The attempted cyber-attack was aimed at hacking into the Wi-Fi network of the OPCW, which, if successful, would have compromised the security of the network and the OPCW’s ongoing investigatory work. The Netherlands Defence Intelligence and Security Service (DISS) (Militaire Inlichtingen- en Veiligheidsdienst – MIVD) disrupted the attempted cyber-attack, thereby preventing serious damage to the OPCW.
    • “WannaCry” disrupted information systems around the world by targeting information systems with ransomware and blocking access to data. It affected information systems of companies in the Union, including information systems relating to services necessary for the maintenance of essential services and economic activities within Member States.
    • “NotPetya” or “EternalPetya” rendered data inaccessible in a number of companies in the Union, wider Europe and worldwide, by targeting computers with ransomware and blocking access to data, resulting amongst others in significant economic loss. The cyber-attack on a Ukrainian power grid resulted in parts of it being switched off during winter.
    • “Operation Cloud Hopper” has targeted information systems of multinational companies in six continents, including companies located in the Union, and gained unauthorised access to commercially sensitive data, resulting in significant economic loss.
  • The United States’ Federal Communications Commission (FCC) is asking for comments on the Department of Commerce’s National Telecommunications and Information Administration’s (NTIA) petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic. Comments are due by 2 September.
  • The Australian Competition & Consumer Commission (ACCC) released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks brokered by the Australian Treasury on a voluntary code broke down.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses” according to the agency’s press release. In its initial filing, the ACCC claims that Google misled and deceived the public in contravention of the Australian Consumer Law, and that Android users were harmed because those who switched off Location Services were unaware that their location information was still being collected and used by Google, as it was not readily apparent that Web & App Activity also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released “core guidance documentation for the Trusted Internet Connections (TIC) program, developed to assist agencies in protecting modern information technology architectures and services.” CISA explained “[i]n accordance with the Office of Management and Budget (OMB) Memorandum (M) 19-26: Update to the TIC Initiative, TIC 3.0 expands on the original initiative to drive security standards and leverage advances in technology to secure a wide spectrum of agency network architectures.” Specifically, CISA released three core guidance documents:
    • Program Guidebook (Volume 1) – Outlines the modernized TIC program and includes its historical context
    • Reference Architecture (Volume 2) – Defines the concepts of the program to guide and constrain the diverse implementations of the security capabilities
    • Security Capabilities Catalog (Volume 3) – Indexes security capabilities relevant to TIC
  • Senators Ron Wyden (D-OR), Bill Cassidy (R-LA) and ten other Members wrote the Federal Trade Commission (FTC) urging the agency “to investigate widespread privacy violations by companies in the advertising technology (adtech) industry that are selling private data about millions of Americans, collected without their knowledge or consent from their phones, computers, and smart TVs.” They asked the FTC “to use its authority to conduct broad industry probes under Section 6(b) of the FTC Act to determine whether adtech companies and their data broker partners have violated federal laws prohibiting unfair and deceptive business practices.” They argued “[t]he FTC should not proceed with its review of the Children’s Online Privacy Protection Act (COPPA) Rule before it has completed this investigation.”
  • “100 U.S. women lawmakers and current and former legislators from around the world,” including Speaker of the House Nancy Pelosi (D-CA), sent a letter to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg urging the company “to take decisive action to protect women from rampant and increasing online attacks on their platform that have caused many women to avoid or abandon careers in politics and public service.” They noted “[j]ust a few days ago, a manipulated and widely shared video that depicted Speaker Pelosi slurring her speech was once again circulating on major social media platforms, gaining countless views before TikTok, Twitter, and YouTube all removed the footage…[and] [t]he video remains on Facebook and is labeled ‘partly false,’ continuing to gain millions of views.” The current and former legislators “called on Facebook to enforce existing rules, including:
    • Quick removal of posts that threaten candidates with physical violence, sexual violence or death, and that glorify, incite or praise violence against women; disable the relevant accounts, and refer offenders to law enforcement.
    • Eliminate malicious hate speech targeting women, including violent, objectifying or dehumanizing speech, statements of inferiority, and derogatory sexual terms;
    • Remove accounts that repeatedly violate terms of service by threatening, harassing or doxing or that use false identities to attack women leaders and candidates; and
    • Remove manipulated images or videos misrepresenting women public figures.
  • The United States’ Departments of Commerce and Homeland Security released an update “highlighting more than 50 activities led by industry and government that demonstrate progress in the drive to counter botnet threats.” In May 2018, the agencies submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats” that identified a number of steps and prompted a follow-on report, “A Road Map Toward Resilience Against Botnets,” released in November 2018.
  • United States (U.S.) Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders released a joint statement explaining that “[t]he U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case.”
    • Maximilian Schrems filed a complaint against Facebook with Ireland’s Data Protection Commission (DPC) in 2013, alleging that the company’s transfer of his personal data violated his rights under European Union law because of the mass U.S. surveillance revealed by former National Security Agency (NSA) contractor Edward Snowden. Ultimately, this case resulted in a 2015 Court of Justice of the European Union (CJEU) ruling that invalidated the Safe Harbor agreement under which the personal data of EU residents was transferred to the US by commercial concerns. The EU and US executed a follow-on agreement, the EU-U.S. Privacy Shield, that was designed to address some of the problems the CJEU turned up, and the U.S. passed a law, the “Judicial Redress Act of 2015” (P.L. 114-126), to provide EU citizens a way to exercise their EU rights in US courts via the “Privacy Act of 1974.”
    • However, Schrems continued and soon sought to challenge the legality of the European Commission’s signing off on the Privacy Shield agreement, the adequacy decision issued in 2016, and also the use of standard contractual clauses (SCC) by companies for the transfer of personal data to the US. The CJEU struck down the adequacy decision, throwing into doubt many entities’ transfers out of the EU into the U.S. but upheld SCCs in a way that suggested EU data protection authorities (DPA) may need to review all such agreements to ensure they comply with EU law.
  • The European Commission (EC) announced an “an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The EC voiced its concern “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC detailed its “preliminary competition concerns:
    • Following its first phase investigation, the Commission has concerns about the impact of the transaction on the supply of online search and display advertising services (the sale of advertising space on, respectively, the result page of an internet search engine or other internet pages), as well as on the supply of “ad tech” services (analytics and digital tools used to facilitate the programmatic sale and purchase of digital advertising). By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to Fitbit’s.
    • The data collected via wrist-worn wearable devices appears, at this stage of the Commission’s review of the transaction, to be an important advantage in the online advertising markets. By increasing the data advantage of Google in the personalisation of the ads it serves via its search engine and displays on other internet pages, it would be more difficult for rivals to match Google’s online advertising services. Thus, the transaction would raise barriers to entry and expansion for Google’s competitors for these services, to the ultimate detriment of advertisers and publishers that would face higher prices and have less choice.
    • At this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The Commission will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed.
    • In addition, the Commission will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • In February, after the deal had been announced, the European Data Protection Board (EDPB) made clear its position that Google and Fitbit will need to scrupulously observe the General Data Protection Regulation’s privacy and data security requirements if the body is to sign off on the proposed $2.1 billion acquisition. Moreover, at that time Google had not yet formally notified European Union (EU) regulators of the proposed deal. The deal comes at a time when both EU and U.S. regulators are already investigating Google for alleged antitrust and anticompetitive practices, and the EDPB’s opinion could carry weight in this process.
  • The United States’ (U.S.) Department of Homeland Security released a Privacy Impact Assessment for the U.S. Border Patrol (USBP) Digital Forensics Programs that details how it may conduct searches of electronic devices at the U.S. border and ports of entry. DHS explained:
    • As part of USBP’s law enforcement duties, USBP may search and extract information from electronic devices, including: laptop computers; thumb drives; compact disks; digital versatile disks (DVDs); mobile phones; subscriber identity module (SIM) cards; digital cameras; vehicles; and other devices capable of storing electronic information.
    • Last year, a U.S. District Court held that U.S. Customs and Border Protection’s (CBP) and U.S. Immigration and Customs Enforcement’s (ICE) current practices for searches of smartphones and computers at the U.S. border are unconstitutional and the agencies must have reasonable suspicion before conducting such a search. However, the Court declined the plaintiffs’ request that the information taken off of their devices be expunged by the agencies. This ruling follows a Department of Homeland Security Office of the Inspector General (OIG) report that found CBP “did not always conduct searches of electronic devices at U.S. ports of entry according to its Standard Operating Procedures” and asserted that “[t]hese deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit [CBP’s] ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.”
    • In terms of the legal backdrop, the United States Supreme Court has found that searches and seizures of electronic devices at borders and airports are subject to lesser legal standards than those conducted elsewhere in the U.S. under most circumstances. Generally, the government’s interest in securing the border against the flow of contraband and people not allowed to enter allows considerable leeway relative to the warrant requirements governing many other types of searches. However, in recent years two federal appeals courts (the Fourth and Ninth Circuits) have held that searches of electronic devices require suspicion on the part of government agents, while another appeals court (the Eleventh Circuit) held differently. Consequently, there is not a uniform legal standard for these searches.
  • The Inter-American Development Bank (IDB) and the Organization of American States (OAS) released their second assessment of cybersecurity across Latin America and the Caribbean that used the Cybersecurity Capacity Maturity Model for Nations (CMM) developed at the University of Oxford’s Global Cyber Security Capacity Centre (GCSCC). The IDB and OAS explained:
    • When the first edition of the report “Cybersecurity: Are We Ready in Latin America and the Caribbean?” was released in March 2016, the IDB and the OAS aimed to provide the countries of Latin America and the Caribbean (LAC) not only with a picture of the state of cybersecurity but also guidance about the next steps that should be pursued to strengthen national cybersecurity capacities. This was the first study of its kind, presenting the state of cybersecurity with a comprehensive vision and covering all LAC countries.
    • The great challenges of cybersecurity, like those of the internet itself, are of a global nature. Therefore, it is undeniable that the countries of LAC must continue to foster greater cooperation among themselves, while involving all relevant actors, as well as establishing a mechanism for monitoring, analysis, and impact assessment related to cybersecurity both nationally and regionally. More data in relation to cybersecurity would allow for the introduction of a culture of cyberrisk management that needs to be extended both in the public and private sectors. Countries must be prepared to adapt quickly to the dynamic environment around us and make decisions based on a constantly changing threat landscape. Our member states may manage these risks by understanding the impact on and the likelihood of cyberthreats to their citizens, organizations, and national critical infrastructure. Moving to the next level of maturity will require a comprehensive and sustainable cybersecurity policy, supported by the country’s political agenda, with allocation of financial resources and qualified human capital to carry it out.
    • The COVID-19 pandemic will pass, but events that will require intensive use of digital technologies so that the world can carry on will continue happening. The challenge of protecting our digital space will, therefore, continue to grow. It is the hope of the IDB and the OAS that this edition of the report will help LAC countries to have a better understanding of their current state of cybersecurity capacity and be useful in the design of the policy initiatives that will lead them to increase their level of cyberresilience.
  • The European Data Protection Supervisor (EDPS) issued an opinion on “the European Commission’s action plan for a comprehensive Union policy on preventing money laundering and terrorism financing (C(2020)2800 final), published on 7 May 2020.” The EDPS asserted:
    • While the EDPS acknowledges the importance of the fight against money laundering and terrorism financing as an objective of general interest, we call for the legislation to strike a balance between the interference with the fundamental rights of privacy and personal data protection and the measures that are necessary to effectively achieve the general interest goals on anti-money laundering and countering the financing of terrorism (AML/CFT) (the principle of proportionality).
    • The EDPS recommends that the Commission monitors the effective implementation of the existing AML/CFT framework while ensuring that the GDPR and the data protection framework are respected and complied with. This is particularly relevant for the works on the interconnection of central bank account mechanisms and beneficial ownership registers that should be largely inspired by the principles of data minimisation, accuracy and privacy-by-design and by default.

Further Reading

  • “China already has your data. Trump’s TikTok and WeChat bans can’t stop that.” By Aynne Kokas – The Washington Post. This article persuasively makes the case that even if a ban on TikTok and WeChat were to work, and there are substantive questions as to how a ban would work given how widely the former has been downloaded, the People’s Republic of China (PRC) is almost certainly acquiring massive reams of data on Americans through a variety of apps, platforms, and games. For example, Tencent, owner of WeChat, has a 40% stake in Epic Games, maker of Fortnite, a massively popular multiplayer game (if you have never heard of it, ask one of the children in your family). Moreover, a recent change to PRC law mandates that companies operating in the PRC must share their databases for cybersecurity reviews, which may offer an avenue, apart from hacking and exfiltration aimed at United States entities, to access data. In summation, if the Trump Administration is serious about stopping the flow of data from the U.S. to the PRC, these executive orders will do very little.
  • “Big Tech Makes Inroads With the Biden Campaign” by David McCabe and Kenneth P. Vogel – The New York Times. Most likely long before former Vice President Joe Biden clinched the Democratic nomination, advisers volunteered to help plot out his policy positions, a process that intensified this year. Of course, this includes technology policy, and many of those volunteering for the campaign’s Innovation Policy Committee have worked or are working for large technology companies directly or as consultants or lobbyists. This piece details some of these people and their relationships and how the Biden campaign is managing possible conflicts of interest. Naturally, those on the left wing of the Democratic Party calling for tighter antitrust, competition, and privacy regulation are concerned that Biden might be pulled away from these positions despite his public statements arguing that the United States government needs to get tougher with some practices.
  • “A Bible Burning, a Russian News Agency and a Story Too Good to Check Out” By Matthew Rosenberg and Julian E. Barnes – The New York Times. The Russian Federation seems to be using a new tactic, with some success, for sowing discord in the United States: the information equivalent of throwing fuel onto a fire. In this case, a Russian outlet manufactured a fake story amplifying an actual event, the story went viral, and some prominent Republicans seized on it, in part because it fit their preferred world view of protestors. We will likely see more of this, and it is not confined to fake stories intended to appeal to the right; the same is happening with content meant for the left wing in the United States.
  • “Facebook cracks down on political content disguised as local news” by Sara Fischer – Axios. As part of its continuing effort to crack down on violations of its policies, Facebook will no longer allow groups with a political viewpoint to masquerade as news. The company and outside experts have identified a range of instances where groups propagating a viewpoint, as opposed to reporting, have used a Facebook exemption by pretending to be local news outlets.
  • “QAnon groups have millions of members on Facebook, documents show” By Ari Sen and Brandy Zadrozny – NBC News. It appears as if some Facebook employees are leaking the results of an internal investigation that identified more than 1 million users who are part of QAnon groups. Most likely these employees want the company to take a stronger stance on the conspiracy group QAnon, as the company has with COVID-19 lies and misinformation.
  • And, since Senator Kamala Harris (D-CA) was named former Vice President Joe Biden’s (D-DE) vice presidential pick, this article has become even more relevant than when I highlighted it in late July: “New Emails Reveal Warm Relationship Between Kamala Harris And Big Tech” – HuffPost. Obtained via a Freedom of Information request, new emails from Senator Kamala Harris’ (D-CA) tenure as her state’s attorney general suggest she was willing to overlook the role Facebook, Google, and others played and still play in one of her signature issues: revenge porn. This article makes the case that Harris came down hard on a scammer running a revenge porn site but did not press the tech giants with any vigor to take down such material from their platforms. Consequently, the case is made that her selection as former Vice President Joe Biden’s vice presidential candidate would signal a go-easy approach on large companies even though many Democrats have been calling to break up these companies and vigorously enforce antitrust laws. Harris has largely not engaged on tech issues during her tenure in the Senate. To be fair, many of these companies are headquartered in California and pump billions of dollars into the state’s economy annually, putting Harris in a tricky position politically. Of course, such pieces should be taken with a grain of salt since this one may have been suggested or planted by one of Harris’ rivals for the vice presidential nomination or someone looking to settle a score.
  • “Unwanted Truths: Inside Trump’s Battles With U.S. Intelligence Agencies” by Robert Draper – The New York Times. A deeply sourced article on the outright antipathy between President Donald Trump and Intelligence Community officials, particularly over the issue of how deeply Russia interfered in the election in 2016. A number of former officials have been fired or forced out because they refused to knuckle under to the White House’s desire to soften or massage conclusions about Russia’s past and current actions to undermine the 2020 election in order to favor Trump.
  • “Huawei says it’s running out of chips for its smartphones because of US sanctions” By Kim Lyons – The Verge and “Huawei: Smartphone chips running out under US sanctions” by Joe McDonald – The Associated Press. United States (U.S.) sanctions have started biting the Chinese technology company Huawei, which announced it will likely run out of processor chips for its smartphones. U.S. sanctions bar any company from selling high technology items like processors to Huawei, and this capability is not independently available in the People’s Republic of China (PRC) at present.
  • “Targeting WeChat, Trump Takes Aim at China’s Bridge to the World” By Paul Mozur and Raymond Zhong – The New York Times. This piece explains WeChat, the app the Trump Administration is trying to ban in the United States (U.S.) without any warning. It is like a combination of Facebook, WhatsApp, a news app, and a payment platform, and it is used by more than 1.2 billion people.
  • “This Tool Could Protect Your Photos From Facial Recognition” By Kashmir Hill – The New York Times. Researchers at the University of Chicago have found a method of subtly altering photos of people that appears to foil most facial recognition technologies. However, a number of experts interviewed said it is too late to stop companies like Clearview AI.
  • “I Tried to Live Without the Tech Giants. It Was Impossible.” By Kashmir Hill – The New York Times. This New York Times reporter tried living without the products of large technology companies, which involved some fairly obvious challenges and some that were not so obvious. Of course, it was hard for her to skip Facebook, Instagram, and the like, but cutting out Google and Amazon proved hardest and basically impossible because of Amazon’s cloud presence and Google’s web presence. The fact that some of the companies cannot be avoided if one wants to be online likely lends weight to those making the case these companies are anti-competitive.
  • “To Head Off Regulators, Google Makes Certain Words Taboo” by Adrianne Jeffries – The Markup. Apparently, in what is a standard practice at large companies, employees at Google were coached to avoid using certain terms or phrases that antitrust regulators would take notice of, such as “market,” “barriers to entry,” and “network effects.” The Markup obtained a 16 August 2019 document titled “Five Rules of Thumb For Written Communications” that starts by asserting “[w]ords matter…[e]specially in antitrust laws” and goes on to advise Google’s employees:
    • We’re out to help users, not hurt competitors.
    • Our users should always be free to switch, and we don’t lock anyone in.
    • We’ve got lots of competitors, so don’t assume we control or dominate any market.
    • Don’t try and define a market or estimate our market share.
    • Assume every document you generate, including email, will be seen by regulators.
  • “Facebook Fired An Employee Who Collected Evidence Of Right-Wing Pages Getting Preferential Treatment” By Craig Silverman and Ryan Mac – BuzzFeed News. A Facebook engineer was fired after adducing proof in an internal communications system that the social media platform is more willing to change false and negative ratings for claims made by conservative outlets and personalities than for any other viewpoint. If this is true, it would be contrary to the narrative spun by the Trump Administration and many Republicans in Congress. Moreover, Facebook’s incentives would seem to align with giving conservatives preferential treatment: many of these websites advertise on Facebook; the company probably does not want to get crosswise with the Administration; sensational posts and content drive engagement, which increases user numbers and allows for higher ad rates; and it wants to appear fair and impartial.
  • “How Pro-Trump Forces Work the Refs in Silicon Valley” By Ben Smith – The New York Times. This piece traces the nearly four-decade-old effort of Republicans to sway mainstream media and now Silicon Valley to their viewpoint.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo credit: Gerd Altmann on Pixabay

Further Reading, Other Developments, and Coming Events (30 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • On 30 July, the Senate Commerce, Science, and Transportation Committee’s Security Subcommittee will hold a hearing titled “The China Challenge: Realignment of U.S. Economic Policies to Build Resiliency and Competitiveness” with these witnesses:
    • The Honorable Nazak Nikakhtar, Assistant Secretary for Industry and Analysis, International Trade Administration, U.S. Department of Commerce
    • Dr. Rush Doshi, Director of the Chinese Strategy Initiative, The Brookings Institution
    • Mr. Michael Wessel, Commissioner, U.S. – China Economic and Security Review Commission
  • On 30 July, the House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee will hold a hearing titled “Review of the Recommendations of the Cyberspace Solarium Commission” with these witnesses:
    • Senator Angus King (I-ME), Chairman, Cyberspace Solarium Commission
    • Representative Mike Gallagher (R-WI), Chairman, Cyberspace Solarium Commission
    • The Honorable Patrick Murphy, Commissioner, Cyberspace Solarium Commission
    • Mr. Frank Cilluffo, Commissioner, Cyberspace Solarium Commission
  • On 31 July, the House Intelligence Committee will mark up its Intelligence Authorization Act.
  • On 31 July, the Select Committee on the Modernization of Congress will hold a business meeting “to consider proposed recommendations.”
  • On 3 August, the House Oversight and Reform Committee will hold a hearing on the tenth “Federal Information Technology Acquisition Reform Act” (FITARA) scorecard on federal information technology.
  • On 4 August, the Senate Armed Services Committee will hold a hearing titled “Findings and Recommendations of the Cyberspace Solarium Commission” with these witnesses:
    • Senator Angus S. King, Jr. (I-ME), Co-Chair, Cyberspace Solarium Commission
    • Representative Michael J. Gallagher (R-WI), Co-Chair, Cyberspace Solarium Commission
    • Brigadier General John C. Inglis, ANG (Ret.), Commissioner, Cyberspace Solarium Commission
  • On 6 August, the Federal Communications Commission (FCC) will hold an open meeting to likely consider the following items:
    • C-band Auction Procedures. The Commission will consider a Public Notice that would adopt procedures for the auction of new flexible-use overlay licenses in the 3.7–3.98 GHz band (Auction 107) for 5G, the Internet of Things, and other advanced wireless services. (AU Docket No. 20-25)
    • Radio Duplication Rules. The Commission will consider a Report and Order that would eliminate the radio duplication rule with regard to AM stations and retain the rule for FM stations. (MB Docket Nos. 19-310, 17-105)
    • Common Antenna Siting Rules. The Commission will consider a Report and Order that would eliminate the common antenna siting rules for FM and TV broadcaster applicants and licensees. (MB Docket Nos. 19-282, 17-105)
    • Telecommunications Relay Service. The Commission will consider a Report and Order to repeal certain TRS rules that are no longer needed in light of changes in technology and voice communications services. (CG Docket No. 03-123)
  • The National Institute of Standards and Technology (NIST) will hold the “Exploring Artificial Intelligence (AI) Trustworthiness: Workshop Series Kickoff Webinar,” “a NIST initiative involving private and public sector organizations and individuals in discussions about building blocks for trustworthy AI systems and the associated measurements, methods, standards, and tools to implement those building blocks when developing, using, and testing AI systems” on 6 August.
  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”

Other Developments

  • Senate Armed Services Committee Chair James Inhofe (R-OK) has publicly placed a hold on the re-nomination of Federal Communications Commission (FCC) member Michael O’Rielly over the agency’s April decision to permit Ligado to proceed with its plan “to deploy a low-power terrestrial nationwide network in the 1526-1536 MHz, 1627.5-1637.5 MHz, and 1646.5-1656.5 MHz bands that will primarily support Internet of Things (IoT) services.” This is the latest means Inhofe and allies on Capitol Hill and in the Trump Administration have used to press the FCC. In the recently passed “National Defense Authorization Act (NDAA) for Fiscal Year 2021” (S.4049) there is language requiring “the Secretary of Defense to enter into an agreement with the National Academies of Science, Engineering, and Medicine to conduct an independent technical review of the Order and Authorization adopted by the FCC on April 19, 2020 (FCC 20–48). The independent technical review would include a comparison of the two different approaches used for evaluation of potential harmful interference. The provision also would require the National Academies of Science, Engineering, and Medicine to submit a report on the independent technical review.” This provision may make it into the final FY 2021 NDAA, which would stop Ligado from proceeding before the conclusion of the study.
  • Senator Josh Hawley (R-MO) has released yet another bill amending 47 USC 230 (aka Section 230), the “Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act,” that would “remove Section 230 immunity from Big Tech companies that display manipulative, behavioral ads or provide data to be used for them.” Considering that targeted advertising forms a significant part of the revenue stream for such companies, this seems to be of a piece with other bills from Hawley and others intended to pressure social media platforms. Hawley noted he “has been a leading critic of Section 230’s protection of Big Tech firms and recently called for Twitter to lose immunity if it chooses to editorialize on political speech.”
  • The United States National Counterintelligence and Security Center (US NCSC) issued a statement on election security on the 100th day before the 2020 Presidential Election. US NCSC Director William Evanina described the risks facing the US heading into November but did not detail US efforts to address and counter the efforts of foreign nations to influence and disrupt Presidential and Congressional elections this fall. The US NCSC explained it is working with other federal agencies and stakeholders, however.
    • US NCSC Director William Evanina explained the purpose of the press release is to “share insights with the American public about foreign threats to our election and offer steps to citizens across the country to build resilience and help mitigate these threats…[and] to update Americans on the evolving election threat landscape, while also safeguarding our intelligence sources and methods.” Evanina noted the “Office of the Director of National Intelligence (ODNI) has been providing robust intelligence-based briefings on election security to the presidential campaigns, political committees, and Congressional audiences.” Including the assertion “[i]n leading these classified briefings, I have worked to ensure fidelity, accountability, consistency and transparency with these stakeholders and presented the most timely and accurate information we have to offer” may be Evanina’s way of pushing back on concerns that the White House has placed people loyal to the President at the top of some IC entities who may lack independence. Top Democrats were not satisfied with the statement, as their response below makes clear.
    • The US NCSC head asserted “[e]lection security remains a top priority for the Intelligence Community and we are committed in our support to the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI), given their leadership roles in this area.”
    • Evanina claimed “[a]t this time, we’re primarily concerned with China, Russia and Iran — although other nation states and non-state actors could also do harm to our electoral process….[and] [o]ur insights and judgments will evolve as the election season progresses:
      • China is expanding its influence efforts to shape the policy environment in the United States, pressure political figures it views as opposed to China’s interests, and counter criticism of China. Beijing recognizes its efforts might affect the presidential race.
      • Russia’s persistent objective is to weaken the United States and diminish our global role. Using a range of efforts, including internet trolls and other proxies, Russia continues to spread disinformation in the U.S. that is designed to undermine confidence in our democratic process and denigrate what it sees as an anti-Russia “establishment” in America.
      • Iran seeks to undermine U.S. democratic institutions and divide the country in advance of the elections. Iran’s efforts center around online influence, such as spreading disinformation on social media and recirculating anti-U.S. content.
    • Speaker of the House Nancy Pelosi (D-CA), Senate Minority Leader Chuck Schumer (D-NY), House Intelligence Committee Chair Adam Schiff (D-CA), and Senate Intelligence Committee Ranking Member Mark Warner (D-VA) released their response to the NCSC statement:
      • The statement just released by NCSC Director William Evanina does not go nearly far enough in arming the American people with the knowledge they need about how foreign powers are seeking to influence our political process. The statement gives a false sense of equivalence to the actions of foreign adversaries by listing three countries of unequal intent, motivation and capability together. The statement, moreover, fails to fully delineate the goal, nature, scope and capacity to influence our election, information the American people must have as we go into November. To say without more, for example, that Russia seeks to ‘denigrate what it sees as an anti-Russia ‘establishment’ in America’ is so generic as to be almost meaningless. The statement omits much on a subject of immense importance.
      • “In our letter two weeks ago, we called on the FBI to provide a defensive briefing to the entire Congress about specific threats related to a concerted foreign disinformation campaign, and this is more important than ever.  But a far more concrete and specific statement needs to be made to the American people, consistent with the need to protect sources and methods.  We can trust the American people with knowing what to do with the information they receive and making those decisions for themselves. But they cannot do so if they are kept in the dark about what our adversaries are doing, and how they are doing it.  When it comes to American elections, Americans must decide.”
    • Senate Majority Leader Mitch McConnell (R-KY) and Senate Intelligence Committee Chair Marco Rubio (R-FL) issued their own statement:
      • We are disappointed by the statement from Senator Schumer, Senator Warner, Speaker Pelosi, and Representative Schiff about Bill Evanina, the Director of the National Counterintelligence and Security Center. Evanina is a career law enforcement and intelligence professional with extensive experience in counterintelligence. His reputation as a straight-shooter immune from politics is well-deserved. It is for this reason that Evanina received overwhelming support from the Senate when he was confirmed to be Director of the NCSC and again when the Administration tapped him to lead the nation’s efforts to protect the 2020 elections from foreign interference.
      • We believe the statement baselessly impugns his character and politicizes intelligence matters. Their manufactured complaint undercuts Director Evanina’s nonpartisan public outreach to increase Americans’ awareness of foreign influence campaigns right at the beginning of his efforts.
      • Prior to their public statements, Director Evanina had previewed his efforts and already offered to provide another round of briefings to the Congress on the threat and steps the US government has taken over the last three and a half years to combat it. We believe the threat is real, and is more complex than many partisans may wish to admit. We welcome these briefings, and hope our colleagues will listen to the career professionals who have been given this mission.
      •  We will not discuss classified information in public, but we are confident that while the threat remains, we are far better prepared than four years ago. The intelligence community, law enforcement, election officials, and others involved in securing our elections are far better postured, and Congress dramatically better informed, than any of us were in 2016—and our Democrat colleagues know it.
  • The Australian Cyber Security Centre (ACSC) and the Digital Transformation Agency (DTA) issued “new Cloud Security Guidance co-designed with industry to support the secure adoption of cloud services across government and industry.” The agencies stated this new release “will guide organisations including government, Cloud Service Providers (CSP), and Information Security Registered Assessors Program (IRAP) assessors on how to perform a comprehensive assessment of a cloud service provider and its cloud services, so a risk-informed decision can be made about its suitability to handle an organisation’s data.” ACSC and DTA added “The Cloud Security Guidance is supported by forthcoming updates to the Australian Government Information Security Manual (ISM), the Attorney-General’s Protective Security Policy Framework (PSPF), and the DTA’s Secure Cloud Strategy.”
  • The National Institute of Standards and Technology (NIST) studied how well facial recognition technologies and services could identify people wearing masks and, to no great surprise, the results were not good with respect to accuracy. NIST stressed, in qualifying its results, that the algorithms tested were not calibrated for masks. In its Interagency Report NISTIR 8311, NIST found:
    • Algorithm accuracy with masked faces declined substantially across the board. Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3% of the time. Masked images raised even these top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time.
    • Masked images more frequently caused algorithms to be unable to process a face, technically termed “failure to enroll or template” (FTE). Face recognition algorithms typically work by measuring a face’s features — their size and distance from one another, for example — and then comparing these measurements to those from another photo. An FTE means the algorithm could not extract a face’s features well enough to make an effective comparison in the first place.
    • The more of the nose a mask covers, the lower the algorithm’s accuracy. The study explored three levels of nose coverage — low, medium and high — finding that accuracy degrades with greater nose coverage.
    • While false negatives increased, false positives remained stable or modestly declined. Errors in face recognition can take the form of either a “false negative,” where the algorithm fails to match two photos of the same person, or a “false positive,” where it incorrectly indicates a match between photos of two different people. The modest decline in false positive rates shows that occlusion with masks does not undermine this aspect of security. (A hypothetical sketch of how these two error rates are computed appears after this item.)
    • The shape and color of a mask matters. Algorithm error rates were generally lower with round masks. Black masks also degraded algorithm performance in comparison to surgical blue ones, though because of time and resource constraints the team was not able to test the effect of color completely.
    • NIST explained this report
      • is the first of a series of reports on the performance of face recognition algorithms on faces occluded by protective face masks [2] commonly worn to reduce inhalation of viruses or other contaminants. This study is being run under the Ongoing Face Recognition Vendor Test (FRVT) executed by the National Institute of Standards and Technology (NIST). This report documents accuracy of algorithms to recognize persons wearing face masks. The results in this report apply to algorithms provided to NIST before the COVID-19 pandemic, which were developed without expectation that NIST would execute them on masked face images.
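To make the report’s two error types concrete, below is a minimal, hypothetical Python sketch of how a false negative rate (what NIST calls the false non-match rate, FNMR) and a false positive rate (false match rate, FMR) can be computed from comparison scores. The function names, score distributions, threshold, and numbers are all invented for illustration; this is not NIST’s FRVT code or data.

```python
# Hypothetical illustration of the two error rates discussed in NISTIR 8311.
# Not NIST's FRVT code; scores, threshold, and distributions are invented.
import numpy as np

def false_non_match_rate(genuine_scores: np.ndarray, threshold: float) -> float:
    """Fraction of same-person comparisons scoring below the threshold
    (a "false negative": two photos of one person fail to match)."""
    return float(np.mean(genuine_scores < threshold))

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Fraction of different-person comparisons scoring at or above the
    threshold (a "false positive": two different people wrongly match)."""
    return float(np.mean(impostor_scores >= threshold))

rng = np.random.default_rng(0)

# Masking is modeled here as depressing genuine-pair similarity (occluded
# features), mirroring the report's finding that false negatives rise with
# masks while false positives stay roughly stable. Comparisons that fail
# feature extraction entirely (NIST's "failure to enroll or template") are
# not modeled in this sketch.
genuine_unmasked = rng.normal(0.85, 0.05, 10_000).clip(0.0, 1.0)
genuine_masked = rng.normal(0.70, 0.10, 10_000).clip(0.0, 1.0)
impostor = rng.normal(0.30, 0.10, 10_000).clip(0.0, 1.0)

THRESHOLD = 0.55  # operating point chosen purely for illustration

print(f"FNMR, unmasked probes: {false_non_match_rate(genuine_unmasked, THRESHOLD):.2%}")
print(f"FNMR, masked probes:   {false_non_match_rate(genuine_masked, THRESHOLD):.2%}")
print(f"FMR, impostor pairs:   {false_match_rate(impostor, THRESHOLD):.2%}")
```

At this made-up threshold, the masked distribution produces a visibly higher FNMR while the FMR, which depends only on impostor scores, is unchanged; that asymmetry is the pattern NIST observed.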
  • The United States National Science Foundation (NSF) and the Office of Science and Technology Policy (OSTP) inside the White House announced the establishment of the Quantum Leap Challenges Institutes program and “$75 million for three new institutes designed to have a tangible impact in solving” problems associated with quantum information science and engineering. NSF added “Quantum Leap Challenge Institutes also form the centerpiece of NSF’s Quantum Leap, an ongoing, agency-wide effort to enable quantum systems research and development.” NSF and OSTP named the following institutes:
    • NSF Quantum Leap Challenge Institute for Present and Future Quantum Computing. Today’s quantum computing prototypes are rudimentary, error-prone, and small-scale. This institute, led by the University of California, Berkeley, plans to learn from these to design advanced, large-scale quantum computers, develop efficient algorithms for current and future quantum computing platforms, and ultimately demonstrate that quantum computers outperform even the best conceivable classical computers.
  • The United States Department of Energy (DOE) published its “Blueprint for the Quantum Internet” “that lays out a blueprint strategy for the development of a national quantum internet, bringing the United States to the forefront of the global quantum race and ushering in a new era of communications” and held an event to roll out the new document and approach. The Blueprint is part of the Administration’s effort to implement the “National Quantum Initiative Act” (P.L. 115-368), a bill “[t]o provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” Under Secretary of Energy for Science Paul Dabbar explained in a blog post that “[t]he Blueprint lays out four priority research opportunities to make this happen:
    • Providing the foundational building blocks for Quantum Internet;
    • Integrating Quantum networking devices;
    • Creating repeating, switching, and routing technologies for Quantum entanglement;
    • Enabling error correction of Quantum networking functions.
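The third priority above concerns distributing quantum entanglement across a network. For readers unfamiliar with the term, here is a minimal, self-contained numpy sketch, invented purely for illustration and unrelated to any DOE code, that constructs a Bell pair, the basic entangled state a quantum repeater would distribute. Measuring the two qubits always yields correlated outcomes (00 or 11, each with 50% probability).

```python
import numpy as np

# Minimal Bell-pair construction (illustrative only; not DOE code).
# Start in |00>, apply a Hadamard to qubit 0, then a CNOT (control 0, target 1).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])  # |00>
state = np.kron(H, I2) @ state          # (|00> + |10>) / sqrt(2)
state = CNOT @ state                    # (|00> + |11>) / sqrt(2), a Bell pair

# Measurement probabilities over outcomes 00, 01, 10, 11:
print(dict(zip(["00", "01", "10", "11"], np.round(state**2, 3))))
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}: perfectly correlated outcomes
```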
  • The European Commission (EC) is requesting feedback until 10 September on its impact assessment for future European Union legislation on artificial intelligence (AI). The EC explained “the overall policy objective is to ensure the development and uptake of lawful and trustworthy AI across the Single Market through the creation of an ecosystem of trust.” Earlier this year, as part of its Digital Strategy, the EC released a white paper, “On Artificial Intelligence – A European approach to excellence and trust,” in which the Commission articulates its support for “a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology.” The EC stated that “[t]he purpose of this White Paper is to set out policy options on how to achieve these objectives…[but] does not address the development and use of AI for military purposes.”

Further Reading

  • “Google Takes Aim at Amazon. Again.” – The New York Times. For the fifth time in the last decade, Google will try to take on Amazon, in part because the latter’s dominance in online retailing is threatening the former’s dominance in online advertising. Google is offering a suite of inducements for retailers to use its platform, Google Shopping. One wonders whether, if Google gains traction, Amazon would point to the competition as proof to regulators that it is not engaged in anti-competitive practices.
  • “Twitter’s security woes included broad access to user accounts” – Ad Age. This piece details the years-long tension inside the social media giant between strengthening internal security and developing features to make more money. Not surprisingly, the latter consideration almost always trumped the former, a situation exacerbated by Twitter’s growing use of third-party contractors to handle back-end functions, including security. Apparently, many contractors would spy on celebrities’ accounts, sometimes using workarounds to defeat Twitter’s security. Even though this article claims it was only contractors, one wonders if some Twitter employees were doing the same. Whatever the case, Twitter’s board had been warned about weak security for years and opted against heeding this advice, a factor that likely allowed the platform to get hacked a few weeks ago. Worse still, the incentives do not seem aligned to drive better security in the future.
  • “We’re in the middle of the COVID-19 crisis. Big Tech is already preparing for the next one.” – Protocol. For people who think large technology companies have not had a prominent enough role during the current pandemic, this news will be reassuring. The Consumer Technology Association (CTA), a non-profit organized under Section 501(c)(6) of the United States’ tax laws, has launched a “Public Health Tech Initiative” “[t]o ensure an effective public sector response to future pandemics like COVID-19.” This group “will explore and create recommendations for the use of technology in dealing with and recovering from future public health emergencies.”
  • “Car Companies Want to Monitor Your Every Move With Emotion-Detecting AI” – Vice’s Motherboard. A number of companies are selling auto manufacturers on a suite of technologies, including facial analysis algorithms, that could record everything that happens in a car for financially motivated purposes such as behavioral advertising and setting insurance rates. The United States does not have any laws that directly regulate such practices whereas the European Union does, suggesting such technology would be deployed less in Europe.
  • “Russian Intelligence Agencies Push Disinformation on Pandemic” – The New York Times. United States (US) intelligence agencies declassified and shared intelligence with journalists purporting to show how Russian Federation intelligence agencies have adapted the techniques of their nonstop disinformation campaign against the US, the North Atlantic Treaty Organization, and others. As Facebook, Twitter, and others have grown adept at locating and removing content from obvious Russian outlets like RT and Sputnik, Russian agencies are employing more subtle techniques aimed at the same goal: undermining confidence in government among Americans and others.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Trump Administration Asks FCC To Act on Social Media EO

NTIA is asking the FCC to interpret Section 230 in a way that would reduce the liability protection of social media companies, with the goal of pressuring these companies to reduce moderation of conservative viewpoints.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

The Trump Administration has taken a step toward implementing its executive order (EO) to regulate social media platforms for alleged violations of freedom of speech through a clarification of 47 USC 230 (aka Section 230). At issue is the liability shield companies like Twitter, Facebook, and others enjoy in federal law against most claims arising from content posted by third parties, which the Trump Administration argues has been misconstrued, both as against Congress’ original intent and the plain language of the 1996 law. Moreover, the Trump Administration and many Republicans claim some of these companies are actively and unfairly censoring conservative viewpoints in violation of Section 230, and they imply First Amendment rights are being violated, too. Many on the left are also unhappy with how Section 230 seems to insulate large technology companies from legal responsibility to take down what they see as violent and extremist content, especially white supremacist material and untrue claims. The EO that set this proceeding into motion had been rumored for more than a year, possibly as leverage over Twitter and Facebook so they would not moderate conservative content. Lending credence to this view is the fact that the EO was hurriedly issued after Twitter fact-checked two of President Donald Trump’s untrue claims about mail voting.

Following the directive in the EO, on 27 July, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) filed a petition with the Federal Communications Commission (FCC), asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for content others post online versus the liability protection for “good faith” moderation by the platform itself.

The NTIA asserted “[t]he FCC should use its authorities to clarify ambiguities in section 230 so as to make its interpretation appropriate to the current internet marketplace and provide clearer guidance to courts, platforms, and users…[and] urges the FCC to promulgate rules addressing the following points:

  1. Clarify the relationship between subsections (c)(1) and (c)(2), lest they be read and applied in a manner that renders (c)(2) superfluous as some courts appear to be doing.
  2. Specify that Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service.
  3. Provide clearer guidance to courts, platforms, and users, on what content falls within (c)(2) immunity, particularly section 230(c)(2)’s “otherwise objectionable” language and its requirement that all removals be done in “good faith.”
  4. Specify that “responsible, in whole or in part, for the creation or development of information” in the definition of “information content provider,” 47 U.S.C. § 230(f)(3), includes editorial decisions that modify or alter content, including but not limited to substantively contributing to, commenting upon, editorializing about, or presenting with a discernible viewpoint content provided by another information content provider.
  5. Mandate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers.

NTIA argued that

  • Section 230(c)(1) has a specific focus: it prohibits “treating” “interactive computer services,” i.e., internet platforms, such as Twitter or Facebook, as “publishers.” But, this provision only concerns “information” provided by third parties, i.e., “another internet content provider,” and does not cover a platform’s own content or editorial decisions.
  • Section (c)(2) also has a specific focus: it eliminates liability for interactive computer services that act in good faith “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

The FCC has discretion over whether to accede to the NTIA’s petition and conduct this rulemaking. If the agency determines the petition justifies action, it could either start a notice-and-comment rulemaking by releasing a proposed rule for comment or merely issue a final rule. If the FCC decides the NTIA’s petition does not require agency action, it must notify the NTIA of why it is rejecting the petition.

It is possible the FCC will prove receptive to the NTIA petition and start a rulemaking, which may or may not conclude before the election or before a potential Biden Administration takes office in January. The agency will need to process and analyze the likely voluminous comments and arguments that will be submitted on the NTIA’s petition under FCC rules. It may also be the case that the agency is privately not receptive to the Trump Administration’s arguments and slow walks the process. The agency could sidestep this petition in a number of ways. First, its regulations provide that “[p]etitions which are moot, premature, repetitive, frivolous, or which plainly do not warrant consideration by the Commission may be denied or dismissed without prejudice to the petitioner.” Second, the agency may be able to argue with justification that it is working through the numerous comments and legal ramifications. Third, there is at least one lawsuit pending to enjoin action on the EO that the agency could use as justification for not acting immediately.

Executive Order 13925, “Preventing Online Censorship,” was issued in late May after Twitter fact-checked two of President Donald Trump’s Tweets containing false claims about mail voting in California during the COVID-19 pandemic. Trump signed the long-rumored EO, seen by many as a means of cowing social media platforms. Given that the First Amendment to the United States Constitution guarantees freedom of speech only against government action, it is not clear how Twitter could be considered a government actor and therefore subject to the First Amendment.

Twitter first fact-checked Trump’s tweets when he made false claims about California’s plan to mail ballots to registered voters, not, as the President claimed, to all residents of California. On 26 May, Trump tweeted across two Tweets:

There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone….. ….living in the state, no matter who they are or how they got there, will get one. That will be followed up with professionals telling all of these people, many of whom have never even thought of voting before, how, and for whom, to vote. This will be a Rigged Election. No way!

On 27 May, Twitter added “a label to two @realDonaldTrump Tweets about California’s vote-by-mail plans as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.”

The day after Twitter added this label, word began to leak from the White House that a long-rumored executive order regarding Section 230 of the Communications Decency Act was being prepared for the president’s signature. And, late in the day on 28 May, after a day of media reporting on the EO, Trump did indeed sign the “Executive Order on Preventing Online Censorship,” which asserted

Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.  When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

Consequently, the EO directs that “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.”

In addition to tasking the NTIA to file a petition with the FCC, the EO directed other agencies to act. Elsewhere, the EO provides that the head of each federal agency must review the agency’s online spending and then report to the Office of Management and Budget (OMB). The Department of Justice would then “review the viewpoint-based speech restrictions imposed by each online platform identified in the [reports submitted to OMB] and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.”

The Federal Trade Commission (FTC) must consider whether online platforms are violating Section 5 of the FTC Act barring unfair or deceptive practices, which “may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.”

Of course, the House’s FY 2021 Financial Services and General Government Appropriations Act (H.R. 7668) has a provision that would bar either the FTC or the FCC from taking certain actions related to the EO. It is very unlikely Senate Republicans, some of whom have publicly supported this Executive Order, will allow this language into the final bill funding the agencies.



Further Reading, Other Developments, and Coming Events (24 July)


Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • On 27 July, the House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee will hold its sixth hearing on “Online Platforms and Market Power” titled “Examining the Dominance of Amazon, Apple, Facebook, and Google” that will reportedly have the heads of the four companies as witnesses.
  • On 28 July, the Senate Commerce, Science, and Transportation Committee’s Communications, Technology, Innovation, and the Internet Subcommittee will hold a hearing titled “The PACT Act and Section 230: The Impact of the Law that Helped Create the Internet and an Examination of Proposed Reforms for Today’s Online World.”
  • On 28 July, the House Science, Space, and Technology Committee’s Investigations and Oversight and Research and Technology Subcommittees will hold a joint virtual hearing titled “The Role of Technology in Countering Trafficking in Persons” with these witnesses:
    • Ms. Anjana Rajan, Chief Technology Officer, Polaris
    • Mr. Matthew Daggett, Technical Staff, Humanitarian Assistance and Disaster Relief Systems Group, Lincoln Laboratory, Massachusetts Institute of Technology
    • Ms. Emily Kennedy, President and Co-Founder, Marinus Analytics
  • On 28 July, the House Homeland Security Committee’s Cybersecurity, Infrastructure Protection, & Innovation Subcommittee will hold a hearing titled “Secure, Safe, and Auditable: Protecting the Integrity of the 2020 Elections” with these witnesses:
    • Mr. David Levine, Elections Integrity Fellow, Alliance for Securing Democracy, German Marshall Fund of the United States
    • Ms. Sylvia Albert, Director of Voting and Elections, Common Cause
    • Ms. Amber McReynolds, Chief Executive Officer, National Vote at Home Institute
    • Mr. John Gilligan, President and Chief Executive Officer, Center for Internet Security, Inc.
  • On 30 July, the House Oversight and Reform Committee will hold a hearing on the tenth “Federal Information Technology Acquisition Reform Act” (FITARA) scorecard on federal information technology.
  • On 30 July, the Senate Commerce, Science, and Transportation Committee’s Security Subcommittee will hold a hearing titled “The China Challenge: Realignment of U.S. Economic Policies to Build Resiliency and Competitiveness” with these witnesses:
    • The Honorable Nazak Nikakhtar, Assistant Secretary for Industry and Analysis, International Trade Administration, U.S. Department of Commerce
    • Dr. Rush Doshi, Director of the Chinese Strategy Initiative, The Brookings Institution
    • Mr. Michael Wessel, Commissioner, U.S. – China Economic and Security Review Commission
  • On 4 August, the Senate Armed Services Committee will hold a hearing titled “Findings and Recommendations of the Cyberspace Solarium Commission” with these witnesses:
    • Senator Angus S. King, Jr. (I-ME), Co-Chair, Cyberspace Solarium Commission
    • Representative Michael J. Gallagher (R-WI), Co-Chair, Cyberspace Solarium Commission
    • Brigadier General John C. Inglis, ANG (Ret.), Commissioner, Cyberspace Solarium Commission
  • On 6 August, the Federal Communications Commission (FCC) will hold an open meeting to likely consider the following items:
    • C-band Auction Procedures. The Commission will consider a Public Notice that would adopt procedures for the auction of new flexible-use overlay licenses in the 3.7–3.98 GHz band (Auction 107) for 5G, the Internet of Things, and other advanced wireless services. (AU Docket No. 20-25)
    • Radio Duplication Rules. The Commission will consider a Report and Order that would eliminate the radio duplication rule with regard to AM stations and retain the rule for FM stations. (MB Docket Nos. 19-310, 17-105)
    • Common Antenna Siting Rules. The Commission will consider a Report and Order that would eliminate the common antenna siting rules for FM and TV broadcaster applicants and licensees. (MB Docket Nos. 19-282, 17-105)
    • Telecommunications Relay Service. The Commission will consider a Report and Order to repeal certain TRS rules that are no longer needed in light of changes in technology and voice communications services. (CG Docket No. 03-123)

Other Developments

  • Slack filed an antitrust complaint with the European Commission (EC) against Microsoft alleging that the latter’s tying Microsoft Teams to Microsoft Office is a move designed to push the former out of the market. A Slack vice president said in a statement “Slack threatens Microsoft’s hold on business email, the cornerstone of Office, which means Slack threatens Microsoft’s lock on enterprise software.” While the filing of a complaint does not mean the EC will necessarily investigate, under its new leadership the EC has signaled in a number of ways its intent to address the size of some technology companies and the effect on competition.
  • The National Institute of Standards and Technology (NIST) has issued for comment the second draft of NISTIR 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM). NIST claimed this guidance document “promotes greater understanding of the relationship between cybersecurity risk management and ERM, and the benefits of integrating those approaches…[and] contains the same main concepts as the initial public draft, but their presentation has been revised to clarify the concepts and address other comments from the public.” Comments are due by 21 August 2020.
  • The United States National Security Commission on Artificial Intelligence (NSCAI) published its Second Quarter Recommendations, a compilation of policy proposals made this quarter. NSCAI said it is still on track to release its final recommendations in March 2021. The NSCAI asserted
    • The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.
    • The NSCAI stated it “has focused its analysis and recommendations on six areas:
    • Advancing the Department of Defense’s internal AI research and development capabilities. The Department of Defense (DOD) must make reforms to the management of its research and development (R&D) ecosystem to enable the speed and agility needed to harness the potential of AI and other emerging technologies. To equip the R&D enterprise, the NSCAI recommends creating an AI software repository; improving agency-wide authorized use and sharing of software, components, and infrastructure; creating an AI data catalog; and expanding funding authorities to support DOD laboratories. DOD must also strengthen AI Test and Evaluation, Verification and Validation capabilities by developing an AI testing framework, creating tools to stand up new AI testbeds, and using partnered laboratories to test market and market-ready AI solutions. To optimize the transition from technological breakthroughs to application in the field, Congress and DOD need to reimagine how science and technology programs are budgeted to allow for agile development, and adopt the model of multi-stakeholder and multi-disciplinary development teams. Furthermore, DOD should encourage labs to collaborate by building open innovation models and an R&D database.
    • Accelerating AI applications for national security and defense. DOD must have enduring means to identify, prioritize, and resource the AI-enabled applications necessary to fight and win. To meet this challenge, the NSCAI recommends that DOD produce a classified Technology Annex to the National Defense Strategy that outlines a clear plan for pursuing disruptive technologies that address specific operational challenges. We also recommend establishing mechanisms for tactical experimentation, including by integrating AI-enabled technologies into exercises and wargames, to ensure technical capabilities meet mission and operator needs. On the business side, DOD should develop a list of core administrative functions most amenable to AI solutions and incentivize the adoption of commercially available AI tools.
    • Bridging the technology talent gap in government. The United States government must fundamentally re-imagine the way it recruits and builds a digital workforce. The Commission envisions a government-wide effort to build its digital talent base through a multi-prong approach, including: 1) the establishment of a National Reserve Digital Corps that will bring private sector talent into public service part-time; 2) the expansion of technology scholarship for service programs; and, 3) the creation of a national digital service academy for growing federal technology talent from the ground up.
    • Protecting AI advantages for national security through the discriminate use of export controls and investment screening. The United States must protect the national security sensitive elements of AI and other critical emerging technologies from foreign competitors, while ensuring that such efforts do not undercut U.S. investment and innovation. The Commission proposes that the President issue an Executive Order that outlines four principles to inform U.S. technology protection policies for export controls and investment screening, enhance the capacity of U.S. regulatory agencies in analyzing emerging technologies, and expedite the implementation of recent export control and investment screening reform legislation. Additionally, the Commission recommends prioritizing the application of export controls to hardware over other areas of AI-related technology. In practice, this requires working with key allies to control the supply of specific semiconductor manufacturing equipment critical to AI while simultaneously revitalizing the U.S. semiconductor industry and building the technology protection regulatory capacity of like-minded partners. Finally, the Commission recommends focusing the Committee on Foreign Investment in the United States (CFIUS) on preventing the transfer of technologies that create national security risks. This includes a legislative proposal granting the Department of the Treasury the authority to propose regulations for notice and public comment to mandate CFIUS filings for investments into AI and other sensitive technologies from China, Russia and other countries of special concern. The Commission’s recommendations would also exempt trusted allies and create fast tracks for vetted investors.
    • Reorienting the Department of State for great power competition in the digital age. Competitive diplomacy in AI and emerging technology arenas is a strategic imperative in an era of great power competition. Department of State personnel must have the organization, knowledge, and resources to advocate for American interests at the intersection of technology, security, economic interests, and democratic values. To strengthen the link between great power competition strategy, organization, foreign policy planning, and AI, the Department of State should create a Strategic Innovation and Technology Council as a dedicated forum for senior leaders to coordinate strategy and a Bureau of Cyberspace Security and Emerging Technology, which the Department has already proposed, to serve as a focal point and champion for security challenges associated with emerging technologies. To strengthen the integration of emerging technology and diplomacy, the Department of State should also enhance its presence and expertise in major tech hubs and expand training on AI and emerging technology for personnel at all levels across professional areas. Congress should conduct hearings to assess the Department’s posture and progress in reorienting to address emerging technology competition.
    • Creating a framework for the ethical and responsible development and fielding of AI. Agencies need practical guidance for implementing commonly agreed upon AI principles, and a more comprehensive strategy to develop and field AI ethically and responsibly. The NSCAI proposes a “Key Considerations” paradigm for agencies to implement that will help translate broad principles into concrete actions.
  • The Danish Defence Intelligence Service’s Centre for Cyber Security (CFCS) released its fifth annual assessment of the cyber threat against Denmark and concluded:
    • Cyber threats pose a serious risk to Denmark, and cyber attacks mainly carry economic and political consequences.
    • Hackers have tried to take advantage of the COVID-19 pandemic, which constitutes a new element in the general threat landscape.
    • The threat from cyber crime is VERY HIGH. No one is exempt from the threat, and there is a growing threat from targeted ransomware attacks against Danish public authorities and private companies.
    • The threat from cyber espionage is VERY HIGH. The threat is especially directed against public authorities dealing with foreign and security policy issues as well as private companies whose knowledge is of interest to foreign states.
    • The threat from destructive cyber attacks is LOW. It is less likely that foreign states will launch destructive cyber attacks against Denmark, though private companies and public authorities operating in conflict-ridden regions are at a greater risk from this threat.
    • The threat from cyber activism is LOW. Globally, the number of cyber activism attacks has dropped in recent years, and cyber activists rarely focus on Danish public authorities and private companies.
    • The threat from cyber terrorism is NONE. Serious cyber attacks aimed at creating effects similar to those of conventional terrorism presuppose a level of technical expertise and organizational resources that militant extremists, at present, do not possess. Also, their intention remains limited.
    • Technological developments, including artificial intelligence and quantum computing, create new cyber security possibilities and challenges.

Further Reading

  • “Accuse, Evict, Repeat: Why Punishing China and Russia for Cyberattacks Fails” – The New York Times. This piece points out that the United States (US) government is largely using 19th Century responses to address 21st Century conduct by expelling diplomats, imposing sanctions, and indicting hackers. Even a greater use of offensive cyber operations does not seem to be deterring the US’s adversaries. It may turn out that the US and other nations will need to focus more on defensive measures and on securing their valuable data and information.
  • “New police powers to be broad enough to target Facebook” – Sydney Morning Herald. On the heels of a 2018 law that some argue will allow the government in Canberra to order companies to decrypt users’ communications, Australia is considering new legislation because of concern among the nation’s security services about end-to-end encryption and dark browsing. In particular, Facebook’s proposed changes to secure its networks are seen as fertile ground for criminals, especially those seeking to prey on children sexually.
  • “The U.S. has a stronger hand in its tech battle with China than many suspect” – The Washington Post. A national security writer makes the case that the cries that the Chinese are coming may prove as overblown as similar claims made about the Japanese during the 1980s and the Russians during the Cold War. The Trump Administration has used some levers that may impede the People’s Republic of China’s (PRC) attempt to displace the United States. In all, this writer is calling for more balance in viewing the PRC and some of the challenges it poses.
  • “Facebook is taking a hard look at racial bias in its algorithms” – Recode. After a civil rights audit that was critical of Facebook, the company is assembling and deploying teams to try to address the biases in its algorithms on Facebook and Instagram. Critics doubt the efforts will turn out well because economic incentives are aligned against rooting out such biases, as is the lack of diversity at the company.
  • “Does TikTok Really Pose a Risk to US National Security?” – WIRED. This article asserts TikTok is probably no riskier than other social media apps even with the possibility that the People’s Republic of China (PRC) may have access to user data.
  • “France won’t ban Huawei, but encouraging 5G telcos to avoid it: report” – Reuters. Unlike the United States, the United Kingdom, and others, France will not outright ban Huawei from its 5G networks but will instead encourage its telecommunications companies to use European manufacturers. Some companies already have Huawei equipment on their networks and may receive authorization to use the company’s equipment for up to five more years. However, France is not planning on extending authorizations past that deadline, which will function as a de facto sunset. In contrast, authorizations for Ericsson or Nokia equipment were provided for eight years. The head of France’s cybersecurity agency stressed that France was not seeking to move against the People’s Republic of China (PRC) but is responding to security concerns.


Further Reading, Other Developments, and Coming Events (23 July)


Here are Further Reading, Other Developments, and Coming Events.

Other Developments

  • New Zealand’s Privacy Commissioner has begun the process of implementing the new Privacy Act 2020 and has started asking for input on the codes of practice that will effectuate the rewrite of the nation’s privacy laws. The Commissioner laid out the following schedule:
    • Telecommunications Information Privacy Code and Civil Defence National Emergencies (Information Sharing) Code
      • Open: 29 July 2020 / Close: 26 August 2020
    • The Commissioner noted “[t]he new Privacy Act 2020 is set to come into force on 1 December…[and] makes several key reforms to New Zealand’s privacy law, including amendments to the information privacy principles.” The Commissioner added “[a]s a result, the six codes of practice made under the Privacy Act 1993 require replacement.”
  • Australia’s 2020 Cyber Security Strategy Industry Advisory Panel issued its report and recommendations “to provide strategic advice to support the development of Australia’s 2020 Cyber Security Strategy.” The body was convened by the Minister for Home Affairs. The panel’s “recommendations are structured around a framework of five key pillars:
    • Deterrence: The Government should establish clear consequences for those targeting businesses and Australians. A key priority is increasing transparency on Government investigative activity, more frequent attribution and consequences applied where appropriate, and strengthening the Australian Cyber Security Centre’s (ACSC’s) ability to disrupt cyber criminals by targeting the proceeds of cybercrime.
    • Prevention: Prevention is vital and should include initiatives to help businesses and Australians remain safer online. Industry should increase its cyber security capabilities and be increasingly responsible for ensuring their digital products and services are cyber safe and secure, protecting their customers from foreseeable cyber security harm. While Australians have access to trusted goods and services, they also need to be supported with advice on how to practice safe behaviours at home and work. A clear definition is required for what constitutes critical infrastructure and systems of national significance across the public and private sectors. This should be developed with consistent, principles-based regulatory requirements to implement reasonable protection against cyber threats for both the public and private sectors.
    • Detection: There is clear need for the development of a mechanism between industry and Government for real-time sharing of threat information, beginning with critical infrastructure operators. The Government should also empower industry to automatically detect and block a greater proportion of known cyber security threats in real-time including initiatives such as ‘cleaner pipes’.
    • Resilience: We know malicious cyber activity is hitting Australians hard. The tactics and techniques used by malicious cyber actors are evolving so quickly that individuals, businesses and critical infrastructure operators in Australia are not fully able to protect themselves and their assets against every cyber security threat. As a result, it is recommended that the Government should strengthen the incident response and victim support options already in place. This should include conducting cyber security exercises in partnership with the private sector. Speed is key when it comes to recovering from cyber incidents, it is therefore proposed that critical infrastructure operators should collaborate more closely to increase preparedness for major cyber incidents.
    • Investment: The Joint Cyber Security Centre (JCSC) program is a highly valuable asset and, as a key delivery mechanism for the initiatives under the 2020 Cyber Security Strategy, should be strengthened. This should include increased resources and the establishment of a national board in partnership with industry, states and territories with an integrated governance structure underpinned by a charter outlining scope and deliverables.
  • Six of the world’s data protection authorities issued an open letter to the teleconferencing companies “to set out our concerns, and to clarify our expectations and the steps you should be taking as Video Teleconferencing (VTC) companies to mitigate the identified risks and ultimately ensure that our citizens’ personal information is safeguarded in line with public expectations and protected from any harm.” The DPAs stated that “[t]he principles in this open letter set out some of the key areas to focus on to ensure that your VTC offering is not only compliant with data protection and privacy law around the world, but also helps build the trust and confidence of your userbase.” They added that “[w]e welcome responses to this open letter from VTC companies, by 30 September 2020, to demonstrate how they are taking these principles into account in the design and delivery of their services. Responses will be shared amongst the joint signatories to this letter.” The letter was drafted and signed by:
    • The Privacy Commissioner of Canada
    • The United Kingdom Information Commissioner’s Office
    • The Office of the Australian Information Commissioner
    • The Gibraltar Regulatory Authority
    • The Office of the Privacy Commissioner for Personal Data, Hong Kong, China
    • The Federal Data Protection and Information Commissioner of Switzerland
  • The United States Office of the Comptroller of the Currency (OCC) “is reviewing its regulations on bank digital activities to ensure that its regulations continue to evolve with developments in the industry” and released an “advance notice of proposed rulemaking (ANPR) [that] solicits public input as part of this review” by 8 August 2020. The OCC explained:
    • Over the past two decades, technological advances have transformed the financial industry, including the channels through which products and services are delivered and the nature of the products and services themselves. Fewer than fifteen years ago, smart phones with slide-out keyboards and limited touchscreen capability were newsworthy.[1] Today, 49 percent of Americans bank on their phones,[2] and 85 percent of American millennials use mobile banking.[3]
    • The first person-to-person (P2P) platform for money transfer services was established in 1998.[4] Today, there are countless P2P payment options, and many Americans regularly use P2P to transfer funds.[5] In 2003, Congress authorized digital copies of checks to be made and electronically processed.[6] Today, remote deposit capture is the norm for many consumers.[7] The first cryptocurrency was created in 2009; there are now over 1,000 rival cryptocurrencies,[8] and approximately eight percent of Americans own cryptocurrency.[9] Today, artificial intelligence (AI) and machine learning, biometrics, cloud computing, big data and data analytics, and distributed ledger and blockchain technology are used commonly or are emerging in the banking sector. Even the language used to describe these innovations is evolving, with the term “digital” now commonly used to encompass electronic, mobile, and other online activities.
    • These technological developments have led to a wide range of new banking products and services delivered through innovative and more efficient channels in response to evolving customer preferences. Back-office banking operations have experienced significant changes as well. AI and machine learning play an increasing role, for example, in fraud identification, transaction monitoring, and loan underwriting and monitoring. And technology is fueling advances in payments. In addition, technological innovations are helping banks comply with the complex regulatory framework and enhance cybersecurity to more effectively protect bank and customer data and privacy. More and more banks, of all sizes and types, are entering into relationships with technology companies that enable banks and the technology companies to establish new delivery channels and business practices and develop new products to meet the needs of consumers, businesses, and communities. These relationships facilitate banks’ ability to reach new customers, better serve existing customers, and take advantage of cost efficiencies, which help them to remain competitive in a changing industry.
    • Along with the opportunities presented by these technological changes, there are new challenges and risks. Banks should adjust their business models and practices to a new financial marketplace and changing customer demands. Banks are in an environment where they compete with non-bank entities that offer products and services that historically have only been offered by banks, while ensuring that their activities are consistent with the authority provided by a banking charter and safe and sound banking practices. Banks also must comply with applicable laws and regulations, including those focused on consumer protection and Bank Secrecy Act/anti-money laundering (BSA/AML) compliance. And, importantly, advanced persistent threats require banks to pay constant and close attention to increasing cybersecurity risks.
    • Notwithstanding these challenges, the Federal banking system is well acquainted with and well positioned for change, which has been a hallmark of this system since its inception. The OCC’s support of responsible innovation throughout its history has helped facilitate the successful evolution of the industry. The OCC has long understood that the banking business is not frozen in time and agrees with the statement made over forty years ago by the U.S. Court of Appeals for the Ninth Circuit: “the powers of national banks must be construed so as to permit the use of new ways of conducting the very old business of banking.” [10] Accordingly, the OCC has sought to regulate banking in ways that allow for the responsible creation or adoption of technological advances and to establish a regulatory and supervisory framework that allows banking to evolve, while ensuring that safety and soundness and the fair treatment of customers is preserved.
  • A trio of House of Representatives Members has introduced “legislation to put American consumers in the driver’s seat by giving them clearer knowledge about the technology they are purchasing.” The “Informing Consumers about Smart Devices Act” (H.R.7583) was drafted and released by Representatives John Curtis (R-UT), Seth Moulton (D-MA), and Gus Bilirakis (R-FL); according to their press release:
    • The legislation is in response to reports about household devices listening to individuals’ conversations without their knowledge. While some manufacturers have taken steps to more clearly label their products with listening devices, this legislation would make this information more obvious to consumers without overly burdensome requirements on producers of these devices. 
    • Specifically, the bill requires the Federal Trade Commission (FTC) to work alongside industry leaders to establish guidelines for properly disclosing the potential for their products to contain audio or visual recording capabilities. To ensure this does not become an overly burdensome labeling requirement, the legislation provides manufacturers the option of requesting customized guidance from the FTC that fits within their existing marketing or branding practices in addition to permitting these disclosures pre or post-sale of their products.
  • House Oversight and Reform Committee Ranking Member James Comer (R-KY) sent Twitter CEO Jack Dorsey a letter regarding last week’s hack, asking for answers to his questions about the security practices of the platform. Government Operations Subcommittee Ranking Member Jody Hice (R-GA) and 18 other Republicans also wrote Dorsey demanding an explanation of “Twitter’s intent and use of tools labeled ‘SEARCH BLACKLIST’ and ‘TRENDS BLACKLIST’ shown in the leaked screenshots.”
  • The United States Court of Appeals for the District of Columbia Circuit has ruled against United States Agency for Global Media (USAGM) head Michael Pack and enjoined his efforts to fire the board of the Open Technology Fund (OTF). The court stated “it appears likely that the district court correctly concluded that 22 U.S.C. § 6209(d) does not grant the Chief Executive Officer of the United States Agency for Global Media, Michael Pack, with the authority to remove and replace members of OTF’s board.” Four removed members of the OTF Board had filed suit against Pack. Yesterday, District of Columbia Attorney General Karl Racine (D) filed suit against USAGM, arguing that Pack violated District of Columbia law by dissolving the OTF Board and creating a new one.
  • Three advocacy organizations have lodged their opposition to the “California Privacy Rights Act” (aka Proposition 24) that will be on the ballot this fall in California. The American Civil Liberties Union, the California Alliance for Retired Americans, and Color of Change are speaking out against the bill because “it stacks the deck in favor of big tech corporations and reduces your privacy rights.” Industry groups have also started advertising and advocating against the statute that would rewrite the “California Consumer Privacy Act” (CCPA) (AB 375).

Further Reading

  • “Facebook adds info label to Trump post about elections” – The Hill. Facebook has followed Twitter in appending information to posts of President Donald Trump that implicitly rebut his false claims about fraud and mail-in voting. Interestingly, it also appended information to posts of former Vice President Joe Biden that merely asked people to vote Trump out in November. If Facebook continues this policy, it is likely to stoke the ire of Republicans, many of whom claim that the platform and others are biased against conservative voices and viewpoints.
  • “Ajit Pai urges states to cap prison phone rates after he helped kill FCC caps” – Ars Technica. The chair of the Federal Communications Commission (FCC) is imploring states to regulate the egregious rates charged to the incarcerated for prison phone calls. The rub here is that Pai fought against Obama-era FCC efforts to regulate these practices, claiming the agency lacked the jurisdiction to police intrastate calls. Pai pulled the plug on the agency’s efforts to fight for these powers in court when he became chair.
  • “Twitter bans 7,000 QAnon accounts, limits 150,000 others as part of broad crackdown” – NBC News. Today, Twitter announced it was suspending thousands of accounts of conspiracy theorists who believe a great number of untrue things, namely that a “deep state” in the United States is working to thwart the presidency of Donald Trump. Twitter announced in a tweet: “[w]e will permanently suspend accounts Tweeting about these topics that we know are engaged in violations of our multi-account policy, coordinating abuse around individual victims, or are attempting to evade a previous suspension — something we’ve seen more of in recent weeks.” This practice, alternately called brigading or swarming, has been employed against a number of celebrities who are alleged to be engaging in pedophilia. The group, QAnon, has even been quoted or supported by Members of the Republican Party, some of whom may see Twitter’s actions as ideological.
  • “Russia and China’s vaccine hacks don’t violate rules of road for cyberspace, experts say” – The Washington Post. Contrary to the claims of the British, Canadian, and American governments, attempts by other nations to hack into COVID-19 research do not run counter to the cyber norms these and other nations have been pushing as the rules of the road. The experts interviewed for the article are far more concerned about the long-term effects of President Donald Trump allowing the Central Intelligence Agency to launch cyber attacks when and how it wishes.


Modified EARN IT Act Marked Up; Before Markup, Graham, Cotton, and Blackburn Introduce Encryption Bill

The Senate Judiciary Committee unanimously reports out a revised bill to remove online child sexual material from Section 230 protection. The bill no longer allows companies to use a safe harbor based on adopting best practices for finding and removing this material. However, before the hearing, the chair of the committee introduced a bill requiring technology companies to decrypt or assist in decrypting data subject to a court order accompanying a search warrant.


The Senate Judiciary Committee met, amended, and reported out the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398), a bill that would change 47 USC 230 (aka Section 230) by narrowing the liability shield and potentially making online platforms liable to criminal and civil actions for having child sexual materials on their platforms. The bill as introduced in March was changed significantly this week when a manager’s amendment was released and then further changed at the markup. The Committee reported out the bill unanimously, sending it to the full Senate.

Last week, in advance of the first hearing to mark up the EARN IT Act of 2020, key Republican stakeholders released a bill that would require device manufacturers, app developers, and online platforms to decrypt data if a federal court issues a warrant based on probable cause. Critics of the EARN IT Act of 2020 claimed the bill would force big technology companies to choose between weakening encryption and losing their liability protection under Section 230. They likely see this most recent bill as another shot across the bow of technology companies, many of which continue to support and use end-to-end encryption even though the United States government and close allies are pressuring them on the issue. However, unlike the EARN IT Act of 2020, this latest bill does not have any Democratic cosponsors.

Senate Judiciary Committee Chair Lindsey Graham (R-SC) and Senators Tom Cotton (R-AR) and Marsha Blackburn (R-TN) introduced the “Lawful Access to Encrypted Data Act” (S.4051), which would require manufacturers of devices such as smartphones, app makers, and online platforms to decrypt a user’s data if a federal court issues a warrant to search a device, app, or operating system.

The assistance covered entities must provide includes:

  • isolating the information authorized to be searched;
  • decrypting or decoding information on the electronic device or remotely stored electronic information that is authorized to be searched, or otherwise providing such information in an intelligible format, unless the independent actions of an unaffiliated entity make it technically impossible to do so; and
  • providing technical support as necessary to ensure effective execution of the warrant for the electronic devices particularly described by the warrant.

The Department of Justice (DOJ) would be able to issue “assistance capability directives” requiring the recipient to prepare or maintain the ability to aid a law enforcement agency that has obtained a warrant and needs technical assistance to access data. Recipients of such orders can file a petition in federal court in Washington, DC to modify or set aside the order on only three grounds: it is illegal, it does not meet the requirements of the new federal regulatory structure, or “it is technically impossible for the person to make any change to the way the hardware, software, or other property of the person behaves in order to comply with the directive.” If a court rules against the recipient of such an order, it must comply, and if any recipient of such an order does not comply, a court may find it in contempt, allowing for a range of punishments until the contempt is cured. The bill also amends the “Foreign Intelligence Surveillance Act” (FISA) to require the same decryption and assistance in FISA activities, which mostly involve surveillance of people outside the United States.

The bill would apply to device manufacturers that sell more than 1 million devices and to platforms and apps with more than 1 million users, which would clearly sweep in companies like Apple, Facebook, and Google.

The bill also tasks the DOJ with conducting a prize competition “to incentivize and encourage research and innovation into solutions providing law enforcement access to encrypted data pursuant to legal process.”

According to Graham, Cotton, and Blackburn’s press release, the “[h]ighlights” of the “Lawful Access to Encrypted Data Act” are:

  • Enables law enforcement to obtain lawful access to encrypted data.
    • Once a warrant is obtained, the bill would require device manufacturers and service providers to assist law enforcement with accessing encrypted data if assistance would aid in the execution of the warrant.
    • In addition, it allows the Attorney General to issue directives to service providers and device manufacturers to report on their ability to comply with court orders, including timelines for implementation.
      • The Attorney General is prohibited from issuing a directive with specific technical steps for implementing the required capabilities.
      • Anyone issued a directive may appeal in federal court to change or set aside the directive.
      • The Government would be responsible for compensating the recipient of a directive for reasonable costs incurred in complying with the directive.
  • Incentivizes technical innovation.
    • Directs the Attorney General to create a prize competition to award participants who create a lawful access solution in an encrypted environment, while maximizing privacy and security.
  • Promotes technical and lawful access training and provides real-time assistance.
    • Funds a grant program within the Justice Department’s National Domestic Communications Assistance Center (NDCAC) to increase digital evidence training for law enforcement and creates a call center for advice and assistance during investigations.

The EARN IT Act of 2020 was introduced in March by Graham, Senate Judiciary Committee Ranking Member Dianne Feinstein (D-CA), and Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO). If enacted, the EARN IT Act would be the second piece of legislation in the last two years to change Section 230 of the Communications Decency Act, following enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164).

In advance of this week’s markup, Graham and Blumenthal released a manager’s amendment to the EARN IT Act. The bill would still establish a National Commission on Online Child Sexual Exploitation Prevention (Commission) to design and recommend voluntary “best practices” applicable to technology companies such as Google, Facebook, and many others to address “the online sexual exploitation of children.” However, the language creating a safe harbor for technology companies that adopt these best practices in exchange for continued liability protection has been stricken.

Moreover, instead of creating a process under which the DOJ, Department of Homeland Security (DHS), and the Federal Trade Commission (FTC) would accept or reject these standards, as in the original bill, the DOJ would merely have to publish them in the Federal Register. Likewise, the language establishing a fast-track process for Congress to codify these best practices has been stricken, as have the provisions requiring certain technology companies to certify compliance with the best practices.

The revised bill also lacks the safe harbor against lawsuits based on having “child sexual abuse material” on a platform for those following the Commission’s best practices. Instead, the manager’s amendment strikes liability protection under 47 USC 230 for these materials except where a platform is acting as a Good Samaritan in removing them. Consequently, should a Facebook or a Google fail to find and take down these materials expeditiously, it would face civil and criminal liability under federal and state law.

However, the Committee adopted an amendment offered by Senator Patrick Leahy (D-VT) that would change 47 USC 230 by making clear that the use of end-to-end encryption does not make providers liable under child sexual exploitation and abuse laws; a simplified sketch of why such a design leaves a provider unable to decrypt appears after the list below. Specifically, no liability would attach because the provider

  • utilizes full end-to-end encrypted messaging services, device encryption, or other encryption services;
  • does not possess the information necessary to decrypt a communication; or
  • fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.
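To make these conditions concrete, here is a minimal sketch, assuming the third-party Python cryptography package, of how a full end-to-end encrypted messaging design works. It illustrates the general technique only, not anything specified in the amendment or used by any particular provider: because keys are generated and held only at the endpoints, the provider in the middle never possesses “the information necessary to decrypt a communication.”

```python
# Hypothetical sketch: why an end-to-end encrypted service cannot decrypt.
# Uses the third-party `cryptography` package (pip install cryptography).
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def session_key(my_private: X25519PrivateKey, peer_public) -> bytes:
    """Derive a shared 256-bit key via X25519 Diffie-Hellman plus HKDF."""
    shared_secret = my_private.exchange(peer_public)
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"e2e-demo"
    ).derive(shared_secret)


# Each endpoint generates its own key pair; only the public halves are shared.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Alice encrypts for Bob. The provider relays only the nonce and ciphertext;
# holding neither private key, it has nothing that can decrypt the message.
key = session_key(alice_private, bob_private.public_key())
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"hello, Bob", None)

# Bob independently derives the same key from his private half and decrypts.
bob_key = session_key(bob_private, alice_private.public_key())
assert ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None) == b"hello, Bob"
```

Under the Leahy amendment, offering such a design, not holding the keys, or declining to weaken the scheme would not, by itself, expose a provider to liability.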



Further Reading and Other Developments (6 June)

Other Developments

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

  • A number of tech trade groups are asking the House Appropriations Committee’s Commerce-Justice-Science Subcommittee “to direct the National Institute of Standards and Technology (NIST) to create guidelines that help companies navigate the technical and ethical hurdles of developing artificial intelligence.” They argued:
    • A NIST voluntary framework-based consensus set of best practices would be pro-innovation, support U.S. leadership, be consistent with NIST’s ongoing engagement on AI industry consensus standards development, and align with U.S. support for the OECD AI principles as well as the draft Memorandum to Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications.”
  • The Department of Defense (DOD) “named seven U.S. military installations as the latest sites where it will conduct fifth-generation (5G) communications technology experimentation and testing. They are Naval Base Norfolk, Virginia; Joint Base Pearl Harbor-Hickam, Hawaii; Joint Base San Antonio, Texas; the National Training Center (NTC) at Fort Irwin, California; Fort Hood, Texas; Camp Pendleton, California; and Tinker Air Force Base, Oklahoma.”  The DOD explained “[t]his second round, referred to as Tranche 2, brings the total number of installations selected to host 5G testing to 12…[and] builds on DOD’s previously-announced 5G communications technology prototyping and experimentation and is part of a 5G development roadmap guided by the Department of Defense 5G Strategy.”
  • The Federal Trade Commission announced a $150,000 settlement with “HyperBeard, Inc. [which] violated the Children’s Online Privacy Protection Act Rule (COPPA Rule) by allowing third-party ad networks to collect personal information in the form of persistent identifiers to track users of the company’s child-directed apps, without notifying parents or obtaining verifiable parental consent.”
  • The National Institute of Standards and Technology (NIST) released Special Publication 800-133 Rev. 2, Recommendation for Cryptographic Key Generation, which “discusses the generation of the keys to be used with the approved cryptographic algorithms…[which] are either 1) generated using mathematical processing on the output of approved Random Bit Generators (RBGs) and possibly other parameters or 2) generated based on keys that are generated in this fashion.” (A minimal illustration of the first approach appears after this list.)
  • United States Trade Representative (USTR) announced “investigations into digital services taxes that have been adopted or are being considered by a number of our trading partners.” These investigations are “with respect to Digital Services Taxes (DSTs) adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, India, Indonesia, Italy, Spain, Turkey, and the United Kingdom.” The USTR is accepting comments until 15 July.
  • NATO’s North Atlantic Council released a statement “concerning malicious cyber activities” that have targeted medical facilities stating “Allies are committed to protecting their critical infrastructure, building resilience and bolstering cyber defences, including through full implementation of NATO’s Cyber Defence Pledge.” NATO further pledged “to employ the full range of capabilities, including cyber, to deter, defend against and counter the full spectrum of cyber threats.”
  • The Public Interest Declassification Board (PIDB) released “A Vision for the Digital Age: Modernization of the U.S. National Security Classification and Declassification System” that “provides recommendations that can serve as a blueprint for modernizing the classification and declassification system…[for] there is a critical need to modernize this system to move from the analog to the digital age by deploying advanced technology and by upgrading outdated paper-based policies and practices.”
  • In a Declaration on COVID-19 issued via a Department of State press release, the G7 Science and Technology Ministers stated their intentions “to work collaboratively, with other relevant Ministers to:
    • Enhance cooperation on shared COVID-19 research priority areas, such as basic and applied research, public health, and clinical studies. Build on existing mechanisms to further priorities, including identifying COVID-19 cases and understanding virus spread while protecting privacy and personal data; developing rapid and accurate diagnostics to speed new testing technologies; discovering, manufacturing, and deploying safe and effective therapies and vaccines; and implementing innovative modeling, adequate and inclusive health system management, and predictive analytics to assist with preventing future pandemics.
    • Make government-sponsored COVID-19 epidemiological and related research results, data, and information accessible to the public in machine-readable formats, to the greatest extent possible, in accordance with relevant laws and regulations, including privacy and intellectual property laws.
    • Strengthen the use of high-performance computing for COVID-19 response. Make national high-performance computing resources available, as appropriate, to domestic research communities for COVID-19 and pandemic research, while safeguarding intellectual property.
    • Launch the Global Partnership on AI, envisioned under the 2018 and 2019 G7 Presidencies of Canada and France, to enhance multi-stakeholder cooperation in the advancement of AI that reflects our shared democratic values and addresses shared global challenges, with an initial focus that includes responding to and recovering from COVID-19. Commit to the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.
    • Exchange best practices to advance broadband connectivity; minimize workforce disruptions, support distance learning and working; enable access to smart health systems, virtual care, and telehealth services; promote job upskilling and reskilling programs to prepare the workforce of the future; and support global social and economic recovery, in an inclusive manner while promoting data protection, privacy, and security.
  • The Digital, Culture, Media and Sport Committee’s Online Harms and Disinformation Subcommittee held a virtual meeting, which “is the second time that representatives of the social media companies have been called in by the DCMS Sub-committee in its ongoing inquiry into online harms and disinformation following criticism by Chair Julian Knight about a lack of clarity of evidence and further failures to provide adequate answers to follow-up correspondence.” Before the meeting, the Subcommittee sent a letter to Twitter, Facebook, and Google and received responses. The Subcommittee heard testimony from:
    • Facebook Head of Product Policy and Counterterrorism Monika Bickert
    • YouTube Vice-President of Government Affairs and Public Policy Leslie Miller
    • Google Global Director of Information Policy Derek Slater
    • Twitter Director of Public Policy Strategy Nick Pickles
  • Senators Ed Markey (D-MA), Ron Wyden (D-OR) and Richard Blumenthal (D-CT) sent a letter to AT&T CEO Randall Stephenson “regarding your company’s policy of not counting use of HBO Max, a streaming service that you own, against your customers’ data caps.” They noted “[a]lthough your company has repeatedly stated publicly that it supports legally binding net neutrality rules, this policy appears to run contrary to the essential principle that in a free and open internet, service providers may not favor content in which they have a financial interest over competitors’ content.”
  • The Brookings Institution released what it considers a path forward on privacy legislation and held a webinar on the report with Federal Trade Commissioner (FTC) Christine Wilson and former FTC Commissioner and now Microsoft Vice President and Deputy General Counsel Julie Brill.
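Returning to the NIST recommendation noted above: as a minimal sketch, the first approach SP 800-133 Rev. 2 describes, generating a symmetric key directly from random bit generator output, can look like the following. This assumes Python’s standard secrets module as a stand-in for an approved RBG; it is an illustration of the pattern, not NIST reference code, and a validated implementation would use an approved generator.

```python
# Minimal sketch of SP 800-133r2's first approach: a symmetric key taken
# directly from random bit generator output. Python's `secrets` module uses
# the OS CSPRNG, standing in here for a NIST-approved RBG.
import secrets


def generate_symmetric_key(bits: int = 256) -> bytes:
    """Return `bits` of fresh key material from the system random bit generator."""
    if bits % 8 != 0:
        raise ValueError("key length must be a whole number of bytes")
    return secrets.token_bytes(bits // 8)


key = generate_symmetric_key(256)  # e.g., suitable as an AES-256 key
print(key.hex())
```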

Further Reading

  • “Google: Overseas hackers targeting Trump, Biden campaigns” – Politico. In the latest in a series of attempted attacks, Google’s Threat Analysis Group announced this week that hackers affiliated with the People’s Republic of China tried to gain access to the campaign of former Vice President Joe Biden, and Iranian hackers tried the same with President Donald Trump’s reelection campaign. The group referred the matter to the federal government but said the attacks were not successful. An official from the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) remarked “[i]t’s not surprising that a number of state actors are targeting our elections…[and] [w]e’ve been warning about this for years.” The usual suspects will likely continue trying to hack into both presidential campaigns.
  • “Huawei builds up 2-year reserve of ‘most important’ US chips” – Nikkei Asian Review. The Chinese tech giant has been spending billions of dollars stockpiling United States’ (U.S.) chips, particularly server chips from Intel and programmable chips from Xilinx, the type that is hard to find elsewhere. The latter chip maker is seen as particularly crucial to both the U.S. and the People’s Republic of China (PRC) because it partners with the Taiwan Semiconductor Manufacturing Company, the entity persuaded by the Trump Administration to announce plans for a plant in Arizona. Huawei began these efforts shortly after the arrest of its CFO Meng Wanzhou in 2018 and spent almost $24 billion USD last year stockpiling crucial U.S. chips and other components.
  • “GBI investigation shows Kemp misrepresented election security” – Atlanta Journal-Constitution. Through freedom of information requests, the newspaper obtained records from the Georgia Bureau of Investigation (GBI) on the investigation it undertook at the behest of then Secretary of State Brian Kemp, requested days before the gubernatorial election he narrowly won. At the time, Kemp claimed hackers connected to the Democratic Party were trying to get into the state’s voter database, when it was in fact Department of Homeland Security personnel running a routine scan for vulnerabilities that Kemp’s office had agreed to months earlier. The GBI ultimately determined Kemp’s claims did not merit a prosecution. Moreover, even though Kemp’s staff at the time continue to deny these findings, the site did have vulnerabilities, including one turned up by a software company employee.
  • “Trump, Biden both want to repeal tech legal protections — for opposite reasons” – Politico. Former Vice President Joe Biden (D) wants to revisit Section 230 because, in his view, online platforms are not doing enough to combat misinformation. Biden laid out his views on this and other technology matters for the editorial board of The New York Times in January, at which point he said Facebook should have to face civil liability for publishing misinformation. Given Republican and Democratic discontent with Section 230 and the social media platforms, legislation limiting this shield from litigation may well be enacted.
  • “Wearables like Fitbit and Oura can detect coronavirus symptoms, new research shows” – The Washington Post. Perhaps wearable health technology is a better approach than contact tracing apps to determining when a person has contracted COVID-19. A handful of studies are producing positive results, but these studies have not yet undergone peer review. Still, these devices may be able to detect disequilibrium in one’s system as compared to a baseline, suggesting an infection and a need for a test. This article, however, did not explore the possible privacy implications of sharing one’s personal health data with private companies.
  • “Singapore plans wearable virus-tracing device for all” – Reuters. For less than an estimated $10 USD per unit, Singapore will soon introduce wearable devices to better track contacts and fight COVID-19, in what may be a sign that the city-state has given up on its contact tracing app, TraceTogether. It is not clear whether everyone will be required to wear one or what privacy and data protections will be in place.
  • “Exclusive: Zoom plans to roll out strong encryption for paying customers” – Reuters. In the same vein as Zoom allowing paying customers to choose where their calls are routed (e.g. paying customers in the United States could choose a different region with lesser surveillance capabilities), Zoom will soon offer stronger security for paying customers. Of course, should Zoom’s popularity during the pandemic solidify into a dominant competitive position, this new policy of offering end-to-end encryption that the company cannot crack would likely rouse the ire of the governments of the Five Eyes nations. These plans breathe further life into the views of those who see a future in which privacy and security are commodities to be bought, and those unable or unwilling to pay for them will enjoy neither. Nonetheless, the company may still face a Federal Trade Commission (FTC) investigation into its apparently inaccurate claims that calls were encrypted, which may have violated Section 5 of the FTC Act, along with similar investigations by other nations.
  • “Russia and China target U.S. protests on social media” – Politico. Largely eschewing doctored material, the Russian Federation and the People’s Republic of China (PRC) are using social media platforms to drive dissension and division in the United States (U.S.) during the protests by amplifying the messages and points of view of Americans, according to one think tank’s analysis. For example, some PRC officials have been tweeting out “Black Lives Matter” and claims that videos purporting to show police violence are, in fact, police violence. The goal is to fan the flames and further weaken Washington. Thus far, neither the American government nor the platforms themselves have had much of a public response. This also continues a trend of the PRC seeking to sow discord in the U.S., whereas before this year its use of social media and disinformation tended to be confined to issues of immediate concern to Beijing.
  • “The DEA Has Been Given Permission To Investigate People Protesting George Floyd’s Death” – BuzzFeed News. The Department of Justice (DOJ) used a little-known section of its delegated powers to task the Drug Enforcement Administration (DEA) with conducting “covert surveillance” to help police maintain order during the protests following the killing of George Floyd, among other duties. BuzzFeed News was given the two-page memorandum effectuating this expansion of the DEA’s responsibilities beyond drug crimes, most likely by agency insiders who oppose it. These efforts could include use of the agency’s authority to engage in “bulk collection” of some information, a practice with which the DOJ Office of the Inspector General (OIG) found significant issues, including the lack of legal analysis on the scope of the sprawling collection practices.
  • “Cops Don’t Need GPS Data to Track Your Phone at Protests” – Gizmodo. Underlying this extensive rundown of the types of data one’s phone leaks that are vacuumed up by a constellation of entities is the fact that more law enforcement agencies are buying or accessing these data because the Fourth Amendment’s protections do not apply to private parties giving the government information.
  • “Zuckerberg Defends Approach to Trump’s Facebook Posts” – The New York Times. Unlike Twitter, Facebook opted not to flag President Donald Trump’s posts last week about the protests arising from George Floyd’s killing, the same content Twitter found to be glorifying violence. CEO Mark Zuckerberg reportedly deliberated at length with senior leadership before deciding the posts did not violate the platform’s terms of service, a decision roundly criticized by Facebook employees, some of whom staged a virtual walkout on 1 June. In a conference call, Zuckerberg faced numerous questions about why the company does not respond more forcefully to posts that are inflammatory or untrue. His answers that Facebook does not act as an arbiter of truth were not well received among many employees.
  • “Google’s European Search Menu Draws Interest of U.S. Antitrust Investigators” – The New York Times. Department of Justice (DOJ) antitrust investigators are reportedly keenly interested in the system Google lives under in the European Union (EU), where Android users are now prompted to select a default search engine instead of Google simply making itself the default. This system was put in place in response to the EU’s €4.34 billion fine in 2018 for imposing “illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search.” It may be seen as a way to address competition issues without breaking up Google, as some have called for. However, Google conducts monthly auctions among the other search engines to be among the three choices given to EU consumers, which allows Google to reap additional revenue.


Trump Signs EO To Press Social Media Platforms

After Twitter fact-checked Trump’s Tweets on mail voting in California, a long-rumored EO was signed directing the U.S. government to determine how to pare back Section 230 liability protection.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

This week, after Twitter fact checked two of his Tweets making false claims about mail voting in California in response to the COVID-19 pandemic, President Donald Trump signed a long-rumored executive order (EO) seen by many as a means of cowing social media platforms. Given that the First Amendment to the United States Constitution guarantees freedom of speech only against government action, it is not clear how Twitter, a private company, could be treated as a government agency and therefore subject to the First Amendment. Nonetheless, the new EO would have one agency petition for a rulemaking to clarify the scope of the 47 USC 230 liability shield online platforms benefit from when they edit content, and would task other agencies with investigating whether Twitter and other platforms are violating their terms of service.

Twitter’s first fact checking of Trump’s tweeting occurred when he made false claims about California’s plan to mail ballots to registered voters, not, as the President claimed, to all residents of California. On 26 May, Trump tweeted across two Tweets:

There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone….. ….living in the state, no matter who they are or how they got there, will get one. That will be followed up with professionals telling all of these people, many of whom have never even thought of voting before, how, and for whom, to vote. This will be a Rigged Election. No way!

On 27 May, Twitter added “a label to two @realDonaldTrump Tweets about California’s vote-by-mail plans as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.”

The day after Twitter added this label, word began to leak from the White House that a long-rumored executive order regarding Section 230 of the Communications Decency Act was being prepared for the president’s signature. And, late in the day on 28 May, after a day of media reporting on the EO, Trump did indeed sign the “Executive Order on Preventing Online Censorship,” which asserted:

Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.  When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

Consequently, the EO directs that “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.”

With respect to specific actions, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) is directed to file a petition for rulemaking with the Federal Communications Commission (FCC) to clarify the interplay between clauses of Section 230, notably whether the liability shield that protects companies like Twitter and Facebook for content posted on an online platform also extends to so-called “editorial decisions,” presumably actions like Twitter’s fact checking of Trump regarding mail balloting. The NTIA would also ask the FCC to better define the conditions under which an online platform’s takedowns of content fall outside “good faith,” namely those that are “deceptive, pretextual, or inconsistent with a provider’s terms of service; or taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard.” The NTIA would further ask the FCC to promulgate any other regulations necessary to effectuate the EO.

Administration officials, including some at the FCC and FTC, have anonymously claimed the EO was pushed through this week, bypassing the usual policy vetting process. Moreover, some of these officials explained that FCC and FTC officials had mostly negative feedback on the draft EO circulated last year and claimed the agencies may even lack the authority to undertake the actions directed by the EO.

Three of the five FCC Commissioners made their positions clear on the NTIA petition that would execute this portion of the EO. Commissioner Brendan Carr remarked “I welcome today’s Executive Order and its call for guidance on the scope of the unique and conditional set of legal privileges that Congress conferred on social media companies but no other set of speakers in Section 230…[and] I look forward to receiving the petition that the NTIA files.” Commissioner Geoffrey Starks said “I’ll review the final Executive Order when it’s released and assess its impact on the FCC, but one thing is clear: the First Amendment and Section 230 remain the law of the land and control here.” Starks added “[o]ur top priority should be connecting all Americans to high-quality, affordable broadband.” Commissioner Jessica Rosenworcel stated:

This does not work.  Social media can be frustrating.  But an Executive Order that would turn the Federal Communications Commission into the President’s speech police is not the answer.  It’s time for those in Washington to speak up for the First Amendment.  History won’t be kind to silence.

Elsewhere in the EO, the head of each federal agency is directed to review the agency’s spending on online advertising and marketing and then report to the Office of Management and Budget (OMB). The Department of Justice would then “review the viewpoint-based speech restrictions imposed by each online platform identified in the [reports submitted to OMB] and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.”

The Federal Trade Commission (FTC) must consider whether online platforms are violating Section 5 of the FTC Act barring unfair or deceptive practices, which “may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.”

On 29 May Twitter labeled a Tweet of Trump’s about the riots in Minneapolis as being a violation of rules for “glorifying violence,” but the platform left the tweet up because “it may be in the public’s interest for the Tweet to remain accessible.” Twitter also disabled the reply function and explained “We try to prevent a tweet like this that otherwise breaks the Twitter rules from reaching more people, so we have disabled most of the ways to engage with it.”

Moreover, Twitter’s actions regarding Trump are not unprecedented for the platform. This year, it has removed Tweets from Brazil’s President Jair Bolsonaro and Venezuela’s President Nicolás Maduro, who were pushing unproven COVID-19 treatments. However, these takedowns seem at odds with a 2018 statement by Twitter:

Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate. It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.

Twitter claimed “[w]e review Tweets by leaders within the political context that defines them, and enforce our rules accordingly.”

Twitter has also resisted calls from a number of quarters to suspend Trump’s account for violating the platform’s terms of service. In October 2019, then presidential candidate Senator Kamala Harris (D-CA) wrote Twitter CEO Jack Dorsey, arguing:

I believe the President’s recent tweets rise to the level that Twitter should consider suspending his account. Others have had their accounts suspended for less offensive behavior. And when this kind of abuse is being spewed from the most powerful office in the United States, the stakes are too high to do nothing.

Moreover, Twitter banned political advertising in November 2019 but still allows issue advertising (e.g., an ad advocating for a southern border wall), and it is limiting the use of microtargeting.
