Further Reading, Other Developments, and Coming Events (16 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7”:
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The House Homeland Security Committee will hold a hearing titled “Worldwide Threats to the Homeland” on 17 September with the following witnesses:
    • Chad Wolf, Acting Secretary, Department of Homeland Security
    • Christopher Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center (NCTC)
  • On 17 September, the House Energy and Commerce Committee’s Communications & Technology Subcommittee will hold a hearing titled “Trump FCC: Four Years of Lost Opportunities.”
  • The House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee will hold a hearing titled “Interim Review of the National Security Commission on Artificial Intelligence Effort and Recommendations” on 17 September with these witnesses:
    • Dr. Eric Schmidt, Chairman, National Security Commission on Artificial Intelligence
    • HON Robert Work, Vice Chairman, National Security Commission on Artificial Intelligence
    • HON Mignon Clyburn, Commissioner, National Security Commission on Artificial Intelligence
    • Dr. José-Marie Griffiths, Commissioner, National Security Commission on Artificial Intelligence
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” The agency has released its agenda and explained:
    • The workshop will also feature four panel discussions that will focus on: case studies on data portability rights in the European Union, India, and California; case studies on financial and health portability regimes; reconciling the benefits and risks of data portability; and the material challenges and solutions to realizing data portability’s potential.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General, National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director, Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division, and Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency (CISA), Department of Homeland Security
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • The United States House of Representatives took up and passed two technology bills on 14 September. One of the bills, the “Internet of Things (IoT) Cybersecurity Improvement Act of 2020” (H.R. 1668), was discussed in yesterday’s Technology Policy Update as part of an outlook on Internet of Things (IoT) legislation (see here for analysis). The House passed a revised version by voice vote, but its fate in the Senate may lie with the Senate Homeland Security & Governmental Affairs Committee, whose chair, Senator Ron Johnson (R-WI), has blocked a number of technology bills during his tenure, to the chagrin of some House stakeholders. The House also passed the “AI in Government Act of 2019” (H.R. 2575), which would establish an AI Center of Excellence within the General Services Administration that would
    • “(1) advise and promote the efforts of the Federal Government in developing innovative uses of artificial intelligence by the Federal Government to the benefit of the public; and
    • (2) improve cohesion and competency in the use of artificial intelligence.”
    • Also, this bill would direct the Office of Management and Budget (OMB) to “issue a memorandum to the head of each agency that shall—
      • inform the development of artificial intelligence governance approaches by those agencies regarding technologies and applications that—
        • are empowered or enabled by the use of artificial intelligence within that agency; and
        • advance the innovative use of artificial intelligence for the benefit of the public while upholding civil liberties, privacy, and civil rights;
      • consider ways to reduce barriers to the use of artificial intelligence in order to promote innovative application of those technologies for the benefit of the public, while protecting civil liberties, privacy, and civil rights;
      • establish best practices for identifying, assessing, and mitigating any bias on the basis of any classification protected under Federal nondiscrimination laws or other negative unintended consequence stemming from the use of artificial intelligence systems; and
      • provide a template of the required contents of the agency Governance Plans.”
  • The House Energy and Commerce Committee marked up and reported out more than 30 bills last week, including:
    • The “Consumer Product Safety Inspection Enhancement Act” (H.R. 8134) that “would amend the Consumer Product Safety Act to enhance the Consumer Product Safety Commission’s (CPSC) ability to identify unsafe consumer products entering the United States, especially e-commerce shipments entering under the de minimis value exemption. Specifically, the bill would require the CPSC to enhance the targeting, surveillance, and screening of consumer products. The bill also would require electronic filing of certificates of compliance for all consumer products entering the United States.”
    • The bill directs the CPSC to: 1) examine a sampling of de minimis shipments and shipments coming from China; 2) detail plans and timelines to effectively address targeting and screening of de minimis shipments; 3) establish metrics by which to evaluate the effectiveness of the CPSC’s efforts in this regard; 4) assess projected technology, resources, and staffing necessary; and 5) submit a report to Congress regarding such efforts. The bill further directs the CPSC to hire at least 16 employees every year until staffing needs are met to help identify violative products at ports.
    • The “AI for Consumer Product Safety Act” (H.R. 8128) that “would direct the Consumer Product Safety Commission (CPSC) to establish a pilot program to explore the use of artificial intelligence for at least one of the following purposes: 1) tracking injury trends; 2) identifying consumer product hazards; 3) monitoring the retail marketplace for the sale of recalled consumer products; or 4) identifying unsafe imported consumer products.” The revised bill passed by the committee “changes the title of the bill to the “Consumer Safety Technology Act”, and adds the text based on the Blockchain Innovation Act (H.R. 8153) and the Digital Taxonomy Act (H.R. 2154)…[and] adds sections that direct the Department of Commerce (DOC), in consultation with the Federal Trade Commission (FTC), to conduct a study and submit to Congress a report on the state of blockchain technology in commerce, including its use to reduce fraud and increase security.” The revised bill “would also require the FTC to submit to Congress a report and recommendations on unfair or deceptive acts or practices relating to digital tokens.”
    • The “American Competitiveness Of a More Productive Emerging Tech Economy Act” or the “American COMPETE Act” (H.R. 8132) “directs the DOC and the FTC to study and report to Congress on the state of the artificial intelligence, quantum computing, blockchain, and the new and advanced materials industries in the U.S…[and] would also require the DOC to study and report to Congress on the state of the Internet of Things (IoT) and IoT manufacturing industries as well as the three-dimensional printing industry” involving “among other things: 1) listing industry sectors that develop and use each technology and public-private partnerships focused on promoting the adoption and use of each such technology; 2) establishing a list of federal agencies asserting jurisdiction over such industry sectors; and 3) assessing risks and trends in the marketplace and supply chain of each technology.”
    • The bill would direct the DOC to study and report on the effect of unmanned delivery services on U.S. businesses conducting interstate commerce. In addition to these report elements, the bill would require the DOC to examine safety risks and effects on traffic congestion and jobs of unmanned delivery services.
    • Finally, the bill would require the FTC to study and report to Congress on how artificial intelligence may be used to address online harms, including scams directed at senior citizens, disinformation or exploitative content, and content furthering illegal activity.
  • The National Institute of Standards and Technology (NIST) issued NIST Interagency or Internal Report 8272 “Impact Analysis Tool for Interdependent Cyber Supply Chain Risks” designed to help public and private sector entities better address complicated supply chain risks. NIST stated “[t]his publication describes how to use the Cyber Supply Chain Risk Management (C-SCRM) Interdependency Tool that has been developed to help federal agencies identify and assess the potential impact of cybersecurity events in their interconnected supply chains.” NIST explained (an illustrative sketch of the Tool’s node-scoring idea follows the excerpts below):
    • More organizations are becoming aware of the importance of identifying cybersecurity risks associated with extensive, complicated supply chains. Several solutions have been developed to help manage supply chains; most focus on contract management or compliance. There is a need to provide organizations with a systematic and more usable way to evaluate the potential impacts of cyber supply chain risks relative to an organization’s risk appetite. This is especially important for organizations with complex supply chains and highly interdependent products and suppliers.
    • This publication describes one potential way to visualize and measure these impacts: a Cyber Supply Chain Risk Management (C-SCRM) Interdependency Tool (hereafter “Tool”), which is designed to provide a basic measurement of the potential impact of a cyber supply chain event. The Tool is not intended to measure the risk of an event, where risk is defined as a function of threat, vulnerability, likelihood, and impact. Research conducted by the authors of this publication found that, at the time of publication, existing cybersecurity risk tools and research focused on threats, vulnerabilities, and likelihood, but impact was frequently overlooked. Thus, this Tool is intended to bridge that gap and enable users and tool developers to create a more complete understanding of an organization’s risk by measuring impact in their specific environments.
    • The Tool also provides the user greater visibility over the supply chain and the relative importance of particular projects, products, and suppliers (hereafter referred to as “nodes”) compared to others. This can be determined by examining the metrics that contribute to a node’s importance, such as the amount of access a node has to the acquiring organization’s IT network, physical facilities, and data. By understanding which nodes are the most important in their organization’s supply chain, the user can begin to understand the potential impact a disruption of that node may cause on business operations. The user can then prioritize the completion of risk mitigating actions to reduce the impact a disruption would cause to the organization’s supply chain and overall business.
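To make the quoted discussion of node importance concrete, the following is a minimal sketch of how a weighted impact score might be computed from access metrics like those NIST describes. The metric names, weights, and scoring formula are illustrative assumptions, not the Tool’s actual method.

    # Illustrative sketch only: the C-SCRM Interdependency Tool does not
    # necessarily score nodes this way; metric names and weights are assumptions.

    # Hypothetical weights for how much each kind of access drives impact.
    WEIGHTS = {"network_access": 0.40, "physical_access": 0.25, "data_access": 0.35}

    # Each node (project, product, or supplier) is scored 0-10 per metric.
    nodes = {
        "Supplier A": {"network_access": 8, "physical_access": 2, "data_access": 9},
        "Product B": {"network_access": 3, "physical_access": 7, "data_access": 4},
    }

    def importance(metrics):
        """Weighted sum of a node's access metrics (each scored 0-10)."""
        return sum(WEIGHTS[name] * value for name, value in metrics.items())

    # Rank nodes so the potentially most disruptive suppliers surface first.
    for name, metrics in sorted(nodes.items(), key=lambda kv: importance(kv[1]), reverse=True):
        print(f"{name}: impact score {importance(metrics):.1f}")

Ranking by a transparent weighted score is the kind of visibility the Tool aims to provide: a user can see why one supplier outranks another and prioritize risk-mitigating actions accordingly.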
  • In a blog post, Microsoft released its findings on the escalating threats to political campaigns and figures during the run-up to the United States’ (U.S.) election. This warning also served as an advertisement for Microsoft’s security products. But, be that as it may, these findings echo what U.S. security services have been saying for months. Microsoft stated:
    • In recent weeks, Microsoft has detected cyberattacks targeting people and organizations involved in the upcoming presidential election, including unsuccessful attacks on people associated with both the Trump and Biden campaigns, as detailed below. We have and will continue to defend our democracy against these attacks through notifications of such activity to impacted customers, security features in our products and services, and legal and technical disruptions. The activity we are announcing today makes clear that foreign activity groups have stepped up their efforts targeting the 2020 election as had been anticipated, and is consistent with what the U.S. government and others have reported. We also report here on attacks against other institutions and enterprises worldwide that reflect similar adversary activity.
    • We have observed that:
      • Strontium, operating from Russia, has attacked more than 200 organizations including political campaigns, advocacy groups, parties and political consultants
      • Zirconium, operating from China, has attacked high-profile individuals associated with the election, including people associated with the Joe Biden for President campaign and prominent leaders in the international affairs community
      • Phosphorus, operating from Iran, has continued to attack the personal accounts of people associated with the Donald J. Trump for President campaign
    • The majority of these attacks were detected and stopped by security tools built into our products. We have directly notified those who were targeted or compromised so they can take action to protect themselves. We are sharing more about the details of these attacks today, and where we’ve named impacted customers, we’re doing so with their support.
    • What we’ve seen is consistent with previous attack patterns that not only target candidates and campaign staffers but also those they consult on key issues. These activities highlight the need for people and organizations involved in the political process to take advantage of free and low-cost security tools to protect themselves as we get closer to election day. At Microsoft, for example, we offer AccountGuard threat monitoring, Microsoft 365 for Campaigns and Election Security Advisors to help secure campaigns and their volunteers. More broadly, these attacks underscore the continued importance of work underway at the United Nations to protect cyberspace and initiatives like the Paris Call for Trust and Security in Cyberspace.
  • The European Data Protection Supervisor (EDPS) has reiterated and expanded upon his calls for caution, prudence, and adherence to European Union (EU) law and principles in the use of artificial intelligence, especially as the EU looks to revamp its approach to AI and data protection. In a blog post, EDPS Wojciech Wiewiórowski stated:
    • The expectations of the increasing use of AI and the related economic advantages for those who control the technologies, as well as its appetite for data, have given rise to fierce competition about technological leadership. In this competition, the EU strives to be a frontrunner while staying true to its own values and ideals.
    • AI comes with its own risks and is not an innocuous, magical tool, which will heal the world harmlessly. For example, the rapid adoption of AI by public administrations in hospitals, utilities and transport services, financial supervisors, and other areas of public interest is considered in the EC White Paper ‘essential’, but we believe that prudency is needed. AI, like any other technology, is a mere tool, and should be designed to serve humankind. Benefits, costs and risks should be considered by anyone adopting a technology, especially by public administrations who process great amounts of personal data.
    • The increase in adoption of AI has not been (yet?) accompanied by a proper assessment of what the impact on individuals and on our society as a whole will likely be. Think especially of live facial recognition (remote biometric identification in the EC White Paper). We support the idea of a moratorium on automated recognition in public spaces of human features in the EU, of faces but also and importantly of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals.
    • Let’s not rush AI, we have to get it straight so that it is fair and that it serves individuals and society at large.
    • The context in which the consultation for the Data Strategy was conducted gave a prominent place to the role of data in matters of public interest, including combating the virus. This is good and right as the GDPR was crafted so that the processing of personal data should serve humankind. There are existing conditions under which such “processing for the public good” could already take place, and without which the necessary trust of data subjects would not be possible.
    • However, there is a substantial persuasive power in the narratives nudging individuals to ‘volunteer’ their data to address highly moral goals. Concepts such as ‘Data altruism’ or ‘Data donation’ and their added value are not entirely clear and there is a need to better define and lay down their scope, and possible purposes, for instance, in the context of scientific research in the health sector. The fundamental right to the protection of personal data cannot be ‘waived’ by the individual concerned, be it through a ‘donation’ or through a ‘sale’ of personal data. The data controller is fully bound by the personal data rules and principles, such as purpose limitation even when processing data that have been ‘donated’ i.e. when consent to the processing had been given by the individual.

Further Reading

  • “Peter Thiel Met With The Racist Fringe As He Went All In On Trump” By Rosie Gray and Ryan Mac — BuzzFeed News. A fascinating article about one of the technology world’s more interesting figures. As part of his decision to ally himself with Donald Trump during the 2016 presidential campaign, Peter Thiel also met with avowed white supremacists. However, it appears that the alliance is no longer worthy of his financial assistance or his public support, as he supposedly was disturbed by the Administration’s response to the pandemic. Nonetheless, Palantir, his company, has flourished during the Trump Administration and may go public right before circumstances change under a potential Biden Administration.
  • “TikTok’s Proposed Deal Seeks to Mollify U.S. and China” By David McCabe, Ana Swanson and Erin Griffith — The New York Times. ByteDance is apparently trying to mollify both Washington and Beijing in bringing Oracle onboard as a “trusted technology partner,” for the arrangement may be acceptable to both nations under their export control and national security regimes. Oracle handling and safeguarding TikTok user data would seem to address the Trump Administration’s concerns, while not selling the company nor permitting Oracle to access its recommendation algorithm would seem to appease the People’s Republic of China (PRC). Moreover, United States (U.S.) investors would hold control over TikTok even though PRC investors would maintain their stakes. Such an arrangement may satisfy the Committee on Foreign Investment in the United States (CFIUS), which has ordered ByteDance to sell the app that is an integral part of TikTok. The wild card, as always, is where President Donald Trump ultimately comes out on the deal.
  • “Oracle’s courting of Trump may help it land TikTok’s business and coveted user data” By Jay Greene and Ellen Nakashima — The Washington Post. This piece dives into why Oracle, at first blush, seems like an unlikely suitor for TikTok, but its eroding business position vis-à-vis cloud companies like Amazon explains its desire to diversify. Also, Oracle’s role as a data broker makes all the user data available from TikTok very attractive.
  • “Chinese firm harvests social media posts, data of prominent Americans and military” By Gerry Shih — The Washington Post. Another view on Shenzhen Zhenhua Data Technology, the entity from the People’s Republic of China (PRC) exposed for collecting the personal data of more than 2.4 million westerners, many of whom hold positions of power and influence. This article quotes a number of experts who were allowed to examine the leaked portions of the database and who are of the view the PRC has very little in the way of actionable intelligence at this point. The country is leveraging publicly available big data from a variety of sources and may ultimately make something useful from these data.
  • “‘This is f—ing crazy’: Florida Latinos swamped by wild conspiracy theories” By Sabrina Rodriguez and Marc Caputo — Politico. A number of sources are spreading rumors about former Vice President Joe Biden and the Democrats generally in order to curb support among a key demographic the party will need to carry overwhelmingly to win Florida.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Alexander Sinn on Unsplash

Senate Judiciary Hearing On Google

A subcommittee examines possibly anticompetitive practices by Google in the adtech market.

The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release announcing the hearing, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:

Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Chair Mike Lee (R-UT) said the focus of the hearing is Google’s online advertising business and whether it is a monopolist or has engaged in any conduct that harms competition and consumers. He said he would discuss antitrust policy more broadly before discussing Google. Lee remarked he has served on the subcommittee for nine years, six of them as chair, and during this period antitrust policy has evolved and a gulf has widened between the two sides of the issue. He claimed there are those who would like to see no antitrust laws at all, while others are overly deferential to speculative efficiencies and quick to dismiss actual evidence of competitive harm when it might conflict with unproven economic theories. Lee argued this end of the spectrum fetishizes freedom even when unchecked power might endanger freedom. He claimed they forget that markets, like governments, do not keep themselves free, and that liberty is only secure when power is diffused.

Lee said at the other extreme is a line of argument that has been pushing for years an agenda to transform the antitrust laws from a tool based in economic science to protect and promote competitive markets into a panacea for all perceived social ills. He said that, building on the myopic economic premise that big is bad, which to them is the beginning and end of the question, some at this end of the spectrum would use antitrust policy to address labor, racial, and income disparities. Lee conceded these may be laudable goals, but these are not problems antitrust law is meant to solve, nor are they goals antitrust law is capable of achieving, at least not without creating a host of other problems. He argued that attempts to repurpose antitrust law into a social justice program would have scores of unintended consequences that would cripple the United States’ (U.S.) economy for generations. He noted the hypocrisy in thinking big is bad only when it applies to corporations and not to the government bureaucracy that would be needed to dismantle large companies and regulate them.

Lee said he is on the side of the American people, the law, and vigorous enforcement of antitrust laws that have made the U.S. the most prosperous nation on earth. He asserted that already enacted laws are, for the most part, sufficient to meet the challenges of the day. Lee reiterated the maxim that liberty is only secure when power is diffused, a principle central to the U.S.’ Constitutional Republic. Lee claimed the concept of federalism, perhaps the greatest contribution of the founding generation, is what makes the U.S. unique among all other nations. He stated this principle applies to economic power as it does to political power. Lee contended that antitrust laws may be properly described as federalism for the economy.

Lee said the hearing is focused on what may prove the seminal antitrust case of the 21st Century, one that may define the terms of competition and innovation in the U.S.’ dynamic economy for years and decades to come. He said unlike some of his House colleagues, he has no interest in staging a political spectacle to attack, condescend, and talk over witnesses. Lee remarked that, naïve though it may be in 2020, his hope is that by looking at this specific question, the subcommittee can have a serious and frank conversation about the state of competition in digital markets. He declared that online advertising is an incredibly complex business, one that touches every single person on the internet.

Lee explained the technologies that connect publishers and advertisers have evolved rapidly over the last decade, and the expansion of online advertising has facilitated an explosion of online content by allowing even the smallest website owner to monetize the content they produce. He said small and local businesses have also benefitted from being able to quickly and easily promote their businesses without any of the capital investments that would have been required just a few decades ago. Lee admitted that, at the same time, this growth and expansion has been largely consolidated onto a single platform, Google’s online ad business. He said that as this business has grown, so, too, have complaints that Google, which operates both the ad selling and ad buying platforms and then sells its own inventory through those platforms, has conflicts of interest, along with claims that it has rigged online ad auction technology to favor its own interests and protect its own market share.

Lee said whether this is true or not matters because so many businesses depend upon digital advertising to market their products or to monetize the content they produce. Web users in turn benefit from free online content and from being connected to relevant businesses in ways that help them make optimal decisions. Lee said, simply put, markets function better when businesses thrive and consumers are informed. He asserted that, ideally, online advertising helps accomplish this, but if, on the other hand, online advertising has been monopolized and constrained by opaque pricing and exclusionary conditions, everyone loses to that degree. Lee added that Google and other big tech companies have been accused of other bad acts unrelated to antitrust or competition, and he said he has repeatedly expressed his concern about anti-conservative bias by these firms. He pledged to continue to pursue these concerns but added that while his concerns about anti-conservative bias may have implications for antitrust, like market power, today’s hearing is not fundamentally about those concerns.

Ranking Member Amy Klobuchar (D-MN) explained:

  • We are not having this hearing because Google is successful. Google is successful. I just used it on my way here. Or because Google is big. That’s not why, from my perspective, we’re having this hearing. We are having it because even successful companies, even popular companies, and even innovative companies are subject to the laws of this country, including our antitrust laws.
  • We are all successful when we make sure that our economy is strong and our economy is working better. But the law can’t be blinded by Google’s success or its past innovations if the company in its zeal to achieve greater success crosses a line into anticompetitive behavior. It’s our job to regulate it. It’s that simple. So we’re going to touch on issues, I hope, today of competition, technological innovation, the use of personal data. These are some of the defining issues, as the chair has said, defining issues of our time and I personally think, as we go into the months to come, this won’t just be about Google. This isn’t even just about the tech industry as much as I believe we need to change our laws and look at monopsonies and look at changing the burdens and making it so that our laws are as sophisticated as the companies that now occupy our economy.

Klobuchar asserted:

  • I think we need to do all that and I think it should be a huge priority going into the year. But right now as the chairman mentioned, we are focused on this issue today. Our society has never been more dependent on this technology than we are now in the midst of this global pandemic. As I noted, not just Google, the pandemic has forced a bunch of small businesses to close their doors and the five largest tech companies continue to thrive to the point where they briefly accounted for nearly 25% of the value of the entire S&P 500 stock index just a few weeks ago.
  • Again, I don’t quarrel with their success, but we have to start looking at do our laws really match that situation. And even if the original intent when these companies started as start-ups was to be innovative, which they’ve been, at what point do you cross the line so you squelch innovation and competition from other companies? We start with this, the ownership and use of data.
  • The powerful companies that provide us with these technologies are also collecting personal information. We know that. They know who our friends are, they know the books we read, where we live, whether we’ve graduated from college, income levels, race, how many steps we took yesterday. The chairman and I share an interest in this. How long we’ve stayed where we are. Machine learning analyzes troves of personal data, allowing our firms to discern even more sensitive information about us, our medical conditions, political, religious views and even preferences that we don’t even know we have. And why would companies do all of this? Well, put simply, to target us with digital advertisements. There’s really no other reason. It is a capitalist society. That’s what they do.

Klobuchar stated:

  • Now, Google makes more money doing that than any company in the world, hands down, by leveraging its unmatched access to consumer data gained through its existing dominance in online and mobile search, mobile operating systems (Android), email (Gmail), online and mobile video (YouTube), browsers (Chrome), mobile mapping apps (Google Maps), and ad technology.
  • So, this ad technology ecosystem, known as the ad tech stack, consists of advertisers on one side and publishers on the other. So let’s look at these two sides. On the advertising side Google controls access to the huge number of advertisers that place ads on Google search which is nearly 90% of the search market and has unparalleled access to data as I described. On the publisher side, Google has privileged access to ad data to inform its bidding strategies. And then it also effectively controls the process, the ad auction process, that gets an advertiser’s ad to be put on a publisher’s site. Google dominates all the markets for services on both sides of the ad tech stack, the publisher side and the advertising side, and I hope that will be a lot of our focus today. Research has suggested that Google may be taking between 30 and 70 percent of every advertising dollar spent by advertisers using its services, depriving publishers of that revenue. Who are the publishers? They’re content producers. They’re outlets like the Minneapolis Star Tribune; they depend on that revenue, as so many of our content producers and news producers do, to get by.
  • And to me, given that my dad was a journalist, to me this is one of the key elements here because if you have unfairness in how that ad ecosystem is working, then you’re depriving these news organizations, at a time when the First Amendment is already under assault, of the revenue that they need to keep going. So whether it’s happening, and we don’t know all of the details at the Department of Justice right now, this could be the beginning of a reckoning for our antitrust laws to start looking at how we’re going to grapple with the new kinds of markets that we see across the country. It would help answer the question whether our federal antitrust laws are able to restrain the business conduct of even the largest, most successful companies in the world. When you think of the breakup of AT&T, that was our last big thing that happened in the antitrust area. Really big thing. What did that lead to? Lower prices, more competition. It really worked. But we’re not able to do this right now.
  • And my hope is that we’re getting the start and the Justice Department, that things are going on at the FTC. But to really do that, they’re going to need resources to take on the legions of lawyers at the companies and that’s my first goal. What can we do for enforcement? My second, what do we have to do to make the laws work better, to look at some of the deals that have already been made? The third is what are the remedies? Do they make a difference in changing the behavior and allowing competition? I literally don’t have personal grudges against these companies like sometimes the president has expressed about various companies. I don’t. I just want our capitalist system to work. I want it to work. And to have it work you simply can’t have one company dominating areas of an industry. Our Founding Fathers started this country in part because they were rebelling against monopoly power.

Google Global Partnerships and Corporate Development President Donald Harrison stated (a worked example of his revenue-share figures follows the excerpt):

  • Online advertising prices in the U.S. have fallen more than 40% since 2010. According to the Progressive Policy Institute, “for every $3 that an advertiser spends on digital advertising, they would have to spend $5 on print advertising to get the same impact.” As a result, the share of U.S. GDP going to advertising in media has declined roughly 25% in recent years. The benefits of these lower prices flow directly to American businesses and consumers.
  • We help businesses grow from advertising on (1) our own sites, and (2) other publishers’ sites.
    • Advertising on Google sites and apps
    • A wide range of businesses, including many small firms, advertise on our sites and apps like Google Search and YouTube. That’s where we earn the majority of our advertising revenue.
    • We show no ads — and make no money — on the vast majority of searches. We show ads only on a small fraction of searches, typically those with commercial intent, such as searches for “sneakers” or “toaster.” We face intense competition for these types of searches. An estimated 55 percent of Americans start product searches on Amazon, not Google. And many online shoppers use Walmart, eBay, and other sites. For travel searches, many go to Expedia, Kayak, Orbitz, and TripAdvisor. Facebook, Bing, Twitter, Snap, Pinterest, and many more compete with us for a range of commercial advertisements.
    • Advertising on non-Google sites and apps
    • In addition to ads on our own properties, Google also helps businesses advertise on a wide range of other websites and mobile applications, known as “publishers.” We offer technology that (1) helps advertisers buy ad space — known as the “buy side,” and (2) helps publishers sell their ad space — known as the “sell side.” This technology is often referred to as “ad tech.”
    • The ad tech portion of our business accounts for a small fraction of our advertising revenue. And we share the majority of that revenue with publishers. Publishers get paid for every impression — each time an ad is viewed — even if the ad is never clicked. Of the revenue we retain, a large portion goes to defray the costs of running this complex and evolving business.
  • A crowded and competitive ad tech ecosystem
    • The ad tech space is crowded and competitive. Thousands of companies, large and small, work together and in competition with each other, each with different specialties and technologies. We compete with Adobe, Amazon, AT&T, Comcast, Facebook, News Corporation, Oracle, and Verizon, as well as leaders like Index Exchange, Magnite, MediaMath, OpenX, The Trade Desk, and many more.
  • Google shares billions of dollars with publishers, more than the industry average.
    • Even as online ad prices and ad tech fees have fallen, benefiting businesses and consumers, Google has helped publishers make more money from ads. In 2018, we paid more than $14 billion to the publishing partners in our ad network — up from $10 billion in 2015.
    • In 2019, when both advertisers and publishers used our tools, publishers kept over 69 percent of the ad revenue — more than the industry average. And when publishers use our tools to sell directly to advertisers, they keep even more of the revenue.
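
To put Harrison’s figures in concrete terms, the following is a worked example of how a single advertising dollar might flow through the ad tech stack. The 69 percent publisher share comes from his testimony; the even split of fees between the buy side and the sell side is an illustrative assumption.

    # Illustrative arithmetic only: the 0.69 publisher share is Harrison's 2019
    # figure (both sides using Google tools); the even buy-side/sell-side fee
    # split is an assumption, not a reported number.

    ad_spend = 1.00            # one advertiser dollar entering the stack
    publisher_share = 0.69     # share publishers kept, per Harrison

    total_fees = ad_spend * (1 - publisher_share)   # $0.31 in ad tech fees
    buy_side_fee = total_fees * 0.5                 # assumed even split
    sell_side_fee = total_fees - buy_side_fee

    print(f"Publisher receives: ${ad_spend * publisher_share:.2f}")
    print(f"Ad tech fees total: ${total_fees:.2f} "
          f"(buy side ${buy_side_fee:.2f}, sell side ${sell_side_fee:.2f})")

Note the gap the hearing turned on: the research Klobuchar cited put the combined take as high as 70 cents of that dollar, while Harrison’s figures put it closer to the 31 cents modeled here.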

Chalice Custom Algorithms Chief Executive Officer Adam Heimlich contended:

  • In 2016, Google combined search and display data, breaking a promise made to American regulators. Google also broke the industry’s privacy standard by linking consumers’ names, from Gmail, to the ID numbers assigned to browsers for exchange transactions.
  • Continuously, from 2016, Google came up with new ways to pollute the exchange ecosystem they’d previously seemed to embrace. Pollution came in the form of restrictions and exclusions that made the open web less efficient for buyers and sellers.
  • Google took YouTube, Google’s most valuable display property, off the exchanges, while making it available through an exclusive “pipe” from Google’s exchange bidder. Google excluded data providers from its websites and measurement partners from its platforms. Google’s selling platform denied publishers’ demand for a unified, exchange-vs-exchange auction. To keep publishers from getting rid of Google’s software, Google funnels exclusive display demand from its search platform through it. Google weaponized new privacy laws to restrict advertisers’ and publishers’ access to their own ad data in Google tools.
  • Google tightened ties among its products until the shady broker was no longer one among a set of competitors: Google became the only display company not hobbled by the exclusions and restrictions it’d placed on everyone else. The power to interoperate among buy-side, sell-side and measurement software went from being a feature of the exchange ecosystem to a capability exclusive to Google.
  • Now, progress on innovation is squeezed to the margins of the industry, and new adtech is rare. The majority of advertisers have stagnated or regressed.
  • There’s more at stake than most people realize. The more efficient the ad market, the more likely it is that superior new products will find customers and thrive. When the ad exchanges function properly, the size advantage from flooding the airwaves is offset by quieter voices speaking directly to whoever’s most open to any given improvement. It tilts the incentives of every business toward innovation.
  • Google is dominating display by breaking interoperability and subtracting the efficiencies of a symmetrical market. Pre-2016, under intense competitive pressure, ad exchanges were becoming more transparent and privacy-respectful as the ecosystem grew. Google could have coped with these developments without using its market power destructively: There was nothing to stop Google from exiting the arena or competing within its open standards. Whether or not Google competes with other big tech firms is irrelevant to the harms they’ve caused publishers, measurement companies, platforms and small businesses like mine in the ~$50B open web display market.
  • It was efficient when publishers, platforms, measurement tools and service providers all interoperated. Innovators of a great new product or service could access a global marketplace of thousands of buyers and sellers quickly at low cost. Small businesses with great ideas had a shorter ramp to success.
  • Now, funding for new adtech startups has been drying up and the pace of innovation slowed down. The number-one concern I hear from potential investors is Google’s domination of the market my company operates in. For years, they’ve been breaking existing efficiencies and preventing the development of new ones.
  • Many expect Google to successfully mislead regulators about its conduct in the open web, and its harmful effects. I’m grateful for the opportunity to help scrutinize Google’s claims. For the sake of competition, the innovation competition drives and the benefits innovation brings, Google should be forced to either exit the ad exchange market or compete within its open standards.

Omidyar Network Beneficial Technology Senior Advisor David Dinielli stated:

  • [U]nder current law, there is a strong case to be made that Google has illegally monopolized, or illegally maintained a monopoly in, the market for digital advertising on what is termed the “open web,” i.e., advertising that appears on websites as users traverse the internet.
  • Through a variety of conduct described herein, Google now occupies every layer of the “ad tech stack”—a term that describes the various functions that serve to match website publishers with the advertisers who seek to deliver targeted ads to consumers who are viewing those websites. In antitrust parlance, website publishers provide the “supply” of ad space, and advertisers create the “demand” for that space. The market for this sort of advertising is unique and appears on its face dysfunctional from an antitrust standpoint: Google—through its various ad tech tools—represents both the suppliers and the purchasers and also conducts the real-time auctions that match buyers and sellers and determine the price. Moreover, Google appears to have engaged in a multitude of anti-competitive acts, such as making the ad space on YouTube (which it owns) available exclusively through its own ad tech tools, that were designed to cement its lock on this market and exclude competitors. As my co-author and I said in a recent paper about the digital advertising market, “all roads lead through Google.”
  • Google has asserted that the digital advertising market is vibrant and competitive, and that publishers and advertisers have many options in buying and selling advertising space. Of course, it is not surprising that there are some other actors in this market, given the significant profits to be made. But a recent report from the United Kingdom’s Competition and Markets Authority (“CMA”) explained, based on an extensive factual investigation, that Google holds a dominant position—as high as 90%—in every layer of the ad tech stack. Moreover, a monopolization case in the U.S. does not require proof that the alleged monopolist hold 100% of a particular market—which would make it literally a monopolist—but rather that it has “monopoly power” and that it has engaged in anticompetitive conduct to obtain or maintain that power rather than competing on the merits. Google’s conduct as described herein surely fits that standard.
  • Digital advertising is complex and the tools and processes that allow for near-instantaneous placement of ads every time we open a web page can seem opaque. But the consequences of unchecked power in this market are significant. If advertisers are paying higher prices than would obtain in a well-functioning market, economic theory teaches that those higher advertising prices will be passed down to consumers in the form of increased prices for goods and services. If website publishers, such as local news outlets, are being paid less than they should for their supply of advertising space, they will invest less in content creation and news gathering. Google is the winner and the rest of us are the losers. This committee therefore is right in investigating if current antitrust law is up to the task of ensuring competition in digital advertising and exploring possible legislative fixes if it is not.

NetChoice Vice President and General Counsel Carl Szabo stated:

  • Among the many Google products and services that consumers love are Google Search, YouTube, Gmail, and Google Drive—all amazingly useful, and all free. To many critics of “Big Tech,” however, when consumers enthusiastically choose these free-of-charge products, it amounts to proof that something must be wrong. Every successful new service or product that proves a winner with consumers is deemed by these critics to be just another antitrust violation.
  • But Google’s greatest successes are being won in markets with the greatest competition. In the digital ads market, for example, Google faces fierce competitive pressure. You would never know that listening to the critics.
  • For starters, Google is no monopoly. It’s wildly popular with consumers, yes. And true, it’s also very popular with investors. But the company faces competition from all corners, including from other tech platforms such as Facebook and Amazon (which are simultaneously and thus illogically also dubbed monopolies).
  • Far from being evidence of any unlawful conduct, Google’s success under these conditions offers abundant proof that it is meeting and exceeding the fundamental test that has been the bedrock of antitrust law for the last four decades: are consumers benefitting? There can be little doubt on this point, for Google’s users vote daily with their choices. In order to dismiss this as irrelevant, the critics are now arguing that antitrust enforcement should simply abandon the consumer welfare standard, enabling them to attack “bigness” per se. This would undermine the very purpose of antitrust law since its inception more than a century ago.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Morning Brew on Unsplash

PRC Response To U.S. Clean Network

The PRC responds to the U.S.’ Clean Network program with a call for international, multilateral standards

In a speech, the People’s Republic of China’s (PRC) Foreign Minister Wang Yi proposed international, multilateral cooperation in addressing data security around the globe. In doing so, Wang took some obvious shots at recent policies announced by the United States (U.S.) and at longer-term actions such as surveillance by the National Security Agency (NSA). The PRC floated a “Global Initiative on Data Security” that would, on its face, seem to argue against actions being undertaken by Beijing against the U.S. and some of its allies. For example, this initiative would bar the stealing of “important data,” yet the PRC stands accused of hacking Australia’s Parliament. Nonetheless, the PRC is likely seeking to position itself as more internationalist than the U.S., which under President Donald Trump has become more isolationist and unilateralist in its policies. The PRC is also calling for the rule of law, especially around “security issues,” most likely a reference to the ongoing trade/national security dispute between the two nations playing out largely in their technology sectors.

Wang’s speech came roughly a month after the U.S. Department of State unveiled its Clean Network program, an initiative aimed at countering the national security risks posed by PRC technology companies, hardware, software, and apps (see here for more analysis). He even went so far as to condemn unilateral actions by one nation in particular looking to institute a “clean” network program. Wang framed this program as aiming to blunt the PRC’s competitive advantage by playing on national security fears. The Trump Administration has sought to persuade, cajole, and lean on other nations to forgo use of Huawei equipment and services in building their next generation 5G networks, with some success.

And yet, since the Clean Network program lacks much in the way of apparent enforcement mechanisms, the Department of State’s announcement may have had more to do with optics, as the Trump Administration and many of its Republican allies in Congress have pinned the blame for COVID-19 on the PRC and cast the country as the primary threat to the U.S. This has played out as the Trump Administration has been choking off access to advanced semiconductors and chips for PRC firms, banned TikTok and WeChat, and ordered ByteDance to sell musical.ly, the app and platform that served as the fulcrum by which TikTok was launched in the U.S.

Wang asserted the PRC “believes that to effectively address the risks and challenges to data security, the following principles must be observed:

  • First, uphold multilateralism. Pursuing extensive consultation and joint contribution for shared benefits is the right way forward for addressing the deficit in global digital governance. It is important to develop a set of international rules on data security that reflect the will and respect the interests of all countries through broad-based participation. Bent on unilateral acts, a certain country keeps making groundless accusations against others in the name of “clean” network and used security as a pretext to prey on enterprises of other countries who have a competitive edge. Such blatant acts of bullying must be opposed and rejected.
  • Second, balance security and development. Protecting data security is essential for the sound growth of digital economy. Countries have the right to protect data security according to law. That said, they are also duty-bound to provide an open, fair and non-discriminatory environment for all businesses. Protectionism in the digital domain runs counter to the laws of economic development and the trend of globalization. Protectionist practices undermine the right of global consumers to equally access digital services and will eventually hold back the country’s own development.
  • Third, ensure fairness and justice. Protection of digital security should be based on facts and the law. Politicization of security issues, double standards and slandering others violate the basic norms governing international relations, and seriously disrupt and hamper global digital cooperation and development.

Wang continued, “[i]n view of the new issues and challenges emerging in this field, China would like to propose a Global Initiative on Data Security, and looks forward to the active participation of all parties…[and] [l]et me briefly share with you the key points of our Initiative:

  • First, approach data security with an objective and rational attitude, and maintain an open, secure and stable global supply chain.
  • Second, oppose using ICT activities to impair other States’ critical infrastructure or steal important data.
  • Third, take actions to prevent and put an end to activities that infringe upon personal information, oppose abusing ICT to conduct mass surveillance against other States or engage in unauthorized collection of personal information of other States.
  • Fourth, ask companies to respect the laws of host countries, desist from coercing domestic companies into storing data generated and obtained overseas in one’s own territory.
  • Fifth, respect the sovereignty, jurisdiction and governance of data of other States, avoid asking companies or individuals to provide data located in other States without the latter’s permission.
  • Sixth, meet law enforcement needs for overseas data through judicial assistance or other appropriate channels.
  • Seventh, ICT products and services providers should not install backdoors in their products and services to illegally obtain user data.
  • Eighth, ICT companies should not seek illegitimate interests by taking advantage of users’ dependence on their products.

As mentioned in the opening paragraph of this article, the U.S. and many of its allies and partners would argue the PRC has transgressed a number of these proposed rules. However, the Foreign Ministry was very clever in how it drafted and translated these principles. In the second key principle, the PRC proposes that no country should use “ICT activities to impair other States’ critical infrastructure.” And yet, two international media outlets reported that the African Union’s (AU) computers were transmitting reams of sensitive data to Shanghai daily between 2012 and 2017. If this claim is true, and the PRC’s government was behind the exfiltration, is it fair to say the AU’s critical infrastructure was impaired? One could argue the infrastructure was not impaired even though there was apparently massive data exfiltration. Likewise, in the third key principle, the PRC appears to be condemning mass surveillance of other states, but just this week a PRC company was accused of compiling the personal information of more than 2.4 million people worldwide, many of them in influential positions such as the Prime Ministers of the United Kingdom and Australia. And yet, if this is the extent of the surveillance, it is not of the same magnitude as U.S. surveillance over the better part of the last two decades. Moreover, the PRC is not opposing a country’s mass surveillance of its own people, something the PRC is regularly accused of, especially against its Uighur minority.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Hanson Lu on Unsplash

Further Reading, Other Developments, and Coming Events (14 September)

Coming Events

  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The House Homeland Security Committee will hold a hearing titled “Worldwide Threats to the Homeland” on 17 September with the following witnesses:
    • Chad Wolf, Acting Secretary, Department of Homeland Security
    • Christopher Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center (NCTC)
  • On 17 September, the House Energy and Commerce Committee’s Communications & Technology Subcommittee will hold a hearing titled “Trump FCC: Four Years of Lost Opportunities.”
  • The House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee will hold a hearing on 17 September titled “Interim Review of the National Security Commission on Artificial Intelligence Effort and Recommendations” with these witnesses:
    • Dr. Eric Schmidt, Chairman, National Security Commission on Artificial Intelligence
    • HON Robert Work, Vice Chairman, National Security Commission on Artificial Intelligence
    • HON Mignon Clyburn, Commissioner, National Security Commission on Artificial Intelligence
    • Dr. José-Marie Griffiths, Commissioner, National Security Commission on Artificial Intelligence
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” The agency has released its agenda and explained:
    • The workshop will also feature four panel discussions that will focus on: case studies on data portability rights in the European Union, India, and California; case studies on financial and health portability regimes; reconciling the benefits and risks of data portability; and the material challenges and solutions to realizing data portability’s potential.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • After Ireland’s Data Protection Commission (DPC) directed Facebook to stop transferring the personal data of European Union citizens to the United States (U.S.), the company filed suit in Ireland’s High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge. Earlier this summer, the Court of Justice of the European Union (CJEU) struck down the adequacy decision for the agreement between the European Union (EU) and U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e., the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCCs. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • In a related development, the European Data Protection Board (EDPB) has established “a taskforce to look into complaints filed in the aftermath of the CJEU Schrems II judgement.” The EDPB noted the 101 identical complaints “lodged with EEA Data Protection Authorities against several controllers in the European Economic Area (EEA) member states regarding their use of Google/Facebook services which involve the transfer of personal data.” The Board added “[s]pecifically the complainants, represented by the NGO NOYB, claim that Google/Facebook transfer personal data to the U.S. relying on the EU-U.S. Privacy Shield or Standard Contractual Clauses and that according to the recent CJEU judgment in case C-311/18 the controller is unable to ensure an adequate protection of the complainants’ personal data.” The EDPB claimed “[t]he taskforce will analyse the matter and ensure a close cooperation among the members of the Board…[and] [t]his taskforce will prepare recommendations to assist controllers and processors with their duty to identify and implement appropriate supplementary measures to ensure adequate protection when transferring data to third countries.” EDPB Chair Andrea Jelinek cautioned “the implications of the judgment are wide-ranging, and the contexts of data transfers to third countries very diverse…[and] [t]herefore, there cannot be a one-size-fits-all, quick fix solution.” She added “[e]ach organisation will need to evaluate its own data processing operations and transfers and take appropriate measures.”
  • An Australian court ruled against Facebook in its effort to dismiss a suit brought against the company for its role in retaining and providing personal data to Cambridge Analytica. The Federal Court of Australia rejected Facebook’s bid to reverse a previous ruling that allowed the Office of the Australian Information Commissioner (OAIC) to sue Facebook’s United States and Irish entities.
    • In March, the OAIC filed suit in federal court in Australia, alleging the two companies transgressed the privacy rights of 311,127 Australians under Australia’s Privacy Act. The two companies could face liability as high as AUD $1.7 million per violation.
    • In its November 2018 report to Parliament titled “Investigation into the use of data analytics in political campaigns”, the United Kingdom’s Information Commissioner’s Office (ICO) explained
      • One key strand of our investigation involved allegations that an app, ultimately referred to as ‘thisisyourdigitallife’, was developed by Dr Aleksandr Kogan and his company Global Science Research (GSR) in order to harvest the data of up to 87 million global Facebook users, including one million in the UK. Some of this data was then used by Cambridge Analytica, to target voters during the 2016 US Presidential campaign process.
    • In its July 2018 report titled “Democracy disrupted? Personal information and political influence,” the ICO explained
      • The online targeted advertising model used by Facebook is very complex, and we believe a high level of transparency in relation to political advertising is vital. This is a classic big-data scenario: understanding what data is going into the system; how users’ actions on Facebook are determining what interest groups they are placed in; and then the rules that are fed into any dynamic algorithms that enable organisations to target individuals with specific adverts and messaging.
      • Our investigation found significant fair-processing concerns both in terms of the information available to users about the sources of the data that are being used to determine what adverts they see and the nature of the profiling taking place. There were further concerns about the availability and transparency of the controls offered to users over what ads and messages they receive. The controls were difficult to find and were not intuitive to the user if they wanted to control the political advertising they received. Whilst users were informed that their data would be used for commercial advertising, it was not clear that political advertising would take place on the platform.
      • The ICO also found that despite a significant amount of privacy information and controls being made available, overall they did not effectively inform the users about the likely uses of their personal information. In particular, more explicit information should have been made available at the first layer of the privacy policy. The user tools available to block or remove ads were also complex and not clearly available to users from the core pages they would be accessing. The controls were also limited in relation to political advertising.
  • The Australian Competition & Consumer Commission (ACCC) announced it “will be examining the experiences of Australian consumers, developers, suppliers and others in a new report scrutinising mobile app stores” according to the agency’s press release. The ACCC’s inquiry comes at the same time regulators in the United States and the European Union are investigating the companies for their app store practices, which could lead to enforcement actions. The ACCC is also looking to institute a code that would require Google and Facebook to pay Australian media outlets for content used on their platforms. The ACCC stated that “[i]ssues to be examined include the use and sharing of data by apps, the extent of competition between Google and Apple’s app stores, and whether more pricing transparency is needed in Australia’s mobile apps market.” The ACCC added:
    • Consumers are invited to share their experiences with buying and using apps through a short survey. The ACCC has also released an issues paper seeking views and feedback from app developers and suppliers.
    • In the issues paper, the ACCC explained “[p]otential outcomes” could be:
      • findings regarding structural, competitive or behavioural issues affecting the supply of apps
      • increased information about competition, pricing and other practices in the supply of apps and on app marketplaces
      • ACCC action to address any conduct that raises concerns under the Competition and Consumer Act 2010, and
      • recommendations to the Government for legislative reform to address systemic issues.
  • The Government Accountability Office (GAO) found that United States (U.S.) Customs and Border Protection (CBP) has implemented spotty, incomplete privacy measures in using facial recognition technology (FRT) at ports of entry.
    • The House Homeland Security Committee and the Senate Homeland Security and Governmental Affairs Committee asked the GAO
      • to review United States (U.S.) Customs and Border Protection (CBP) and Transportation Security Administration’s (TSA) facial recognition technology capabilities for traveler identity verification. This report addresses (1) the status of CBP’s testing and deployment of facial recognition technology at ports of entry, (2) the extent to which CBP’s use of facial recognition technology has incorporated privacy principles consistent with applicable laws and policies, (3) the extent to which CBP has assessed the accuracy and performance of its facial recognition capabilities at ports of entry, and (4) the status of TSA’s testing of facial recognition capabilities and the extent to which TSA’s facial recognition pilot tests incorporated privacy principles.
    • The GAO noted:
      • Most recently, in 2017, we reported that CBP had made progress in testing biometric exit capabilities, including facial recognition technology, but challenges continued to affect CBP’s efforts to develop and implement a biometric exit system, such as differences in the logistics and infrastructure among ports of entry. As we previously reported, CBP had tested various biometric technologies in different locations to determine which type of technology could be deployed on a large scale without disrupting legitimate travel and trade, while still meeting its mandate to implement a biometric entry-exit system. Based on the results of its testing, CBP concluded that facial recognition technology was the most operationally feasible and traveler-friendly option for a comprehensive biometric solution. Since then, CBP has prioritized testing and deploying facial recognition technology at airports (referred to as air exit), with seaports and land ports of entry to follow. These tests and deployments are part of CBP’s Biometric Entry-Exit Program.
      • As part of TSA’s mission to protect the nation’s transportation systems and to ensure freedom of movement for people and commerce, TSA has been exploring facial recognition technology for identity verification at airport checkpoints. Since 2017, TSA has conducted a series of pilot tests—some in partnership with CBP—to assess the feasibility of using facial recognition technology to automate traveler identity verification at airport security checkpoints. In April 2018, TSA signed a policy memorandum with CBP on the development and implementation of facial recognition capabilities at airports.
    • The GAO made recommendations to CBP:
      • The Commissioner of CBP should ensure that the Biometric Entry-Exit Program’s privacy notices contain complete and current information, including all of the locations where facial recognition is used and how travelers can request to opt out as appropriate. (Recommendation 1)
      • The Commissioner of CBP should ensure that the Biometric Entry-Exit Program’s privacy signage is consistently available at all locations where CBP is using facial recognition. (Recommendation 2)
      • The Commissioner of CBP should direct the Biometric Entry-Exit Program to develop and implement a plan to conduct privacy audits of its commercial partners’, contractors’, and vendors’ use of personally identifiable information. (Recommendation 3)
      • The Commissioner of CBP should develop and implement a plan to ensure that the biometric air exit capability meets its established photo capture requirement. (Recommendation 4)
      • The Commissioner of CBP should develop a process by which Biometric Entry-Exit program officials are alerted when the performance of air exit facial recognition falls below established thresholds. (Recommendation 5)
  • The United States (U.S.) Agency for Global Media (USAGM) is being sued by the Open Technology Fund (OTF), an entity it funds and oversees, over efforts to remove and replace OTF’s leadership.
    • Previously, the United States Court of Appeals for the District of Columbia Circuit enjoined USAGM from “taking any action to remove or replace any officers or directors of the OTF,” pending the outcome of the suit, which is being expedited.
    • Additionally, USAGM CEO and Chair of the Board Michael Pack is being accused in two different letters of seeking to compromise the integrity and independence of two organizations he oversees. There have been media accounts of the Trump Administration’s remaking of USAGM in ways critics contend are threatening the mission and effectiveness of the Open Technology Fund (OTF), a U.S. government non-profit designed to help dissidents and endangered populations throughout the world. The head of the OTF has been removed, evoking the ire of Members of Congress, and other changes have been implemented that are counter to the organization’s mission. Likewise, there are allegations that politically-motivated policy changes seek to remake the Voice of America (VOA) into a less independent entity.
      • In a letter to Pack, OTF argued that a number of recent actions Pack has undertaken have violated “firewall protections” in the organization’s grant agreement. They further argue that Pack is conflicted and should turn over the investigation to the United States (U.S.) Department of State’s Office of the Inspector General (OIG). OTF alleged the following:
        • 1. Attempts to compromise and undermine OTF’s independence: USAGM has repeatedly attempted to undermine OTF’s independence over the past several months.
        • 2. Attempts to compromise and undermine integrity: USAGM has also attempted to undermine the integrity of OTF by publicly making numerous false and misleading claims about OTF to the internet freedom community, the general public, and even to Congress.
        • 3. Attempts to compromise and undermine security: USAGM has attempted to undermine the security of OTF, our staff, and our project partners, many of whom operate in highly sensitive environments, by
          • 1) attempting to gain unauthorized and unsupervised access to our office space and
          • 2) by requesting vast amounts of sensitive information and documentation with no apparent grant-related purpose, and no regard for the security of that information and documentation
        • 4. Attempts to compromise and undermine privacy: Closely related to USAGM’s attempts to undermine OTF’s security, USAGM has also attempted to undermine the privacy of OTF’s staff and partners by requesting that OTF provide Personally Identifiable Information (PII) without a clearly articulated grant-related purpose, and with no guarantee that the PII will be handled in a secure manner.
        • 5. Attempts to compromise and undermine effectiveness: USAGM’s actions have undermined the effectiveness of OTF by:
          • 1) freezing and subsequently withholding $19,181,791 in congressionally appropriated funding from OTF, forcing OTF to issue stop-work orders to 49 of our 60 internet freedom projects;
          • 2) providing unjustified, duplicative, overbroad, and unduly burdensome requests for information and documentation, without any clear grant-related purpose, and with clearly unreasonable deadlines;
          • 3) attempting to divert and redirect funding obligated by USAGM to OTF in an effort to duplicate OTF’s work; and
          • 4) threatening to terminate OTF’s Grant Agreement.
    • OTF asserted
      • These actions individually serve to seriously undermine OTF’s organizational and programmatic effectiveness. In their combined aggregate they threaten to dismantle OTF’s basic ability to effectively carry out its congressionally mandated mission to the detriment of USAGM and the cause of internet freedom globally.
    • A group of VOA journalists wrote the entity’s acting director, asserting that Pack’s actions “risk crippling programs and projects for some countries that are considered national security priorities.” They added:
      • He has ordered the firing of contract journalists, with no valid reason, by cancelling their visas, forcing them back to home countries where the lives of some of them may be in jeopardy. Now the purge appears to be expanding to include U.S. permanent residents and even U.S. citizens, with Mr. Pack recklessly expressing that being a journalist is “a great cover for a spy.”
  • The Cyberspace Solarium Commission (CSC) issued its latest white paper to address a continuing problem for the United States’ government: how to attract or train a sufficient cyber workforce when private sector salaries are generally better. In “Growing A Stronger Federal Cyber Workforce,” the CSC claimed “Currently more than one in three public-sector cyber jobs sits open…[and] [f]illing these roles has been a persistent and intractable problem over the past decade, in large part due to a lack of coordination and leadership.” The CSC averred “[i]n the context of this pervasive challenge, the fundamental purpose of this paper is to outline the elements required for a coherent strategy that enables substantive and coordinated investment in cyber workforce development and calls for a sustained investment in that strategy.” The CSC then proceeded to lay out “five elements to guide development of a federal cyber workforce strategy:
    • Organize: Federal departments and agencies must have flexible tools for organizing and managing their workforce that can adapt to each organization’s individual mission while also providing coherence across the entirety of the federal government. To appropriately organize the federal cyber workforce, the CSC recommends properly identifying and utilizing cyber-specific occupational classifications to allow more tailored workforce policies, building a federal cyber service to provide clear and agile hiring authorities and other personnel management tools, and establishing coordination structures to provide clear leadership for federal workforce development efforts.
    • Recruit: Federal leaders must focus on the programs that make public service an attractive prospect to talented individuals. In many ways, the federal government’s greatest tool for recruitment is the mission and unique learning opportunities inherent in federal work. To capitalize on these advantages, the government should invest in existing programs such as CyberCorps: Scholarship for Service and the Centers of Academic Excellence, while also working to mitigate recruitment barriers that stem from the personnel security clearance process.
    • Develop: The federal government, like all cyber employers, cannot expect every new employee to have hands-on experience, a four-year degree, and a list of industry certifications. Rather, the federal government will be stronger if it draws from a broad array of educational backgrounds and creates opportunities for employees to gain knowledge and experience as they work. This effort will call for many innovative approaches, among which the Commission particularly recommends apprenticeship programs and upskilling opportunities to support cyber employee development.
    • Retain: Federal leaders should take a nuanced view of retention, recognizing that enabling talent to move flexibly between the public and private sectors enables a stronger cyber workforce overall. However, federal employers can take steps to encourage their employees to increase the time they spend in public service. Improving pay flexibility is a major consideration, but continuing the development of career pathways and providing interesting career development opportunities like rotational and exchange programs also can be critical. Of particular note, federal employers can increase retention of underrepresented groups through the removal of inequities and barriers to advancement in the workplace.
    • Stimulate growth: The federal government cannot simply recruit a larger share of the existing national talent pool. Rather, leaders must take steps to grow the talent pool itself in order to increase the numbers of those available for federal jobs. To promote growth of the talent pool nationwide, the federal government must first coordinate government efforts working toward this goal. Executive branch and congressional leaders should also invest in measures to promote diversity across the national workforce and incentivize research to provide a greater empirical understanding of cyber workforce dynamics. Finally, federal leaders must work to increase the military cyber workforce, which has a significant impact on the national cyber workforce because it serves as both a source and an employer of cyber talent.

Further Reading

  • “Oracle reportedly wins deal for TikTok’s US operations as ‘trusted tech partner’” By Tom Warren and Nick Statt – The Verge. ByteDance chose Oracle over Microsoft, but not to buy its operations in the United States (U.S.), Australia, Canada, and New Zealand. Instead, Oracle is proposing to be TikTok’s trusted technology partner, which seems to mean hosting TikTok’s operations in the U.S. and managing its data as a means of allaying the concerns of the U.S. government about access by the People’s Republic of China (PRC).
  • “Why Do Voting Machines Break on Election Day?” By Adrianne Jeffries – The Markup. This piece seeks to debunk the hype by explaining that most voting issues are minor and easily fixed, which may well be a welcome message in the United States (U.S.) given the lies and fretting about the security and accuracy of the coming election. Nonetheless, the mechanical and systemic problems encountered by some Americans do speak to the need to update voting laws and standards. Among other problems are the high barriers to entry for firms making and selling voting machines.
  • “Twitter steps up its fight against election misinformation” By Elizabeth Dwoskin – The Washington Post. Twitter and Google announced policy changes, as Facebook did last week, to help tamp down untrue claims and lies about voting and elections in the United States. Twitter will take a number of different approaches to handling lies and untrue assertions. If past is prologue, President Donald Trump may soon look to test the limits of this policy as he did shortly after Facebook announced its policy changes. Google will adjust searches on election day to place respected, fact-oriented organizations at the top of search results.
  • “China’s ‘hybrid war’: Beijing’s mass surveillance of Australia and the world for secrets and scandal” By Andrew Probyn and Matthew Doran – ABC News; “Zhenhua Data leak: personal details of millions around world gathered by China tech company” By Daniel Hurst in Canberra, Lily Kuo in Beijing and Charlotte Graham-McLay in Wellington – The Guardian. A massive database leaked to an American shows the breadth and range of information collected by a company in the People’s Republic of China (PRC) alleged to be working with the country’s military and security services. Zhenhua Data is denying any wrongdoing or anything untoward, but the database contains information on 2.4 million people, most of whom live in western nations and hold positions of influence and power, such as British and Australian Prime Ministers Boris Johnson and Scott Morrison. Academics claim this sort of compilation of information from public and private sources is unprecedented and would allow the PRC to run a range of influence operations.
  • “Europe Feels Squeeze as Tech Competition Heats Up Between U.S. and China” By Steven Erlanger and Adam Satariano – The New York Times. Structural challenges in the European Union (EU) and a lack of large technology companies have left the EU in a delicate position. It seeks to be the world’s de facto regulator but is having trouble keeping up with the United States and the People’s Republic of China, the two dominant nations in technology.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by PixelAnarchy from Pixabay

Pending Legislation In U.S. Congress, Part V

Congress may well pass IoT legislation this year, and the two bills under consideration take different approaches.

Continuing our look at bills Congress may pass this year leads us to an issue area that has received attention but no legislative action: the Internet of Things (IoT). Many Members are aware of and concerned about the lax or nonexistent security standards for many such devices, which leave them open to attack or to being used as part of a larger botnet to attack other internet-connected devices. There are two bills with significant odds of being enacted. One stands a better chance than the other because it is a more modest bill and is attached to the Senate’s FY 2021 National Defense Authorization Act (NDAA). However, the other bill is finally coming to the House floor today, which may shake loose its companion bill in the Senate.

As the United States (U.S.) Departments of Commerce and Homeland Security explained in “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats,” insecure IoT poses huge threats to the rest of the connected world:

The Distributed Denial of Service (DDoS) attacks launched from the Mirai botnet in the fall of 2016, for example, reached a level of sustained traffic that overwhelmed many common DDoS mitigation tools and services, and even disrupted a Domain Name System (DNS) service that was a commonly used component in many DDoS mitigation strategies. This attack also highlighted the growing insecurities in—and threats from—consumer-grade IoT devices. As a new technology, IoT devices are often built and deployed without important security features and practices in place. While the original Mirai variant was relatively simple, exploiting weak device passwords, more sophisticated botnets have followed; for example, the Reaper botnet uses known code vulnerabilities to exploit a long list of devices, and one of the largest DDoS attacks seen to date recently exploited a newly discovered vulnerability in the relatively obscure MemCacheD software.
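
To make the report’s point concrete, here is a minimal, purely illustrative sketch in Python of the kind of fleet audit that would have caught Mirai’s foothold. The credential list is a hypothetical sample of the factory defaults the original botnet tried over exposed telnet, not Mirai’s actual list, and the device inventory is invented:

```python
# Illustrative sketch: flag devices still using factory-default credentials,
# the weakness the original Mirai variant exploited. The credential list is a
# hypothetical sample, not Mirai's real list.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
}

def uses_factory_default(username: str, password: str) -> bool:
    """Return True if a login pair would fall to a default-credential sweep."""
    return (username, password) in DEFAULT_CREDENTIALS

# Audit a (hypothetical) device inventory rather than scanning anything live.
inventory = [
    ("camera-01", "admin", "admin"),
    ("dvr-07", "root", "x9!LongRandomSecret"),
]
for device, user, password in inventory:
    if uses_factory_default(user, password):
        print(f"{device}: factory-default credentials -- rotate immediately")
    else:
        print(f"{device}: ok")
```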

Later in the report, as part of one of the proposed goals, the departments asserted:

When market incentives encourage manufacturers to feature security innovations as a balanced complement to functionality and performance, it increases adoption of tools and processes that result in more secure products. As these security features become more popular, increased demand will drive further research.

However, I would argue there are no such market incentives at this point, for most people looking to buy and use IoT are not even thinking about security except in the most superficial ways. Moreover, manufacturers and developers of IoT have not experienced the sort of financial liability or regulatory action that might change the incentive structure. In May, the Federal Trade Commission (FTC) reached “a settlement with a Canadian company related to allegations it falsely claimed that its Internet-connected smart locks were designed to be ‘unbreakable’ and that it took reasonable steps to secure the data it collected from users.”

As mentioned, one of the two major IoT bills stands a better chance of enactment. The “Developing Innovation and Growing the Internet of Things Act” (DIGIT Act) (S. 1611) would establish the beginnings of a statutory regime for the regulation of IoT at the federal level. The bill is sponsored by Senators Deb Fischer (R-NE), Cory Gardner (R-CO), Brian Schatz (D-HI), and Cory Booker (D-NJ) and is substantially similar to legislation (S. 88) the Senate passed unanimously in the last Congress but the House never took up. In January, the Senate passed the bill by unanimous consent, but the House has yet to take it up. S. 1611 was then added as an amendment to the “National Defense Authorization Act for Fiscal Year 2021” (S. 4049) in July. Its inclusion in an NDAA passed by a chamber of Congress dramatically increases the chances of enactment. However, it is possible the stakeholders in the House that have stopped this bill from advancing may yet succeed in stripping it out of a final NDAA.

Under this bill, the Secretary of Commerce must “convene a working group of Federal stakeholders for the purpose of providing recommendations and a report to Congress relating to the aspects of the Internet of Things, including”

  • identify any Federal regulations, statutes, grant practices, budgetary or jurisdictional challenges, and other sector-specific policies that are inhibiting, or could inhibit, the development or deployment of the Internet of Things;
  • consider policies or programs that encourage and improve coordination among Federal agencies that have responsibilities that are relevant to the objectives of this Act;
  • consider any findings or recommendations made by the steering committee and, where appropriate, act to implement those recommendations;
  • examine—
    • how Federal agencies can benefit from utilizing the Internet of Things;
    • the use of Internet of Things technology by Federal agencies as of the date on which the working group performs the examination;
    • the preparedness and ability of Federal agencies to adopt Internet of Things technology as of the date on which the working group performs the examination and in the future; and
    • any additional security measures that Federal agencies may need to take to—
      • safely and securely use the Internet of Things, including measures that ensure the security of critical infrastructure; and
      • enhance the resiliency of Federal systems against cyber threats to the Internet of Things.

S. 1611 requires this working group to have representatives from specified agencies such as the National Telecommunications and Information Administration, the National Institute of Standards and Technology, the Department of Homeland Security, the Office of Management and Budget, the Federal Trade Commission, and others. Nongovernmental stakeholders would also be represented on this body. Moreover, a steering committee would be established inside the Department of Commerce to advise this working group on a range of legal, policy, and technical issues. Within 18 months of enactment, the working group would need to submit its recommendations to Congress, which would then presumably inform additional legislation regulating IoT. Finally, the Federal Communications Commission (FCC) would report to Congress on “future spectrum needs to enable better connectivity relating to the Internet of Things” after soliciting input from interested parties.

As noted, there is another IoT bill in Congress that may make it to the White House. In June 2019, the Senate and House committees of jurisdiction marked up their versions of the “Internet of Things (IoT) Cybersecurity Improvement Act of 2019” (H.R. 1668/S. 734), legislation that would tighten the federal government’s standards with respect to buying and using IoT. In what may augur enactment of this legislation, the House will take up its version today. However, new language in the amended bill coming to the floor, making clear that the IoT standards for the federal government would not apply to “national security systems” (i.e., most Department of Defense, Intelligence Community, and similar systems), suggests what roadblock may have stalled this legislation for 15 months. It is reasonable to deduce that the aforementioned agencies made their case to the bill’s sponsors or allies in Congress that these IoT standards would somehow harm national security if made applicable to defense IoT.

The bill text as released in March was identical for both bills, signaling agreement between the two chambers’ sponsors, but the process of marking up the bills has resulted in different versions, requiring negotiation on a final bill. The House Oversight and Reform Committee marked up and reported out H.R. 1668 after adopting an amendment in the nature of a substitute that narrowed the scope of the bill and is more directive than the bill initially introduced in March. The Senate Homeland Security and Governmental Affairs Committee marked up S. 734 a week later, making its own changes from the March bill. The March version of the legislation unified two similar bills from the 115th Congress: the “Internet of Things (IoT) Cybersecurity Improvement Act of 2017” (S. 1691) and the “Internet of Things (IoT) Federal Cybersecurity Improvement Act of 2018” (H.R. 7283).

Per the Committee Report for S. 734, the purpose of the bill

is to proactively mitigate the risks posed by inadequately-secured IoT devices through the establishment of minimum security standards for IoT devices purchased by the Federal Government. The bill codifies the ongoing work of the National Institute of Standards and Technology (NIST) to develop standards and guidelines, including minimum-security requirements, for the use of IoT devices by Federal agencies. The bill also directs the Office of Management and Budget (OMB), in consultation with the Department of Homeland Security (DHS), to issue the necessary policies and principles to implement the NIST standards and guidelines on IoT security and management. Additionally, the bill requires NIST, in consultation with cybersecurity researchers and industry experts, to publish guidelines for the reporting, coordinating, publishing, and receiving of information about Federal agencies’ security vulnerabilities and the coordinate resolutions of the reported vulnerabilities. OMB will provide the policies and principles and DHS will develop and issue the procedures necessary to implement NIST’s guidelines on coordinated vulnerability disclosure for Federal agencies. The bill includes a provision allowing Federal agency heads to waive the IoT use and management requirements issued by OMB for national security, functionality, alternative means, or economic reasons.

In general, this bill seeks to leverage the federal government’s ability to set standards through acquisition processes to ideally drive the development of more secure IoT across the U.S. The legislation would require the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) to work together to institute standards for IoT owned or controlled by most federal agencies. As mentioned, the latest version of this bill explicitly excludes “national security systems.” These standards would need to focus on secure development, identity management, patching, and configuration management and would be made part of the Federal Acquisition Regulation (FAR), making them part of the federal government’s approach to buying and utilizing IoT. Thereafter, civilian federal agencies and contractors would need to use and buy IoT that meets the new security standards. Moreover, NIST would need to create and implement a process for the reporting of vulnerabilities in information systems owned or operated by agencies, including IoT. However, the bill would seem to make contractors and subcontractors providing IoT responsible for sharing vulnerabilities upon discovery and then sending around fixes and patches when developed. And yet, this would seem to overlap with the recently announced Trump Administration vulnerabilities disclosure process (see here for more analysis), and language in the bill could be read as enshrining in statute the basis for the recently launched initiative, even though future Administrations would have flexibility to modify or revamp it as necessary.
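
Since the bill leaves the actual schema and workflow of vulnerability reporting to NIST’s future guidelines, the following is only a sketch of the kind of record a contractor-facing coordinated disclosure pipeline might keep; all field names and the workflow are my assumptions, not anything in the bill:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VulnerabilityReport:
    """Hypothetical record for a coordinated vulnerability disclosure pipeline;
    the bill leaves the real schema and process to NIST guidelines."""
    reporter: str       # researcher, contractor, or subcontractor submitting
    system: str         # affected agency information system or IoT device
    description: str    # nature of the vulnerability
    received: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    patch_available: bool = False

def next_step(report: VulnerabilityReport) -> str:
    # Contractors report on discovery; fixes and patches circulate once developed.
    return "distribute patch to agencies" if report.patch_available else "develop and test fix"

report = VulnerabilityReport("acme-iot-vendor", "sensor-gateway-v2", "hardcoded SSH key")
print(next_step(report))  # -> develop and test fix
```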

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by lea hope bonzer from Pixabay

Pending Legislation In U.S. Congress, Part IV

There is an even chance that Congress further narrows the Section 230 liability shield given criticism of how tech companies have wielded this language.

This year, Congress increased its focus on Section 230 of the Communications Act of 1934, which gives companies like Facebook, Twitter, Google, and others blanket immunity from litigation based on the content others post. Additionally, these platforms cannot be sued for “good faith” actions to take down or restrict material considered “to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many Republicans claim both that these platforms are biased against conservative content (a claim not borne out by the evidence we have) and that they are not doing enough to find and remove material that exploits children. Many Democrats argue the platforms are not doing enough to remove right-wing hate speech and agree, in some part, regarding material that exploits children.

Working in the background of any possible legislation to narrow Section 230 is an executive order issued by the President directing two agencies to investigate “online censorship” even though the Supreme Court of the United States has long held that a person or entity does not have First Amendment rights vis-à-vis private entities. Finally, the debate over encryption is also edging its way into Section 230 by a variety of means, as the Trump Administration, especially the United States Department of Justice (DOJ), has been pressuring tech companies to address end-to-end encryption on devices and apps. One means of pressure is threatening to remove Section 230 liability protection to garner compliance on encryption issues.

In late July, the Senate Judiciary Committee met, amended, and reported out the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S. 3398), a bill that would change 47 USC 230 by narrowing the liability shield and potentially making online platforms liable to criminal and civil actions for having child sexual abuse materials on their platforms. The bill as introduced in March was changed significantly when a manager’s amendment was released and then further changed at the markup. The Committee reported out the bill unanimously, sending it to the full Senate and perhaps signaling the breadth of support for the legislation. It is possible this could come before the full Senate this year. If passed, the EARN IT Act of 2020 would represent the second piece of legislation to change Section 230 in the last two years, following enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164). There is, at present, no House companion bill.

In advance of the markup, two of the sponsors, Judiciary Committee Chair Lindsey Graham (R-SC) and Senator Richard Blumenthal (D-CT) released a manager’s amendment to the EARN IT Act. The bill would still establish a National Commission on Online Child Sexual Exploitation Prevention (Commission) that would design and recommend voluntary “best practices” applicable to technology companies such as Google, Facebook, and many others to address “the online sexual exploitation of children.”

Moreover, instead of creating a process under which the DOJ, Department of Homeland Security (DHS), and the Federal Trade Commission (FTC) would accept or reject these standards, as in the original bill, the DOJ would merely have to publish them in the Federal Register. Likewise, the language establishing a fast-track process for Congress to codify these best practices has been stricken, as have the provisions requiring certain technology companies to certify compliance with the best practices.

The revised bill also lacks the safe harbor that would have shielded platforms following the Commission’s best practices from lawsuits based on having “child sexual abuse material” on their platforms. Instead of encouraging technology companies to use the best practices in exchange for continuing to enjoy liability protection, the manager’s amendment strikes liability protection under 47 USC 230 for these materials except where a platform acts as a Good Samaritan in removing them. Consequently, should a Facebook or Google fail to find and take down these materials in an expeditious fashion, it would face civil and criminal lawsuits under federal and state law.

However, the Committee adopted an amendment offered by Senator Patrick Leahy (D-VT) that would change 47 USC 230 by making clear that the use of end-to-end encryption does not make providers liable under child sexual exploitation and abuse material laws (a code sketch of why such a provider cannot decrypt user content follows the list below). Specifically, no liability would attach because the provider

  • utilizes full end-to-end encrypted messaging services, device encryption, or other encryption services;
  • does not possess the information necessary to decrypt a communication; or
  • fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.
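
The logic of the Leahy amendment is easy to see in code. In a genuine end-to-end design, keys live only on the users’ devices and the provider relays ciphertext, so it literally “does not possess the information necessary to decrypt a communication.” Below is a minimal sketch using the Python cryptography package (X25519 key agreement plus AES-GCM); the message and labels are invented for illustration:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(own_private, peer_public) -> bytes:
    # Each side derives the same symmetric key from its own private key and
    # the peer's public key; private keys never leave the devices.
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2ee-demo").derive(shared)

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Alice encrypts on her device; the provider sees only nonce + ciphertext.
key = derive_key(alice, bob.public_key())
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# Bob decrypts on his device with the key he derives independently.
plaintext = AESGCM(derive_key(bob, alice.public_key())).decrypt(nonce, ciphertext, None)
assert plaintext == b"meet at noon"
```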

Moreover, in advance of the first hearing to mark up the EARN IT Act of 2020, key Republican stakeholders released a bill that would require device manufacturers, app developers, and online platforms to decrypt data if a federal court issues a warrant based on probable cause. Critics of the EARN IT Act of 2020 claimed the bill would force big technology companies to choose between weakening encryption or losing their liability protection under Section 230. They likely see this most recent bill as another shot across the bow of technology companies, many of which continue to support and use end-to-end encryption even though the United States government and close allies are pressuring them on the issue. However, unlike the EARN IT Act of 2020, this latest bill does not have any Democratic cosponsors.

Graham and Senators Tom Cotton (R-AR) and Marsha Blackburn (R-TN) introduced the “Lawful Access to Encrypted Data Act” (S.4051) that would require the manufacturers of devices such as smartphones, app makers, and platforms to decrypt a user’s data if a federal court issues a warrant to search a device, app, or operating system.

The assistance covered entities must provide includes:

  • isolating the information authorized to be searched;
  • decrypting or decoding information on the electronic device or remotely stored electronic information that is authorized to be searched, or otherwise providing such information in an intelligible format, unless the independent actions of an unaffiliated entity make it technically impossible to do so; and
  • providing technical support as necessary to ensure effective execution of the warrant for the electronic devices particularly described by the warrant.

The DOJ would be able to issue “assistance capability directives” that would require the recipient to prepare or maintain the ability to aid a law enforcement agency that obtained a warrant and needs technical assistance to access data. Recipients of such orders can file a petition in federal court in Washington, DC to modify or set aside the order on only three grounds: it is illegal, it does not meet the requirements of the new federal regulatory structure, or “it is technically impossible for the person to make any change to the way the hardware, software, or other property of the person behaves in order to comply with the directive.” If a court rules against the recipient of such an order, it must comply; if it does not, a court may find it in contempt, allowing for a range of punishments until the contempt is cured. The bill also amends the “Foreign Intelligence Surveillance Act” (FISA) to require the same decryption and assistance in FISA activities, which mostly involve surveillance of people outside the United States. The bill would focus on device manufacturers that sell more than 1 million devices and platforms and apps with more than 1 million users, obviously reaching companies like Apple, Facebook, and Google. The bill also tasks the DOJ with conducting a prize competition “to incentivize and encourage research and innovation into solutions providing law enforcement access to encrypted data pursuant to legal process.”

In response to the EARN IT Act, a bicameral group of Democrats released legislation that would dramatically increase funding for the United States government to combat the online exploitation of children, offering an alternative to a bill critics claim would force technology companies to give way on encryption on pain of losing the Section 230 liability shield. The “Invest in Child Safety Act” (H.R. 6752/S. 3629) would provide $5 billion in funding outside the appropriations process to bolster current efforts to fight online exploitation and abuse. This bill was introduced roughly two months after the EARN IT Act of 2020, and in their press release, Senators Ron Wyden (D-OR), Kirsten Gillibrand (D-NY), Bob Casey (D-PA), and Sherrod Brown (D-OH) stated

The Invest in Child Safety Act would direct $5 billion in mandatory funding to investigate and target the pedophiles and abusers who create and share child sexual abuse material online. And it would create a new White House office to coordinate efforts across federal agencies, after DOJ refused to comply with a 2008 law requiring coordination and reporting of those efforts. It also directs substantial new funding for community-based efforts to prevent children from becoming victims in the first place.  

Representatives Anna Eshoo (D-CA), Kathy Castor (D-FL), Ann M. Kuster (D-NH), Eleanor Holmes Norton (D-DC), Alcee L. Hastings (D-FL), and Deb Haaland (D-NM) introduced the companion bill in the House.

The bill would establish in the Executive Office of the President an Office to Enforce and Protect Against Child Sexual Exploitation, headed by a Senate-confirmed Director, who would coordinate efforts across the U.S. government to fight child exploitation. Within six months of the appointment of the first Director, he or she would need to submit to Congress “an enforcement and protection strategy” and thereafter send an annual report as well. The DOJ and Federal Bureau of Investigation would receive additional funding to bolster and improve their efforts in this field.

In June, Senator Josh Hawley (R-MO) introduced the “Limiting Section 230 Immunity to Good Samaritans Act” (S. 3983), which is cosponsored by Senators Marco Rubio (R-FL), Kelly Loeffler (R-GA), Mike Braun (R-IN), and Tom Cotton (R-AR). The bill would amend the liability shield in 47 U.S.C. 230 to require large social media platforms like Facebook and Twitter to update their terms of service so that they must operate under “good faith” or face litigation with possible monetary damages for violating these new terms of service. Hawley’s bill would add a definition of “good faith” to the statute, which echoes one of the recommendations made by the DOJ. In relevant part, the new terms of service would bar so-called “edge providers” from “intentional[] selective enforcement of the terms of service of the interactive computer service, including the intentionally selective enforcement of policies of the provider relating to restricting access to or availability of material.” If such “selective enforcement” were to occur, then edge providers could be sued, but the plaintiffs would have to show the edge provider actually knew it was breaching the terms of service by selectively enforcing its platform rules.

Allegations of such “selective enforcement” stem from claims that conservative material posted on Twitter and Facebook is being targeted in ways that liberal material is not, including being taken down. This claim has been leveled by many Republican stakeholders, and now they are proposing to provide affected people with the right to sue; however, it is not clear whether these Republicans have changed their minds on allowing private rights of action against technology companies as a means of enforcing laws. To date, many Republicans have opposed private rights of action for data breaches or violations of privacy.

In early July, Senator Brian Schatz (D-HI) and Senate Majority Whip John Thune (R-SD) introduced the “Platform Accountability and Consumer Transparency (PACT) Act” (S.4066) that would reform Section 230. Schatz and Thune are offering their bill as an alternative to the EARN IT Act of 2020. Schatz and Thune serve as the Ranking Member and Chair of the Communications, Technology, Innovation and the Internet Subcommittee of the Senate Commerce, Science, and Transportation Committee and are thus key stakeholders on any legislation changing Section 230.

Under the PACT Act, so-called “interactive computer services” (the term of art used in Section 230) would need to draft and publish “acceptable use polic[ies]” that would inform users of what content may be posted, explain the process by which the online platform reviews content to make sure it complies with policy, and spell out the process people may use to report potentially policy-violating content, illegal content, and illegal activity. The PACT Act defines each of the three terms:

  • “illegal activity” means activity conducted by an information content provider that has been determined by a Federal or State court to violate Federal criminal or civil law.
  • “illegal content” means information provided by an information content provider that has been determined by a Federal or State court to violate—
    • Federal criminal or civil law; or
    • State defamation law.
  • “potentially policy-violating content” means content that may violate the acceptable use policy of the provider of an interactive computer service.

The first two definitions will pose problems in practice, for if one state court determines content is illegal but another does not, how must an online platform respond to comply with the reformed Section 230? The same would be true of illegal activity. Consequently, online platforms may be forced to monitor content state by state, hardly a practical system and one that would favor incumbents while raising a barrier to entry for newcomers. And are online platforms then to allow or take down content based on where the person posting it lives, depending on differing state or federal court rulings?

In any event, after receiving notice, online platforms would have 24 hours to remove illegal content or activity and two weeks to review notices of potentially policy-violating content and determine whether the content actually violates the platform’s policies. In the latter case, the platform would be required to notify the person who posted the content and allow them an appeal if the platform decides to take down the content because it violated its policies based on a user complaint. There would be a different standard for small business providers, requiring them to act on the three categories of information within a reasonable period of time after receiving notice. And telecommunications networks, cloud providers, and other entities would be exempted from this reform to Section 230 altogether. (A sketch of these tiered deadlines follows below.)
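
To make the tiered deadlines concrete, here is a minimal sketch in Python of how a platform might compute its response deadline under this scheme. The type and function names are hypothetical; the bill prescribes outcomes, not an implementation:

```python
from datetime import datetime, timedelta
from enum import Enum, auto
from typing import Optional

class NoticeType(Enum):
    ILLEGAL_CONTENT = auto()    # found illegal by a federal or state court
    ILLEGAL_ACTIVITY = auto()
    POLICY_VIOLATING = auto()   # "potentially policy-violating content"

def response_deadline(notice_type: NoticeType,
                      received: datetime,
                      small_business: bool = False) -> Optional[datetime]:
    """Deadline by which the platform must act on a notice.

    Models the bill's tiers: 24 hours for illegal content or activity,
    two weeks to review potentially policy-violating content. Small
    business providers get a "reasonable period of time," which the
    bill leaves undefined, so None is returned for them.
    """
    if small_business:
        return None  # "reasonable period of time" -- no fixed deadline
    if notice_type is NoticeType.POLICY_VIOLATING:
        return received + timedelta(weeks=2)
    return received + timedelta(hours=24)

# Example: a notice of illegal content must be acted on within 24 hours.
print(response_deadline(NoticeType.ILLEGAL_CONTENT, datetime(2020, 9, 16, 12, 0)))
```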

However, Section 230’s liability shield would be narrowed with respect to illegal content and activity. If a provider knows of the illegal content or activity but does not remove it within 24 hours, it would lose the shield from lawsuits. So, if Facebook fails to take down a posting urging someone to assassinate the President, a federal crime, within 24 hours of being notified it was posted, it could be sued. Facebook and similar companies would not have an affirmative duty to locate and remove illegal content and activity, however, and could continue to enjoy Section 230 liability protection if either type of content is on a platform so long as no notice is provided. And yet, Section 230 would be narrowed overall, as the provision placing federal criminal statutes outside the liability protection would be expanded to cover all federal criminal and civil laws and regulations. And state attorneys general would be able to enforce federal civil laws if the lawsuit could also be brought on the basis of a civil law in the attorney general’s state.

Interactive computer services must publish a quarterly transparency report including the total number of instances in which illegal content, illegal activity, or potentially policy-violating content was flagged and the number of times action was taken, among other data. Additionally, they would need to identify the number of times they demonetized or deprioritized content. These reports would be publicly available.
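
As a rough illustration of what these disclosures cover, the sketch below models one quarter’s report as a simple record. The field names are hypothetical; the bill specifies the required data points but no format:

```python
from dataclasses import dataclass

@dataclass
class QuarterlyTransparencyReport:
    # Notices received, broken out by the bill's three content categories
    illegal_content_flagged: int
    illegal_activity_flagged: int
    policy_violating_flagged: int
    # Dispositions of those notices
    items_removed: int
    appeals_received: int
    removals_reversed_on_appeal: int
    # Other moderation actions the bill requires platforms to disclose
    items_demonetized: int
    items_deprioritized: int
```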

The FTC would be explicitly empowered to act under the bill. Any violations of the process by which an online platform reviews notices of potentially policy-violating content, handles appeals, and issues transparency reports would be treated as violations of an FTC regulation defining an unfair or deceptive act or practice, allowing the agency to seek civil fines for first violations. But this authority is circumscribed by a provision barring the FTC from reviewing “any action or decision by a provider of an interactive computer service related to the application of the acceptable use policy of the provider.” This limitation would seem to allow an online platform to remove content on its own initiative if it violates the platform’s policies without the FTC being able to review such decisions, providing ample incentive for Facebook, Twitter, Reddit, and others to police their platforms so that they could avoid FTC action. The FTC’s jurisdiction would also be widened to include non-profits, subjecting them to the same scrutiny as for-profit entities regarding how they handle removing content in response to user complaints.

The National Institute of Standards and Technology (NIST) would need to develop “a voluntary framework, with input from relevant experts, that consists of non-binding standards, guidelines, and best practices to manage risk and shared challenges related to, for the purposes of this Act, good faith moderation practices by interactive computer service providers.”

This week, Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Senator Lindsey Graham (R-SC), and Senator Marsha Blackburn (R-TN) introduced the latest Section 230 bill, the “Online Freedom and Viewpoint Diversity Act” (S.4534), which would essentially remove liability protection for social media platforms and others that choose to correct, label, or remove material, mainly political material. A platform’s discretion would be severely limited as to when and under what circumstances it could take down content. This bill seems tailored to conservatives who believe Twitter, Facebook, and the like are biased against them and their viewpoints.

In May, after Twitter factchecked two of his Tweets making false claims about mail voting in California in response to the COVID-19 pandemic, President Donald Trump signed a long-rumored executive order (EO) seen by many as a means of cowing social media platforms: the “Executive Order on Preventing Online Censorship.” This EO directed federal agencies to act, and one has done so by asking the Federal Communications Commission (FCC) to start a rulemaking, which the FCC has initiated. However, there is at least one lawsuit pending to enjoin action on the EO that could conceivably block implementation.

In the EO, the President claimed

Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.  When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

Consequently, the EO directs that “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.”

With respect to specific actions, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) was directed to file a petition for rulemaking with the FCC to clarify the interplay between clauses of Section 230, notably whether the liability shield that protects companies like Twitter and Facebook for content posted on an online platform also extends to so-called “editorial decisions,” presumably actions like Twitter’s factchecking of Trump regarding mail balloting. The NTIA was also to ask the FCC to better define when an online platform’s takedowns of content fall outside “good faith,” such as those that are “deceptive, pretextual, or inconsistent with a provider’s terms of service; or taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard.” The NTIA was directed to also ask the FCC to promulgate any other regulations necessary to effectuate the EO.

The FTC must consider whether online platforms are violating Section 5 of the FTC Act barring unfair or deceptive practices, which “may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.” As yet, the FTC has not done so, and in remarks before Congress, FTC Chair Joseph Simons has opined that doing so is outside the scope of the agency’s mission. Consequently, there has been talk in Washington that the Trump Administration is looking for a new FTC Chair.

Following the directive in the EO, on 27 July, the NTIA filed a petition with the FCC, asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself.

The NTIA asserted “[t]he FCC should use its authorities to clarify ambiguities in section 230 so as to make its interpretation appropriate to the current internet marketplace and provide clearer guidance to courts, platforms, and users…[and] urges the FCC to promulgate rules addressing the following points:

  1. Clarify the relationship between subsections (c)(1) and (c)(2), lest they be read and applied in a manner that renders (c)(2) superfluous as some courts appear to be doing.
  2. Specify that Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service.
  3. Provide clearer guidance to courts, platforms, and users, on what content falls within (c)(2) immunity, particularly section 230(c)(2)’s “otherwise objectionable” language and its requirement that all removals be done in “good faith.”
  4. Specify that “responsible, in whole or in part, for the creation or development of information” in the definition of “information content provider,” 47 U.S.C. § 230(f)(3), includes editorial decisions that modify or alter content, including but not limited to substantively contributing to, commenting upon, editorializing about, or presenting with a discernible viewpoint content provided by another information content provider.
  5. Mandate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers.

NTIA argued that

  • Section 230(c)(1) has a specific focus: it prohibits “treating” “interactive computer services,” i.e., internet platforms, such as Twitter or Facebook, as “publishers.” But, this provision only concerns “information” provided by third parties, i.e., “another internet content provider,” and does not cover a platform’s own content or editorial decisions.
  • Section (c)(2) also has a specific focus: it eliminates liability for interactive computer services that act in good faith “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

In early August, the FCC asked for comments on the NTIA petition, and comments were due by 2 September. Over 2500 comments have been filed, and a cursory search turned up numerous form letter comments drafted by a conservative organization that were then submitted by members and followers.

Finally, the House’s “FY 2021 Financial Services and General Government Appropriations Act” (H.R. 7668) has a provision that would bar either the FTC or FCC from taking certain actions related to Executive Order 13925, “Preventing Online Censorship.” It is very unlikely Senate Republicans, some of whom have publicly supported this Executive Order, will allow this language into the final bill funding the agencies.

There has been other executive branch action on Section 230. In mid-June, the DOJ released “a set of reform proposals to update the outdated immunity for online platforms under Section 230” according to a department press release. While these proposals came two weeks after President Donald Trump’s “Executive Order on Preventing Online Censorship,” signed after Twitter fact-checked two untrue tweets (see here for more detail and analysis), the DOJ launched its review of 47 U.S.C. 230 in February 2020.

The DOJ explained “[t]he Section 230 reforms that the Department of Justice identified generally fall into four categories:

1) Incentivizing Online Platforms to Address Illicit Content. The first category of potential reforms is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation.

  1. Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230’s immunity provision—“Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material”—makes clear that Section 230 immunity is meant to incentivize and protect responsible online platforms. It therefore makes little sense to immunize from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.
  2. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.
  3. Case-Specific Carve-Outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.

2) Clarifying Federal Government Civil Enforcement Capabilities. A second category of reform would increase the ability of the government to protect citizens from illicit online conduct and activity by making clear that the immunity provided by Section 230 does not apply to civil enforcement by the federal government, which is an important complement to criminal prosecution.

3) Promoting Competition. A third reform proposal is to clarify that federal antitrust claims are not covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

4) Promoting Open Discourse and Greater Transparency. A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.

  1. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230 (c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform’s ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”
  2. Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of “good faith” should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
  3. Continue to Overrule Stratton Oakmont to Avoid the Moderator’s Dilemma. Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230 (c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.

While the DOJ did not release legislative language to effect these changes, it is possible to suss out the DOJ’s purposes in making these recommendations. The Department clearly believes that the Section 230 liability shield deprives companies like Facebook of a number of legal and financial incentives to locate, take down, or block material such as child pornography. The New York Times published articles last year (see here and here) about the shortcomings critics have found in a number of online platforms’ efforts to find and remove this material. If the companies faced civil liability for not taking down the images, the rationale seems to go, then they would devote much greater resources to doing so. Likewise, with respect to terrorist activities and cyber-bullying, the DOJ seems to think this policy change would have the same effect.

Some of the DOJ’s other recommendations seem aimed at solving an issue often alleged by Republicans and conservatives: that their speech is more heavily policed and censored than others on the political spectrum. The recommendations call for removing the word “objectionable” from the types of material a provider may remove or restrict in good faith and adding “unlawful” and “promotes terrorism.” The recommendations would also call for a statutory definition of “good faith,” which dovetails with an action in the EO for an Administration agency to petition the Federal Communications Commission (FCC) to conduct a rulemaking to better define this term.

Some consider the Department’s focus on Section 230 liability a proxy for its interest in having technology companies drop default end-to-end encryption and securing their assistance in accessing communications on such platforms. If this is true, the calculation seems to be that technology companies would prefer being shielded from financial liability over ensuring users’ communications and transactions are secured via encryption.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (10 September)

Coming Events

  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.”
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • Top Senate Democrats asked the Secretary of the Treasury to impose sanctions on officials and others in the Russian Federation for interfering in the 2020 United States election. In their letter, they urged Secretary Steven Mnuchin “to draw upon the conclusions of the Intelligence Community to identify and target for sanctions all those determined to be responsible for ongoing election interference, including any actors within the government of the Russian Federation, any Russian actors determined to be directly responsible, and those acting on their behalf or providing material or financial support for their efforts.” Given that Mnuchin is unlikely to displease President Donald Trump by agreeing that Russians are again interfering in a presidential election, it is probable that Senate Democrats are seeking to further their line of attack that Republicans are unwilling to defend the U.S. and its elections from Russia. They called on Mnuchin to use the authorities granted by Congress in the “Countering America’s Adversaries Through Sanctions Act” (P.L. 115-44) and Executive Order 13848 “Imposing Certain Sanctions in the Event of Foreign Interference in a United States Election.”
  • Epic Games has returned to court in an attempt to force Apple to put its popular multiplayer game, Fortnite, back into the App Store. At present, those on iOS devices cannot download and play the newest version of the game released a few weeks ago. Even though Epic Games lost its request for a temporary restraining order that would have ordered Apple to restore the game, it has filed for a preliminary injunction:
    • (1) restraining Defendant Apple Inc. (“Apple”) from removing, de-listing, refusing to list or otherwise making unavailable the app Fortnite or any other app on Epic’s Team ID ’84 account in Apple’s Developer Program, including any update of such an app, from the App Store on the basis that Fortnite offers in-app payment processing through means other than Apple’s In-App Purchase (“IAP”) or on any pretextual basis;
    • (2) restraining Apple from taking any adverse action against Epic, including but not limited to restricting, suspending, or terminating any other Apple Developer Program account of Epic or its affiliates, on the basis that Epic enabled in-app payment processing in Fortnite through means other than IAP or on the basis of the steps Epic took to do so;
    • (3) restraining Apple from removing, disabling, or modifying Fortnite or any code, script, feature, setting, certification, version or update thereof on any iOS user’s device; and
    • (4) requiring Apple to restore Epic’s Team ID ’84 account in Apple’s Developer Program.
    •  Epic Games asserts:
      • This motion is made on the grounds that: (1) Epic is likely to succeed on the merits of its claims that Apple’s conduct violates the Sherman Act; (2) absent a preliminary injunction, Epic is likely to suffer irreparable harm; (3) the balance of harms tips sharply in Epic’s favor; and (4) the public interest supports an injunction.
    • Considering that the judge denied Epic Games’ motion for a temporary restraining order on the grounds that its harm was self-inflicted (i.e., Epic Games escalated by adding its own payment option to Fortnite to foil Apple’s 30% take on in-game sales) and that no public interest was at stake, one wonders if the company will prevail on this motion.
  • Apple filed a countersuit against Epic Games, arguing the latter breached its contract with the former and now must pay damages. In contrast, Epic Games is not suing for any monetary damages, surely a tactical decision to help its case in court and among interested observers.
    • Apple sought to portray Epic Games’ lawsuit this way:
      • Epic’s lawsuit is nothing more than a basic disagreement over money. Although Epic portrays itself as a modern corporate Robin Hood, in reality it is a multi-billion dollar enterprise that simply wants to pay nothing for the tremendous value it derives from the App Store. Epic’s demands for special treatment and cries of “retaliation” cannot be reconciled with its flagrant breach of contract and its own business practices, as it rakes in billions by taking commissions on game developers’ sales and charging consumers up to $99.99 for bundles of “V-Bucks.”
      • Epic decided that it would like to reap the benefits of the App Store without paying anything for them. Armed with the apparent view that Epic is too successful to play by the same rules as everyone else—and notwithstanding a public proclamation that Epic “w[ould] not accept special revenue sharing or payment terms just for ourselves”—Epic CEO Tim Sweeney emailed Apple executives on June 30, 2020, requesting a “side letter” that would exempt Epic from its existing contractual obligations, including the App Store Review Guidelines (the “Guidelines”) that apply equally to all Apple developers. Among other things, Mr. Sweeney demanded a complete end-run around “Apple’s fees”—specifically, Epic wished to continue taking full advantage of the App Store while allowing consumers to pay Epic instead, leaving Apple to receive no payment whatsoever for the many services it provides developers and consumers.
    • Apple contended “[t]his Court should hold Epic to its contractual promises, award Apple compensatory and punitive damages, and enjoin Epic from engaging in further unfair business practices.”
  • The General Services Administration (GSA) released a draft Data Ethics Framework as part of implementing the Trump Administration’s Federal Data Strategy.
    • GSA noted
      • The Federal Data Strategy, delivered in December 2019, recognized the importance of ethics in its founding Principles. When the Federal Data Strategy team created the 2020 Action Plan, they specifically tasked the General Services Administration (GSA) with developing a Data Ethics Framework (Framework) in Action 14 to help agency employees, managers, and leaders make ethical decisions as they acquire, manage, and use data.
      • The resulting Framework is intended to be a “living” resource and to be regularly updated by the CDO Council and ICSP. The Framework incorporates the input and terminology from stakeholders representing many domains, and who use different types of data in different ways. The developers of the Framework recognize that some terms may be used differently, depending on the context, type of data being used, and stage in the data lifecycle.
      • The Framework applies to all data types and data uses. The Framework consists of four parts:
        • About the Data Ethics Framework outlines the intended purpose and audience of this document
        • Data Ethics Defined explores the meaning of the term “data ethics,” as background to the Tenets provided in the following section
        • Data Ethics Tenets provides seven Tenets, or high-level principles, for using data ethically within the Federal Government
        • Data Ethics Tenets in Action describes the benefits of data ethics and contains use cases demonstrating how the Tenets can guide data activities within federal agencies and federally sponsored programs
      • The Administration claimed the 2020 Action Plan “establishes a solid foundation that will support implementation of the strategy over the next decade…[and] identifies initial actions for agencies that are essential for establishing processes, building capacity, and aligning existing efforts to better leverage data as a strategic asset.” The use of federal data holds a key place in the President’s Management Agenda (PMA) and, according to the Administration, will be a key driver in transforming how the federal government operates, particularly in relation to technology. The 2020 Action Plan lays out the steps agencies will be expected to take to realize the Administration’s 10-year Federal Data Strategy. As always, results will be informed by follow through and prioritization by the Office of Management and Budget (OMB) and buy-in from agency leadership.
      • Notably, the Administration tied the 2020 Action Plan to a number of other ongoing initiatives that rely heavily on data. The Administration said the plan “incorporates requirements of the Foundations for Evidence-Based Policymaking Act of 2018, the Geospatial Data Act of 2018, and Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence.”
  • The Office of the Australian Information Commissioner (OAIC) published “its Corporate Plan for 2020-21, which sets out its strategic priorities and key activities for the next four years” according to its press release. The OAIC stated “[t]he plan identifies four strategic priorities that will help the OAIC achieve its vision to increase public trust and confidence in the protection of personal information and access to government-held information:
    • Advance online privacy protections for Australians
    • Influence and uphold privacy and information access rights frameworks
    • Encourage and support proactive release of government-held information, and
    • Contemporary approach to regulation.
    • The agency stated:
      • Over the coming year, the OAIC will continue to promote strong privacy protections for the use of personal information to prevent and manage the spread of COVID-19, including oversight of data handling within the COVIDSafe app system. 
      • Strengthening privacy protections in the online environment remains a key focus for the organisation, while privacy law reform will be a priority in 2020-21, with the Australian Government’s review of the Privacy Act an opportunity to ensure the regulatory framework can respond to new challenges in the digital environment.
      • Commissioner [Angelene] Falk said the OAIC will also enforce privacy safeguards under the Consumer Data Right and will continue its work to improve transparency and prevent harm to consumers through its oversight of the Notifiable Data Breaches scheme.
  • Ontario’s Ministry of Government and Consumer Services “launched consultations to improve the province’s privacy protection laws” and stakeholders “will have the opportunity to contribute to strengthening transparency and accountability concerning the collection, use and safeguarding of personal information online.” Ontario “is seeking advice on ways to:
    • Increase transparency for individuals, providing Ontarians with more detail about how their information is being used by businesses and organizations.
    • Enhance consent provisions allowing individuals to revoke consent at any time, and adopting an “opt-in” model for secondary uses of their information.
    • Introduce a right for individuals to request information related to them be deleted, subject to limitations (this is otherwise known as “right to erasure” or “the right to be forgotten”).
    • Introduce a right for individuals to obtain their data in a standard and portable digital format, giving them greater freedom to change service providers without losing their data (this is known as “data portability”).
    • Increase enforcement powers for the Information and Privacy Commissioner to ensure businesses comply with the law, including giving the commissioner the ability to impose penalties.
    • Introduce requirements for data that has been de-identified and derived from personal information to provide clarity of applicability of privacy protections.
    • Expand the scope and application of the law to include non-commercial organizations, including not-for-profits, charities, trade unions and political parties.
    • Create a legislative framework to enable the establishment of data trusts for privacy protective data sharing.
  • The United States (U.S.) Department of Homeland Security (DHS) Office of the Inspector General (OIG) issued “Progress and Challenges in Modernizing DHS’ Information Technology (IT) Systems and Infrastructure” and found fault with these three systems:
    • DHS-wide Human Resources IT (HRIT)
    • DHS Legacy Major IT Financial System that “[s]erves as Coast Guard and Transportation Security Agency’s (TSA) financial system of record.”
    • Federal Emergency Management Agency (FEMA) Grants Management Mission Domain and Operational Environment
    • The OIG stated
      • The DHS 2019–2023 IT strategic plan included two distinct department-wide IT modernization initiatives: to adopt cloud-based computing and to consolidate data centers. However, not all components have complied with or fully embraced these efforts due to a lack of standard guidance and funding. Without consistent implementation of these efforts, DHS components remain hindered in their ability to provide personnel with more enhanced, up-to-date technology.
      • In the meantime, DHS continues to rely on deficient and outdated IT systems to perform mission-critical operations. We identified three legacy IT systems with significant operational challenges that negatively affected critical DHS functions, such as human resources and financial management, as well as disaster recovery mission operations. DHS has not made sufficient progress in replacing or augmenting these IT systems due to ineffective planning and inexperience in executing complex IT modernization efforts. Additionally, the DHS CIO has not performed mandated oversight of legacy IT to mitigate and reduce risks associated with outdated systems. Until DHS addresses these issues, it will continue to face significant challenges to accomplish mission operations efficiently and effectively.
    • The OIG recommended:
      • We recommend the DHS OCIO develop department-wide guidance for implementing cloud technology and migrating legacy IT systems to the cloud.
      • We recommend the DHS OCIO coordinate with components to develop and finalize a data center migration approach to accomplish strategic goals for reducing the footprint of DHS IT infrastructure.
      • We recommend the DHS OCIO establish a process to assign risk ratings for major legacy IT investments, as required by the Federal Information Technology Acquisition Reform Act.
  • The University of Toronto’s Citizen Lab and the International Human Rights Program at the University of Toronto’s Faculty of Law published a report “To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada” that “focuses on the human rights and constitutional law implications of the use of algorithmic policing technologies by law enforcement authorities.” The authors found:
    • The research conducted for this report found that multiple law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods. These programs include both developing predictive policing technologies and using algorithmic surveillance tools. Additionally, some law enforcement agencies have acquired tools with the capability of algorithmic policing technology, but they are not currently using that capability because, to date, they have not decided to do so.
    • The authors “analyze the potential impacts of algorithmic policing technologies on the following rights: the right to privacy; the right to freedoms of expression, peaceful assembly, and association; the right to equality and freedom from discrimination; the right to liberty and to be free from arbitrary detention; the right to due process; and the right to a remedy.”
  • The United States (U.S.) Department of Homeland Security (DHS) issued “the Electromagnetic Pulse (EMP) Program Status Report as part of an update on efforts underway in support of Executive Order (E.O.) 13865 on Coordinating National Resilience to Electromagnetic Pulses…[that] establishes resilience and security standards for U.S. critical infrastructure as a national priority.”
    • DHS stated
      • E.O.13865 states, “An electromagnetic pulse (EMP) has the potential to disrupt, degrade, and damage technology and critical infrastructure systems. Human-made or naturally occurring EMPs can affect large geographic areas, disrupting elements critical to the Nation’s security and economic prosperity, and could adversely affect global commerce and stability. The federal government must foster sustainable, efficient, and cost-effective approaches to improving the Nation’s resilience to the effects of EMPs.”
      • In accordance with E.O.13865, the Department has identified initial critical infrastructure and associated functions that are at greatest risk from an EMP and is focusing efforts on the development and implementation of evidence-based and independently-tested EMP protection and mitigation technologies and resilience best practices. Initial efforts within the Department, working across the federal interagency, have focused on risk management to both the Energy and Communications Sectors.
  • Two United States Magistrate Judges denied three requests for a geofence warrant to serve on Google to obtain cell phone data from an area of Chicago for three forty-five-minute periods on three different days. The courts took the unusual step of unsealing the opinions, for the proceedings are not adversarial because the person or people suspected of being involved with the alleged crime are presumably unaware and therefore cannot contest the warrant application. There is no indication in the decisions that Google took an adversarial position. However, Google did state in a filing that “[b]etween 2017 and 2018, Google saw a 1,500% increase in geofence requests…[and] [b]etween 2018 and 2019, that figure shot up another 500%.”
    • Moreover, one wonders whether prosecutors also sought similar warrants from other companies, such as telecommunications providers. Nonetheless, the judges ruled the geofence warrant requests violated the Fourth Amendment to the U.S. Constitution in a number of ways and suggested that narrower, more particular requests might have been legal.
    • In the first denial, the magistrate judge explained:
      • As to the first geofence request, the government has probable cause to believe that the suspect received the stolen pharmaceuticals from a commercial enterprise located within the designated geofence area during the designated forty-five minute interval in the early afternoon hours on the day of the first geofence request. The geofence, which has a 100-meter radius, is in a densely populated city, and the area contains restaurants, various commercial establishments, and at least one large residential complex, complete with a swimming pool, workout facilities, and other amenities associated with upscale urban living.
      • The second and third geofence requests focus on the same commercial enterprise where the government has probable cause to believe that the suspect shipped some of the stolen pharmaceuticals to a buyer, who purchased the pharmaceuticals from the suspect at the government’s direction. Again, the government’s requested geofence is a 100-meter radius area extending from the commercial establishment where the suspect shipped the pharmaceuticals and covers two separate dates for forty-five minute intervals in the early afternoon hours. This geofence includes medical offices and other single and multi-floor commercial establishments that are likely to have multiple patrons during the early afternoon hours.
      • The warrant application contemplates that the information will be obtained in three stages: (1) Google will be required to disclose to the government an anonymized list of devices that specifies information including the corresponding unique device ID, timestamp, coordinates, and data source, if available, of the devices that reported their location within the geofence during the forty-five minute periods; (2) the government will then review the list to prioritize the devices about which it wishes to obtain associated information; and (3) Google will then be required to disclose to the government the information identifying the Google account(s) for those devices about which the government further inquires. The warrant application includes no criteria or limitations as to which cellular telephones government agents can seek additional information.
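
For readers unfamiliar with what a stage-one disclosure entails mechanically, here is a minimal sketch in Python of the kind of filter it implies: selecting anonymized device IDs whose reported locations fall within a 100-meter radius during a 45-minute window. The record format and field names are hypothetical, not Google’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationRecord:
    device_id: str      # anonymized identifier, per stage one of the warrant
    timestamp: datetime
    lat: float
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def stage_one(records, center_lat, center_lon, start, radius_m=100.0,
              window=timedelta(minutes=45)):
    """Return the anonymized IDs of devices seen inside the geofence
    during the window -- the list stage one would disclose."""
    return sorted({
        r.device_id for r in records
        if start <= r.timestamp <= start + window
        and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    })
```

The breadth the judges objected to is visible even in this toy version: the filter sweeps in every device in the area, not just a suspect’s.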

Further Reading

  • “A Saudi Prince’s Attempt to Silence Critics on Twitter” By Bradley Hope and Justin Scheck – WIRED. Considering the United States Department of Justice indictments against three Saudi nationals in November 2019 and resulting news stories (“Why Do We Tolerate Saudi Money in Tech?” – The New York Times and “Former Twitter employees charged with spying for Saudi Arabia by digging into the accounts of kingdom critics” – The Washington Post), one would think there is little news left in this book excerpt. But we learn that Twitter’s anti-establishment stance led the company’s lawyers to suspend the Saudi Twitter employee who was the target of a U.S. investigation, which allowed him to flee the U.S.; government lawyers were livid. The bigger issue is that foreign operatives infiltrated social media platforms and then reaped information about selected people, especially dissidents.
  • “When Algorithms Give Real Students Imaginary Grades” By Meredith Broussard – The New York Times. The International Baccalaureate (IB) program used an algorithm to hand out grades this past spring when in-person exams were cancelled. It did not go well, as you might imagine. The same was true in the United Kingdom for its A-level exams, causing a furor there. The case is made for never using algorithms in education or related fields.
  • “Wheely ride-hailing app writes to UK privacy watchdog over Moscow data demands” By Simon Goodley – The Guardian. A British ride-sharing company wrote the United Kingdom’s data protection authority about data requests made by the Moscow Department of Transportation (MDOT) on individual riders. Wheely made the case to the Information Commissioner’s Office (ICO) that it could not hand over the data under the General Data Protection Regulation (GDPR), unlike some of the app’s rivals who apparently complied with the demand. It is not clear whether the company’s GDPR obligations would apply in another jurisdiction, and it is possible Wheely is trying to smear the other companies in the U.K.
  • “Deepfake porn is now mainstream. And major sites are cashing in” By Matt Burgess – Wired. Through the use of artificial intelligence technology, people are making fake pornography in which actresses’ faces are affixed to women’s bodies that are engaged in sexual acts. These deepfake porn videos are soaring in popularity, and there are often not good options for taking them down or taking legal action. This is another area in which technology has outpaced policy and law.
  • “Most cyber-security reports only focus on the cool threats” By Catalin Cimpanu – ZDNet. Turns out that commercial threat reports are issued with an eye towards generating business, and considering that governments and huge contractors have the deepest pockets, the issues of concern to them are covered while other less lucrative areas, like threats to civil society, are largely ignored. These reports also influence policymakers and give them a distorted picture of cyber threats.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Pending Legislation In U.S. Congress, Part III

Even though it is agreed Congress should revamp election security laws, there is no agreement on how.

Election security is a subject much on the minds of lawmakers and policymakers in Washington in this election year; however, how Congress should respond is a matter of much disagreement. House Democrats have passed a number of bills to address a range of problems in the United States (U.S.) electoral system, but Republicans have generally rejected their proposed policy solutions, in no small part because of White House opposition. Moreover, President Donald Trump has steadfastly opposed any legislation intended to address future Russian interference in elections. Consequently, the prospects of any election security legislation being enacted are virtually nil, even if lawmakers have steadily increased the amount of money the federal government is providing states to shore up security through the Election Assistance Commission’s grant program. These bills are nonetheless worthy of notice, for if Democrats capture the White House and Senate, it is very likely they will make a run at enacting election security legislation along the lines of some of these bills.

In July, a deal was struck to add the “Intelligence Authorization Act for Fiscal Year 2021” (S.3905) to the “National Defense Authorization Act for Fiscal Year 2021” (S.4049) but without an election security bill included in the package as reported out of the Senate Intelligence Committee: the “Foreign Influence Reporting in Elections Act” (FIRE Act) (S.2242). The sponsor of the FIRE Act, Senate Intelligence Committee Ranking Member Mark Warner (D-VA), went to the Senate floor to protest the striking of his bill and to announce his plans to offer it as an amendment and force a vote:

The committee voted 14 to 1 to pass an intel authorization bill that included the FIRE Act, the act that I just described, so that if a foreign government interferes or offers you assistance or offers you dirt, you don’t say thanks; you call the FBI. So you can imagine my surprise and frustration when I learned of a backroom deal to strip the FIRE Act out of the Intelligence Committee’s legislation because of a supposed turf war with another committee. I am back again today because the security of our elections cannot wait. Let’s not hide behind process or jurisdictional boundaries. The stakes are far too high to continue the partisan blockade of election security legislation that we have seen over the last 3 years. If, behind closed doors, my Republican colleagues want to strip this legislation out of the NDAA, then I am going to offer it up as an amendment to force an up-or-down vote and put every Member of this body on the record: Are you for election security or are you for allowing foreign entities to interfere and offer assistance with no requirement to report?

Prior to its inclusion in the FY 2021 Intelligence Authorization Act, Warner had asked unanimous consent to take up the FIRE Act multiple times but was met with Republican objections each time. And there are other election security bills Republicans have continued to block, including:

  • The “Duty To Report Act” (S.1247) “to require reporting to the Federal Election Commission and the Federal Bureau of Investigation of offers by foreign nationals to make prohibited contributions, donations, expenditures, or disbursements.”
  • The “Senate Cybersecurity Protection Act” (S.890) “to protect the personal technology devices and accounts of Senators and covered employees from cyber attacks and hostile information collection activities, and for other purposes.”
  • The “Securing America’s Federal Elections Act” (SAFE Act) (H.R.2722) (see below)
  • The “Secure Elections Act of 2019” (S.1540) (see below)

The “Secure Elections Act of 2019” (S.1540) was cosponsored by 40 Democrats but has not been acted upon by the Senate. In her press release, primary sponsor, Senator Amy Klobuchar (D-MN), claimed the bill would do the following:

  • Require states to use paper ballots.
  • Establish cybersecurity standards for voting systems vendors.
  • Fund grants for states to improve and maintain the security of their election systems, to provide cybersecurity training to election officials, and to implement post-election risk limiting audits.
  • Require the Director of National Intelligence to assess threats to election systems 180 days before an election and require the Department of Homeland Security and the Election Assistance Commission to issue recommendations to address threats.
  • Require the testing of voting systems nine months before an election.
  • Require the President to produce a national strategy for protecting democratic institutions.
  • Create a National Commission to Protect United States Democratic Institutions.

Yet, last summer, the Senate took up and passed two election-related bills addressing facets of the cybersecurity challenges. In July 2019, the Senate passed the “Defending the Integrity of Voting Systems Act” (S. 1321) by unanimous consent, which would “make it a federal crime to hack any voting systems used in a federal election” according to the Senate Judiciary Committee’s website. In June 2019, the Senate also passed the “Defending Elections against Trolls from Enemy Regimes (DETER) Act” (S. 1328), which would make “improper interference in U.S. elections” a violation of U.S. immigration law, with violators barred from obtaining a visa to enter the United States. The House has yet to act on these bills, and Democratic Leadership is likely not to let them come to the floor in order to maximize its leverage in getting its bills through the Senate.

In February 2019, the House passed the “For the People Act” (H.R. 1) by a 234-193 vote, a House Democratic priority bill that would seek to bolster the cybersecurity of election systems across the country, among other policy goals. If this bill were enacted as written, there would be significant changes to current regulation. However, it is unlikely the Senate will take up this bill as written, and any measure in the Senate regarding election security would be more circumscribed. Indeed, Senate Republicans blocked efforts to take up this bill.

Regarding the cybersecurity of election systems, the bill includes a process by which cybersecurity standards would be established for election infrastructure vendors and would also authorize grants for states and localities to upgrade and secure their election systems. For example, “qualified election infrastructure vendors” must agree “to ensure that the election infrastructure will be developed and maintained in a manner that is consistent with the cybersecurity best practices issued by the Technical Guidelines Development Committee” and to promptly report cybersecurity incidents to the Department of Homeland Security (DHS) and the Election Assistance Commission (EAC).

The bill would authorize $1.7 billion in funding for the EAC to make grants to states for a number of purposes, including “to carry out voting system security improvements” to undertake the following:

(1) The acquisition of goods and services from qualified election infrastructure vendors by purchase, lease, or such other arrangements as may be appropriate.

(2) Cyber and risk mitigation training.

(3) A security risk and vulnerability assessment of the State’s election infrastructure which is carried out by a provider of cybersecurity services under a contract entered into between the chief State election official and the provider.

(4) The maintenance of election infrastructure, including addressing risks and vulnerabilities which are identified under either of the security risk and vulnerability assessments described in paragraph (3), except that none of the funds provided under this part may be used to renovate or replace a building or facility which is used primarily for purposes other than the administration of elections for public office.

(5) Providing increased technical support for any information technology infrastructure that the chief State election official deems to be part of the State’s election infrastructure or designates as critical to the operation of the State’s election infrastructure.

(6) Enhancing the cybersecurity and operations of the information technology infrastructure described in paragraph (4).

(7) Enhancing the cybersecurity of voter registration systems.

The package requires “qualified election infrastructure vendors” (i.e. “any person who provides, supports, or maintains, or who seeks to provide, support, or maintain, election infrastructure on behalf of a State, unit of local government, or election agency”) to meet these requirements:

  • [T]o ensure that the election infrastructure will be developed and maintained in a manner that is consistent with the cybersecurity best practices issued by the Technical Guidelines Development Committee.
  • [T]o maintain its information technology infrastructure in a manner that is consistent with the cybersecurity best practices issued by the Technical Guidelines Development Committee.
  • Reporting cybersecurity incidents “involving any of the goods and services provided by the vendor” to the EAC and DHS within three days of discovery (a sketch of this reporting window appears after the list)
  • “[T]o permit independent security testing by the [EAC]…and by the Secretary of the goods and services provided by the vendor pursuant to a grant”
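
Neither the bill text quoted above nor these requirements prescribe any particular tooling. Purely as an illustration of the three-day reporting window in the third requirement, here is a minimal sketch; the record fields and names are hypothetical assumptions, not drawn from H.R. 1.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VendorIncidentReport:
    """Hypothetical record for a qualified election infrastructure vendor.

    H.R. 1 would require reporting cybersecurity incidents to the EAC and
    DHS within three days of discovery; the fields and deadline logic here
    are illustrative, not the bill's specification.
    """
    vendor: str
    description: str
    discovered: date

    def reporting_deadline(self) -> date:
        # Three calendar days after discovery, per the requirement above.
        return self.discovered + timedelta(days=3)

report = VendorIncidentReport(
    vendor="Example Voting Systems, Inc.",
    description="Unauthorized access attempt on the ballot-definition server",
    discovered=date(2019, 6, 1),
)
print(report.reporting_deadline())  # 2019-06-04
```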

H.R. 1 would also amend the Department of Homeland Security’s organic statute to make “election infrastructure” a critical infrastructure sector. In January 2017, then-Secretary of Homeland Security Jeh Johnson expanded the Government Facilities Sector to include an Election Infrastructure Subsector. If H.R. 1 were enacted, however, election infrastructure would become the 17th critical infrastructure sector, and a future Secretary could not rescind that designation the way one could rescind Johnson’s administrative addition of state and local election systems to the Government Facilities Sector.

The Committee Report detailed the addition of the “Honest Ads Act” to H.R. 1:

  • The Honest Ads Act updates the rules that apply to online political advertising by incorporating disclosure and disclaimer concepts that apply to traditional media, while providing regulatory flexibility for new forms of digital advertising. This will help ensure that voters make informed decisions at the ballot box and to know who is spending money on digital political advertisements that they view.
  • It also expands the definition of public communication to include paid internet or paid digital communications, and amends the definition of electioneering communication to include certain digital or internet communications placed or promoted for a fee online.
  • Finally, the bill requires that large online platforms (defined to include those with 50,000,000 or more unique monthly United States visitors) maintain public databases of political ad purchases. This is a concept that already applies to broadcasters, who must maintain public files of political advertisements. The online databases maintained by the platforms will provide the public with information about the purchasers of online political ads, including how the audience is targeted. Political advertisements are defined to include those that communicate messages relating to political matters of national importance, including about candidates, elections, and national legislative issues of public importance.
  • Finally, the Honest Ads Act requires all broadcasters, cable or satellite television and online platforms to take reasonable efforts to ensure that political advertising is not purchased by foreign nationals, directly or indirectly.
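
The Act leaves the precise contents and format of these public ad files to implementation. As a hedged sketch of the kind of record such a database might hold, with every field name an illustrative assumption rather than statutory language:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PoliticalAdRecord:
    """Hypothetical entry in a platform's public political-ad file.

    The Honest Ads Act describes what the public must be able to learn
    (the purchaser, how the audience is targeted); the field names and
    types here are illustrative assumptions, not statutory text.
    """
    ad_id: str
    purchaser_name: str            # who paid for the advertisement
    matter_addressed: str          # candidate, election, or national issue
    amount_spent_usd: float
    first_shown: str               # ISO date the ad began running
    last_shown: str                # ISO date the ad stopped running
    targeting_criteria: List[str] = field(default_factory=list)  # how the audience was targeted

record = PoliticalAdRecord(
    ad_id="2019-000123",
    purchaser_name="Example Advocacy Group",
    matter_addressed="National legislative issue: election security funding",
    amount_spent_usd=2500.00,
    first_shown="2019-06-01",
    last_shown="2019-06-08",
    targeting_criteria=["ages 18-34", "region: Midwest", "interest: politics"],
)
print(record.purchaser_name, record.targeting_criteria)
```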

Thereafter, House Democrats brought pieces of H.R. 1 to the House floor for separate votes in an attempt to push Senate Republicans to take up the bill or, failing that, to put them on the record as opposing reforms House Democrats deem necessary, including bolstering the cybersecurity of voting systems. In late June 2019, the House took up the “Securing America’s Federal Elections (SAFE) Act of 2019” (H.R. 2722). In the Committee Report, the House Administration Committee explained the bill:

  • H.R. 2722 provides critical resources to states and localities to bolster election infrastructure, including necessary funds to replace aging voting equipment with voter-verified paper ballot voting systems and implement additional cybersecurity protocols. The bill also helps states and localities plan for future elections by providing ongoing maintenance funding on a biannual basis. The legislation provides grant programs for states to implement required risk-limiting audits, a best practice audit system that confirms election outcomes with a high degree of confidence.

The House passed the bill by a 225-184 vote, largely along party lines, but the Senate has not considered it.
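
The Committee Report’s reference to risk-limiting audits points to a statistical technique rather than a fixed procedure, and H.R. 2722 does not mandate any particular method. As a minimal sketch of the underlying idea, one well-known ballot-polling approach (BRAVO) samples ballots at random and updates a sequential test statistic until either the reported outcome is confirmed at the chosen risk limit or the audit escalates to a full hand count; the parameters below are illustrative only.

```python
import random

def bravo_audit(ballots, reported_winner_share, risk_limit=0.05, seed=1):
    """Ballot-polling risk-limiting audit for a two-candidate contest.

    Draws ballots in a random order and updates a Wald sequential test
    statistic against the null hypothesis that the race was actually tied.
    Returns True when the reported outcome is confirmed at the risk limit;
    returns False if the sample is exhausted, which in practice would
    trigger a full hand count.
    """
    rng = random.Random(seed)
    p = reported_winner_share          # must exceed 0.5 for a reported win
    t = 1.0
    for i in rng.sample(range(len(ballots)), len(ballots)):
        # Each winner ballot multiplies the statistic by p/0.5 and each
        # loser ballot by (1 - p)/0.5, per the sequential probability
        # ratio test underlying BRAVO.
        t *= (p / 0.5) if ballots[i] == "winner" else ((1 - p) / 0.5)
        if t >= 1 / risk_limit:
            return True
    return False

# Illustrative race: 10,000 ballots with a reported 55% winner share.
ballots = ["winner"] * 5500 + ["loser"] * 4500
print(bravo_audit(ballots, reported_winner_share=0.55))  # typically confirms after a few hundred draws
```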

The House took up and passed its third major election security bill of 2019, the “Stopping Harmful Interference in Elections for a Lasting Democracy Act” (SHIELD Act) (H.R. 4617), which addresses two of the technological facets of foreign disinformation campaigns aimed at U.S. elections, according to the House Administration Committee’s summary:

  • Helps prevent foreign interference in future elections by improving transparency of online political advertisements.
    • Russia attempted to influence the 2016 presidential election by buying and placing political ads on platforms such as Facebook, Twitter and Google. The content and purchasers of those online advertisements were a mystery to the public because of outdated laws that have failed to keep up with evolving technology. The SHIELD Act takes steps to prevent hidden, foreign disinformation campaigns in our elections by ensuring that political ads sold online are covered by the same rules as ads sold on TV, radio, and satellite.
  • Prohibits deceptive practices about voting procedures.
    • Independent experts have identified voter suppression tactics the Russians used on social media, including malicious misdirection designed to create confusion about voting rules. The SHIELD Act incorporates the Deceptive Practices and Voter Intimidation Prevention Act to prohibit anyone from providing false information about voting rules and qualifications for voting, provides mechanisms for disseminating correct information, and establishes strong penalties for voter intimidation.

The House passed H.R. 4617 by a 227-181 vote with all Republicans present voting no and one Democrat joining them. Again, the Senate did not take up the bill.


EDPB Releases Guidance

The EDPB tries to clear up who is and is not a controller or processor and wades into the world of social media and targeting.

The European Data Protection Board (EDPB) has released for comment two sets of draft guidelines intended to elucidate portions of the General Data Protection Regulation (GDPR).

In the draft Guidelines 07/2020 on the concepts of controller and processor in the GDPR, the EDPB is looking to update guidance its forerunner body issued under the predecessor data protection regime regarding who is a controller, joint controller, and processor. Any clarification of these definitions would obviously change how entities are regulated under the GDPR and would, ideally, harmonize data protection and processing regulation across the EU. The document suggests these definitions are not currently construed uniformly, causing the same entity to be regulated differently depending on the jurisdiction within the European Economic Area (EEA). The EDPB noted the guidelines were put together with the input of stakeholders, and it is possible that further input from a broader audience will result in a modified final product.

This draft guidance is built on the principle of accountability as enshrined in the GDPR, meaning controllers and processors must not only comply with the GDPR but be able to demonstrate compliance with it. In fact, the EDPB asserts “[t]he aim of incorporating the accountability principle into the GDPR and making it a central principle was to emphasize that data controllers must implement appropriate and effective measures and be able to demonstrate compliance.” This emphasis might mean the EU encountered challenges in getting entities to accept accountability for data protection under the GDPR’s forerunner, Directive 95/46/EC. Defining as precisely as possible who is and is not a controller, joint controller, or processor is crucial to apportioning responsibility and culpability for noncompliance, so these guidelines will be a crucial starting point both for data protection authorities (DPA) and for the entities collecting and processing the personal data of EU persons. Notably, the EDPB proposes to go beyond labels in determining who is a controller or processor by looking at what an entity is actually doing. By the same token, the Board makes clear the term controller should not be confused with the same term in other legal contexts and should be interpreted broadly to ensure the greatest possible data protection.

The EDPB claimed “[t]he main aim is to clarify the meaning of the concepts and to clarify the different roles and the distribution of responsibilities between these actors.” The EDPB stated

The Article 29 Working Party issued guidance on the concepts of controller/processor in its opinion 1/2010 (WP169) in order to provide clarifications and concrete examples with respect to these concepts. Since the entry into force of the GDPR, many questions have been raised regarding to what extent the GDPR brought changes to the concepts of controller and processor and their respective roles. Questions were raised in particular to the substance and implications of the concept of joint controllership (e.g. as laid down in Article 26 GDPR) and to the specific obligations for processors laid down in Chapter IV (e.g. as laid down in Article 28 GDPR). Therefore, and as the EDPB recognizes that the concrete application of the concepts needs further clarification, the EDPB now deems it necessary to give more developed and specific guidance in order to ensure a consistent and harmonised approach throughout the EU and the EEA. The present guidelines replace the previous opinion of Working Party 29 on these concepts (WP169).

The EDPB summarized the concepts of these terms and the interplay between entities:

  • Controller
    • In principle, there is no limitation as to the type of entity that may assume the role of a controller but in practice it is usually the organisation as such, and not an individual within the organisation (such as the CEO, an employee or a member of the board), that acts as a controller.
    • A controller is a body that decides certain key elements of the processing. Controllership may be defined by law or may stem from an analysis of the factual elements or circumstances of the case. Certain processing activities can be seen as naturally attached to the role of an entity (an employer to employees, a publisher to subscribers or an association to its members). In many cases, the terms of a contract can help identify the controller, although they are not decisive in all circumstances.
    • A controller determines the purposes and means of the processing, i.e. the why and how of the processing. The controller must decide on both purposes and means. However, some more practical aspects of implementation (“non-essential means”) can be left to the processor. It is not necessary that the controller actually has access to the data that is being processed to be qualified as a controller.
  • Joint controllers
    • The qualification as joint controllers may arise where more than one actor is involved in the processing. The GDPR introduces specific rules for joint controllers and sets a framework to govern their relationship. The overarching criterion for joint controllership to exist is the joint participation of two or more entities in the determination of the purposes and means of a processing operation. Joint participation can take the form of a common decision taken by two or more entities or result from converging decisions by two or more entities, where the decisions complement each other and are necessary for the processing to take place in such a manner that they have a tangible impact on the determination of the purposes and means of the processing. An important criterion is that the processing would not be possible without both parties’ participation in the sense that the processing by each party is inseparable, i.e. inextricably linked. The joint participation needs to include the determination of purposes on the one hand and the determination of means on the other hand.
  • Processor
    • A processor is a natural or legal person, public authority, agency or another body, which processes personal data on behalf of the controller. Two basic conditions for qualifying as processor exist: that it is a separate entity in relation to the controller and that it processes personal data on the controller’s behalf.
    • The processor must not process the data otherwise than according to the controller’s instructions. The controller’s instructions may still leave a certain degree of discretion about how to best serve the controller’s interests, allowing the processor to choose the most suitable technical and organisational means. A processor infringes the GDPR, however, if it goes beyond the controller’s instructions and starts to determine its own purposes and means of the processing. The processor will then be considered a controller in respect of that processing and may be subject to sanctions for going beyond the controller’s instructions.
  • Relationship between controller and processor
    • A controller must only use processors providing sufficient guarantees to implement appropriate technical and organisational measures so that the processing meets the requirements of the GDPR. Elements to be taken into account could be the processor’s expert knowledge (e.g. technical expertise with regard to security measures and data breaches); the processor’s reliability; the processor’s resources and the processor’s adherence to an approved code of conduct or certification mechanism.
    • Any processing of personal data by a processor must be governed by a contract or other legal act which shall be in writing, including in electronic form, and be binding. The controller and the processor may choose to negotiate their own contract including all the compulsory elements or to rely, in whole or in part, on standard contractual clauses.
    • The GDPR lists the elements that have to be set out in the processing agreement. The processing agreement should not, however, merely restate the provisions of the GDPR; rather, it should include more specific, concrete information as to how the requirements will be met and which level of security is required for the personal data processing that is the object of the processing agreement.
  • Relationship among joint controllers
    • Joint controllers shall in a transparent manner determine and agree on their respective responsibilities for compliance with the obligations under the GDPR. The determination of their respective responsibilities must in particular regard the exercise of data subjects’ rights and the duties to provide information. In addition to this, the distribution of responsibilities should cover other controller obligations such as regarding the general data protection principles, legal basis, security measures, data breach notification obligation, data protection impact assessments, the use of processors, third country transfers and contacts with data subjects and supervisory authorities.
    • Each joint controller has the duty to ensure that they have a legal basis for the processing and that the data are not further processed in a manner that is incompatible with the purposes for which they were originally collected by the controller sharing the data.
    • The legal form of the arrangement among joint controllers is not specified by the GDPR. For the sake of legal certainty, and in order to provide for transparency and accountability, the EDPB recommends that such arrangement be made in the form of a binding document such as a contract or other legal binding act under EU or Member State law to which the controllers are subject.
    • The arrangement shall duly reflect the respective roles and relationships of the joint controllers vis-à-vis the data subjects and the essence of the arrangement shall be made available to the data subject.
    • Irrespective of the terms of the arrangement, data subjects may exercise their rights in respect of and against each of the joint controllers. Supervisory authorities are not bound by the terms of the arrangement whether on the issue of the qualification of the parties as joint controllers or the designated contact point.

In the Guidelines 08/2020 on the targeting of social media users, the Board explained that the genesis of this guidance came from the EDPB itself. These guidelines are, in a sense, a more targeted version of the other draft guidelines in that they seek to clarify the responsibilities, joint and otherwise, of social media companies and others operating in the targeted advertising universe. Consequently, they would apply to companies like Facebook, Twitter, and other social media platforms and to virtually any entity using such a platform to send a targeted advertisement to a user or group of users. However, the EDPB makes clear its concern with respect to these practices is not confined to the commercial world and explains at some length its concern that EU persons could be targeted with political materials, a common practice of the Russian Federation in a number of countries, in all likelihood including EU member states. The Board stated that “[t]he main aim of these guidelines is therefore to clarify the roles and responsibilities among the social media provider and the targeter,” the latter being defined as those “that use social media services in order to direct specific messages at a set of social media users on the basis of specific parameters or criteria.”

The EDPB asserted

  • As part of their business model, many social media providers offer targeting services. Targeting services make it possible for natural or legal persons (“targeters”) to communicate specific messages to the users of social media in order to advance commercial, political, or other interests. A distinguishing characteristic of targeting is the perceived fit between the person or group being targeted and the message that is being delivered. The underlying assumption is that the better the fit, the higher the reception rate (conversion) and thus the more effective the targeting campaign (return on investment).
  • Mechanisms to target social media users have increased in sophistication over time. Organisations now have the ability to target individuals on the basis of a wide range of criteria. Such criteria may have been developed on the basis of personal data which users have actively provided or shared, such as their relationship status. Increasingly, however, targeting criteria are also developed on the basis of personal data which has been observed or inferred, either by the social media provider or by third parties, and collected (aggregated) by the platform or by other actors (e.g., data brokers) to support ad-targeting options. In other words, the targeting of social media users involves not just the act of “selecting” the individuals or groups of individuals that are the intended recipients of a particular message (the ‘target audience’), but rather it involves an entire process carried out by a set of stakeholders which results in the delivery of specific messages to individuals with social media accounts.
  • The combination and analysis of data originating from different sources, together with the potentially sensitive nature of personal data processed in the context of social media, creates risks to the fundamental rights and freedoms of individuals. From a data protection perspective, many risks relate to the possible lack of transparency and user control. For the individuals concerned, the underlying processing of personal data which results in the delivery of a targeted message is often opaque. Moreover, it may involve unanticipated or undesired uses of personal data, which raise questions not only concerning data protection law, but also in relation to other fundamental rights and freedoms. Recently, social media targeting has gained increased public interest and regulatory scrutiny in the context of democratic decision making and electoral processes.

The EDPB added

  • Taking into account the case law of the CJEU, as well as the provisions of the GDPR regarding joint controllers and accountability, the present guidelines offer guidance concerning the targeting of social media users, in particular as regards the responsibilities of targeters and social media providers. Where joint responsibility exists, the guidelines will seek to clarify what the distribution of responsibilities might look like between targeters and social media providers on the basis of practical examples.
  • The main aim of these guidelines is therefore to clarify the roles and responsibilities among the social media provider and the targeter. In order to do so, the guidelines also identify the potential risks for the rights and freedoms of individuals (section 3), the main actors and their roles (section 4), and tackle the application of key data protection requirements (such as lawfulness and transparency, DPIA, etc.) as well as key elements of arrangements between social media providers and the targeters.

The EDPB explained the means by which targeting occurs: “[s]ocial media users may be targeted on the basis of provided, observed or inferred data, as well as a combination thereof”:

  a) Targeting individuals on the basis of provided data – “Provided data” refers to information actively provided by the data subject to the social media provider and/or the targeter. For example:
    • A social media user might indicate his or her age in the description of his or her user profile. The social media provider, in turn, might enable targeting on the basis of this criterion.
    • A targeter might use information provided by the data subject to the targeter in order to target that individual specifically, for example by means of customer data (such as an e-mail address list), to be matched with data already held on the social media platform, leading to all those users who match being targeted with advertising.
  b) Targeting on the basis of observed data – Targeting of social media users can also take place on the basis of observed data. Observed data are data provided by the data subject by virtue of using a service or device. For example, a particular social media user might be targeted on the basis of:
    • his or her activity on the social media platform itself (for instance the content that the user has shared, consulted or liked);
    • the use of devices on which the social media’s application is executed (for instance GPS coordinates, mobile telephone number);
    • data obtained by a third-party application developer by using the application programming interfaces (APIs) or software development kits (SDKs) offered by social media providers;
    • data collected through third-party websites that have incorporated social plugins or pixels;
    • data collected through other third parties (e.g. parties with whom the data subject has  interacted, purchased a product, subscribed to loyalty cards, …); or
    • data collected through services offered by companies owned or operated by the social media provider.

The EDPB added

c) Targeting on the basis of inferred data – “Inferred data” or “derived data” are created by the data controller on the basis of the data provided by the data subject or as observed by the controller. For example, a social media provider or a targeter might infer that an individual is likely to be interested in a certain activity or product on the basis of his or her web browsing behaviour and/or network connections.
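
In practice, the customer-list matching described under “provided data” above is commonly implemented by intersecting hashed identifiers: the targeter uploads hashed e-mail addresses and the platform compares them against hashes of its own users’ addresses. The following is a minimal sketch under that assumption; the hashing scheme (SHA-256) and all data are illustrative, not drawn from the guidelines, and the sketch says nothing about the GDPR question of who controls this processing.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Platforms that offer list-based matching typically require addresses
    # to be trimmed and lowercased before hashing so identical addresses
    # always yield identical digests.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def match_customer_list(targeter_emails, platform_hashes):
    """Intersect a targeter's hashed customer list with platform-side hashes.

    Only the overlap is targetable; everything here (the hashing scheme,
    the sample data) is an assumption for illustration.
    """
    uploaded = {normalize_and_hash(e) for e in targeter_emails}
    return uploaded & platform_hashes

# Hypothetical data: two of the targeter's customers hold accounts.
platform_hashes = {normalize_and_hash(e) for e in
                   ["alice@example.com", "bob@example.com", "carol@example.com"]}
customer_list = [" Alice@Example.com", "dave@example.com", "bob@example.com"]
print(len(match_customer_list(customer_list, platform_hashes)))  # prints 2
```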


Further Reading, Other Developments, and Coming Events (8 September)

Here is today’s Further Reading, Other Developments, and Coming Events.

Coming Events

  • The United States-China Economic and Security Review Commission will hold a hearing on 9 September on “U.S.-China Relations in 2020: Enduring Problems and Emerging Challenges” to “evaluate key developments in China’s economy, military capabilities, and foreign relations, during 2020.”
  • On 10 September, the General Services Administration (GSA) will have a webinar to discuss implementation of Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232), which bars the federal government and its contractors from buying equipment and services from Huawei, ZTE, and other companies from the People’s Republic of China.
  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.”
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The National Institute of Standards and Technology (NIST) announced a 15 and 16 September webinar to discuss its Draft Outline of Cybersecurity Profile for the Responsible Use of Positioning, Navigation, and Timing (PNT) Services. NIST stated it “seeks insight and feedback on this Annotated Outline to improve the PNT cybersecurity profile, which is scheduled for publication in February 2021…[and] [a]reas needing more input include feedback on the description of systems that use PNT services and the set of standards, guidelines, and practices addressing systems that use PNT services.” NIST explained that “[t]hrough the Profile development process, NIST will engage the public and private sectors on multiple occasions to include a request for information, participation in workshops, solicitation of feedback on this annotated outline, and public review and comment on the draft Profile.” The agency added “[t]he Profile development process is iterative and, in the end state, will identify and promote the responsible use of PNT services from a cybersecurity point of view.”
    • In June, NIST released a request for information (RFI) “about public and private sector use of positioning, navigation, and timing (PNT) services, and standards, practices, and technologies used to manage cybersecurity risks, to systems, networks, and assets dependent on PNT services.” This RFI is being undertaken per direction in a February executive order (EO) to serve as the foundation for the Trump Administration’s efforts to lessen the reliance of United States’ (U.S.) critical infrastructure on current PNT systems and services. Specifically, the EO seeks to build U.S. capacity to meet and overcome potential disruption or manipulation of the PNT systems and services used by virtually every key sector of the public and private sectors of the U.S.
    • NIST explained “Executive Order 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services, was issued on February 12, 2020 and seeks to protect the national and economic security of the United States from disruptions to PNT services that are vital to the functioning of technology and infrastructure, including the electrical power grid, communications infrastructure and mobile devices, all modes of transportation, precision agriculture, weather forecasting, and emergency response.” The EO directed NIST “to develop and make available, to at least the appropriate agencies and private sector users, PNT profiles.” NIST said “[r]esponses to this RFI will inform NIST’s development of a PNT profile, using the NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST Cybersecurity Framework), that will enable the public and private sectors to identify systems, networks, and assets dependent on PNT services; identify appropriate PNT services; detect the disruption and manipulation of PNT services; and manage the associated cybersecurity risks to the systems, networks, and assets dependent on PNT services.”
    • The EO defines the crucial term this RFI uses: “PNT profile” means a description of the responsible use of PNT services—aligned to standards, guidelines, and sector-specific requirements—selected for a particular system to address the potential disruption or manipulation of PNT services.
    • In April, the Department of Homeland Security (DHS) released a Congressionally required report, “Report on Positioning, Navigation, and Timing (PNT) Backup and Complementary Capabilities to the Global Positioning System (GPS),” required by Section 1618 of the “2017 National Defense Authorization Act” (NDAA) (P.L. 114–328) and originally due in December 2017. DHS offered “recommendations to address the nation’s PNT requirements and backup or complementary capability gaps.”
  • Switzerland’s Federal Data Protection and Information Commissioner (FDPIC) has reversed itself and decided that the Swiss-U.S. Privacy Shield does not provide adequate protection for Swiss citizens whose data is transferred for processing into the United States (U.S.). However, it does not appear that there will be any practical effect as of yet. The FDPIC determined that the agreement “does not provide an adequate level of protection for data transfer from Switzerland to the US pursuant to the Federal Act on Data Protection (FADP).” This decision comes two months after the Court of Justice of the European Union (CJEU) struck down the European Union-U.S. Privacy Shield. The FDPIC noted this determination followed “his annual assessment of the Swiss-US Privacy Shield regime and recent rulings on data protection by the CJEU.” The FDPIC also issued a policy paper explaining the determination. The FDPIC added
    • As a result of this assessment, which is based on Swiss law, the FDPIC has deleted the reference to ‘adequate data protection under certain conditions’ for the US in the FDPIC’s list of countries. Since the FDPIC’s assessment has no influence on the continued existence of the Privacy Shield regime, and those concerned can invoke the regime as long as it is not revoked by the US, the comments on the Privacy Shield in the list of countries will be retained in an adapted form.
  • The United States Department of Defense (DOD) released its statutorily required annual report on the People’s Republic of China (PRC) that documented the rising power of the nation, especially with respect to cybersecurity and information warfare. The Pentagon noted
    • 2020 marks an important year for the People’s Liberation Army (PLA) as it works to achieve important modernization milestones ahead of the Chinese Communist Party’s (CCP) broader goal to transform China into a “moderately prosperous society” by the CCP’s centenary in 2021. As the United States continues to respond to the growing strategic challenges posed by the PRC, 2020 offers a unique opportunity to assess both the continuity and changes that have taken place in the PRC’s strategy and armed forces over the past two decades.
    • Regarding Cyberwarfare, the DOD asserted
      • The development of cyberwarfare capabilities is consistent with PLA writings, which identify Information Operations (IO) – comprising cyber, electronic, and psychological warfare – as integral to achieving information superiority and as an effective means for countering a stronger foe. China has publicly identified cyberspace as a critical domain for national security and declared its intent to expedite the development of its cyber forces.
      • The PRC presents a significant, persistent cyber espionage and attack threat to military and critical infrastructure systems. China seeks to create disruptive and destructive effects—from denial-of-service attacks to physical disruptions of critical infrastructure—to shape decision-making and disrupt military operations in the initial stages of a conflict by targeting and exploiting perceived weaknesses of militarily superior adversaries. China is improving its cyberattack capabilities and has the ability to launch cyberattacks—such as disruption of a natural gas pipeline for days to weeks—in the United States.
      • PLA writings note the effectiveness of IO and cyberwarfare in recent conflicts and advocate targeting C2 and logistics networks to affect an adversary’s ability to operate during the early stages of conflict. Authoritative PLA sources call for the coordinated employment of space, cyber, and EW as strategic weapons to “paralyze the enemy’s operational system of systems” and “sabotage the enemy’s war command system of systems” early in a conflict. Increasingly, the PLA considers cyber capabilities a critical component in its overall integrated strategic deterrence posture, alongside space and nuclear deterrence. PLA studies discuss using warning or demonstration strikes—strikes against select military, political, and economic targets with clear “awing effects”—as part of deterrence. Accordingly, the PLA probably seeks to use its cyberwarfare capabilities to collect data for intelligence and cyberattack purposes; to constrain an adversary’s actions by targeting network-based logistics, C2, communications, commercial activities, and civilian and defense critical infrastructure; or, to serve as a force-multiplier when coupled with kinetic attacks during armed conflict.
      • The PLA’s ongoing structural reforms may further change how the PLA organizes and commands IO, particularly as the Strategic Support Force (SSF) evolves over time. By consolidating cyber and other IO-related elements, the SSF likely is generating synergies by combining national-level cyber reconnaissance, attack, and defense capabilities in its organization.
    • The DOD also noted the PLA’s emphasis on intelligentized warfare:
      • The PLA sees emerging technologies as driving a shift to “intelligentized” warfare from today’s “informatized” way of war. PLA strategists broadly describe intelligentized warfare as the operationalization of artificial intelligence (AI) and its enabling technologies, such as cloud computing, big data analytics, quantum information, and unmanned systems, for military applications. These technologies, according to PRC leaders—including Chairman Xi Jinping— represent a “Revolution in Military Affairs” for which China must undertake a whole-of-government approach to secure critical economic and military advantages against advanced militaries.
  • The United States’ (U.S.) Citizenship and Immigration Services (USCIS) of the Department of Homeland Security (DHS) is proposing a rule “to amend DHS regulations concerning the use and collection of biometrics in the enforcement and administration of immigration laws by USCIS, U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE).”
    • USCIS further explained:
    • First, DHS proposes that any applicant, petitioner, sponsor, beneficiary, or individual filing or associated with an immigration benefit or request, including United States citizens, must appear for biometrics collection without regard to age unless DHS waives or exempts the biometrics requirement.
    • Second, DHS proposes to authorize biometric collection, without regard to age, upon arrest of an alien for purposes of processing, care, custody, and initiation of removal proceedings.
    • Third, DHS proposes to define the term biometrics.
    • Fourth, this rule proposes to increase the biometric modalities that DHS collects, to include iris image, palm print, and voice print.
    • Fifth, this rule proposes that DHS may require, request, or accept DNA test results, which include a partial DNA profile, to prove the existence of a claimed genetic relationship and that DHS may use and store DNA test results for the relevant adjudications or to perform any other functions necessary for administering and enforcing immigration and naturalization laws.
    • Sixth, this rule would modify how VAWA and T nonimmigrant petitioners demonstrate good moral character, as well as remove the presumption of good moral character for those under the age of 14. 
    • Lastly, DHS proposes to further clarify the purposes for which biometrics are collected from individuals filing immigration applications or petitions, to include criminal history and national security background checks; identity enrollment, verification, and management; secure document production, and to administer and enforce immigration and naturalization laws.

Further Reading

  • “State aid helps China tech leaders shrug off US sanctions” By Kenji Kawase – Nikkei Asian Review. A number of companies placed on the United States’ no-trade list have received generous subsidies from the government in Beijing. The People’s Republic of China (PRC) sees the health of a number of these companies as vital to its long-term development and is willing to prop them up; some have received subsidies worth multiples of their net profit to keep them afloat.
  • “Facebook Says Trump’s Misleading Post About Mail-In Voting Is OK. Employees Say It’s Not.” By Craig Silverman and Ryan Mac – BuzzFeed News. There is more internal dissension at Facebook even after the company announced it would not accept political advertising in the last week of the election and would correct misinformation about voting. Within hours of the policy change, President Donald Trump encouraged voters to possibly vote twice, which many Facebook employees saw as a violation of the new policy. The company disagreed and appended to the post a claim from a bipartisan think tank study finding that mail-in voting is largely fraud free.
  • “Why Facebook’s Blocking of New Political Ads May Fall Short” By Davey Alba and Sheera Frenkel – The New York Times. This piece explains in detail why Facebook’s new policy to combat political misinformation is likely to fall quite short of addressing the problem.
  • “Student arrested for cyberattack against Miami schools used ‘easy to prevent’ program” By Colleen Wright and David Ovalle – Miami Herald. The United States’ fourth largest school district fell victim to a distributed denial of service attack launched by a 16-year-old student using tools more than a decade old that he downloaded from the internet. The unnamed hacker disrupted the Miami-Dade school district’s first three days of online classes, raising questions about the cybersecurity of a school system where such an old attack succeeded so easily and about how safe the personal information of students is there and in districts around the country.
  • “Trump and allies ratchet up disinformation efforts in late stage of campaign” By Ashley Parker – The Washington Post. It has been apparent for some time that President Donald Trump and a number of his Republican allies are intentionally or recklessly spreading false information to help his campaign gain ground on frontrunner former Vice President Joe Biden. The goal is to so muddy the waters that the average person will neither be able to discern the truth of a claim nor be concerned about doing so. This is the very approach Russia’s leader Vladimir Putin has successfully executed in pushing his country into a post-truth world. Experts warn that a continuation of this trend in the United States (U.S.) could wreak potentially irreparable harm.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.
