Preview of Senate Democratic Chairs

It’s not clear who will end up where, but the new Senate chairs will shape the focus and agenda of committees and the terms of debate over the next two years.

With the victories of Senators-elect Raphael Warnock (D-GA) and Jon Ossoff (D-GA), control of the United States Senate will tip to the Democrats once Vice President-elect Kamala Harris (D) is sworn in and can break the 50-50 tie in the chamber in favor of the Democrats. With the shift in control, new chairs will take over committees key to setting the agenda in the Senate over the next two years. However, given the filibuster, and the fact that Senate Republicans will exert maximum leverage through its continued use, Democrats will be hamstrung and forced to work with Republicans on matters such as federal privacy legislation, artificial intelligence (AI), the Internet of Things (IoT), cybersecurity, data flows, surveillance, etc., just as Republicans have had to work with Democrats over the six years they controlled the chamber. Having said that, Democrats will be in a stronger position than they have been, with the power to set the agenda in committee hearings, call the lion’s share of witnesses, and control the floor agenda. What’s more, Democrats will be poised to confirm President-elect Joe Biden’s nominees at agencies like the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), the Department of Justice (DOJ), and others, giving the Biden Administration a free hand in many areas of technology policy.

All of that being said, this is not meant to be an exhaustive look at all the committees of jurisdiction and possible chairs. Rather, it seeks to survey likely chairs on selected committees and some of their priorities for the next two years. Subcommittee chairs will also be important, but until the cards get shuffled among the chairs, it will not be possible to see where they land at the subcommittee level.

When considering the possible Democratic chairs of committees, one must keep in mind it is often a matter of musical chairs with the most senior members getting first choice. And so, with Senator Patrick Leahy (D-VT) as the senior-most Democratic Senator, he may well choose to leave the Appropriations Committee and move back to assume the gavel of the Judiciary Committee. Leahy has long been a stakeholder on antitrust, data security, privacy, and surveillance legislation and would be in a position to influence what bills on those and other matters before the Senate look like. If Leahy does not move to the chair on Judiciary, he may still be entitled to chair a subcommittee and exert influence.

If Leahy stays put, then current Senate Minority Whip Dick Durbin (D-IL) would be poised to leapfrog Senator Dianne Feinstein (D-CA) to chair Judiciary after Feinstein was persuaded to step aside on account of her lackluster performance in a number of high-profile hearings in 2020. Durbin has also been active on privacy, data security, and surveillance issues. The Judiciary Committee will be central to a number of technology policies, including Foreign Intelligence Surveillance Act (FISA) reauthorization, privacy legislation, Section 230 reform, antitrust, and others. On the Republican side of the dais, Senator Lindsey Graham (R-SC) is leaving the top post because of term limit restrictions imposed by Republicans, and Senator Charles Grassley (R-IA) is set to replace him. How this changes the 47 USC 230 (Section 230) debate is not immediately clear. And yet, Grassley and three colleagues recently urged the Trump Administration in a letter to omit language in a trade agreement with the United Kingdom (UK) that mirrors the liability protection of Section 230. Senators Rob Portman (R-OH), Mark R. Warner (D-VA), Richard Blumenthal (D-CT), and Grassley argued to U.S. Trade Representative Ambassador Robert Lighthizer that a “safe harbor” like the one provided to technology companies for hosting or moderating third party content is outdated, not needed in a free trade agreement, contrary to the will of both the Congress and UK Parliament, and likely to be changed legislatively in the near future. It is likely, however, Grassley will fall in with other Republicans propagating the narrative that social media is unfairly biased against conservatives, particularly in light of the recent purge of President Donald Trump for his many, repeated violations of policy.

The Senate Judiciary Committee will be central in any policy discussions of antitrust and anti-competitive conduct in the technology realm. But it bears noting that the filibuster (and the very low chances Senate Democrats would “go nuclear” and remove all vestiges of the functional supermajority requirement to pass legislation) will give Republicans leverage to block some of the more ambitious reforms Democrats might like to enact (e.g. the House Judiciary Committee’s October 2020 final report that calls for nothing less than a complete remaking of United States (U.S.) antitrust policy and law; see here for more analysis.)

It seems Senator Sherrod Brown (D-OH) will be the next chair of the Senate Banking, Housing, and Urban Affairs Committee, which has jurisdiction over cybersecurity, data security, privacy, and other issues in the financial services sector, making it a player on any legislation designed to encompass the whole of the United States economy. Having said that, it may again be the case that sponsors of, say, privacy legislation decide to cut the Gordian knot of jurisdictional turf battles by cutting out certain committees. For example, many of the privacy bills had provisions making clear they would deem financial services entities in compliance with the Financial Services Modernization Act of 1999 (P.L. 106-102) (aka Gramm-Leach-Bliley) to be in compliance with the new privacy regime. I suppose these provisions may have been included on the basis of the very high privacy and data security standards Gramm-Leach-Bliley has brought about (e.g. the Experian hack), or sponsors of federal privacy legislation made the strategic calculation to circumvent the Senate Banking Committee as much as possible. Nonetheless, this committee sought to insert itself into the privacy policymaking process when Brown and outgoing Chair Mike Crapo (R-ID) requested “feedback” in February 2019 “from interested stakeholders on the collection, use and protection of sensitive information by financial regulators and private companies.” Additionally, Brown released what may be the most expansive privacy bill from the perspective of privacy and civil liberties advocates, the “Data Accountability and Transparency Act of 2020,” in June 2020 (see here for my analysis.) Therefore, Brown may continue to push for a role in federal privacy legislation with a gavel in his hands.

In a similar vein, Senator Patty Murray (D-WA) will likely take over the Senate Health, Education, Labor, and Pensions (HELP) Committee, which has jurisdiction over health information privacy and data security through the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act). Again, as with the Senate Banking Committee and Gramm-Leach-Bliley, most of the privacy bills exempt HIPAA-compliant entities. And yet, even if her committee is cut out of a direct role in privacy legislation, Murray will still likely exert influence through oversight of, and possible legislation changing, HIPAA regulations and the Department of Health and Human Services’ (HHS) enforcement and rewriting of these standards for most of the healthcare industry. For example, HHS is rushing a rewrite of the HIPAA regulations at the tail end of the Trump Administration, and Murray could be in a position to inform how the Biden Administration and Secretary of Health and Human Services-designate Xavier Becerra handle this rulemaking. Additionally, Murray may push the Office for Civil Rights (OCR), the arm of HHS that writes and enforces these regulations, to prioritize matters differently.

Senator Maria Cantwell (D-WA) appears to be the next chair of the Senate Commerce, Science, and Transportation Committee, which holds arguably the largest technology portfolio in the Senate. It is the primary committee of jurisdiction for the FCC, FTC, National Telecommunications and Information Administration (NTIA), the National Institute of Standards and Technology (NIST), and the Department of Commerce. Cantwell may exert influence on which people are nominated to head and staff those agencies and others. Her committee is also the primary committee of jurisdiction for domestic and international privacy and data protection matters. And so, federal privacy legislation will likely be drafted by this committee, and legislative changes needed so the U.S. can enter into a new personal data sharing agreement with the European Union (EU) would also likely involve her and her committee.

Cantwell and likely next Ranking Member Roger Wicker (R-MS) agree on many elements of a federal privacy law but were at odds last year on federal preemption and whether people could sue companies for privacy violations. Between them, they circulated three privacy bills. In September 2020, Wicker and three Republican colleagues introduced the “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) (see here for more analysis). Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (see here for analysis), in November 2019, shortly after Cantwell and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act“ (COPRA) (S.2968) (see here for more analysis).

Cantwell could also take a leading role on Section 230, but her focus, of late, seems to be on how technology companies are wreaking havoc on traditional media. She released a report that she mentioned during her opening statement at the 23 September hearing aimed at trying to revive data privacy legislation. She and her staff investigated the decline and financial troubles of local media outlets, which are facing a cumulative loss in advertising revenue of up to 70% since 2000. And since advertising revenue has long been the lifeblood of print journalism, this has devastated local media, with many outlets shutting their doors or radically cutting their staff. This trend has been exacerbated by consolidation in the industry, often in concert with private equity or hedge funds looking to wring the last dollars of value from bargain basement priced newspapers. Cantwell also claimed that the overwhelming online advertising dominance of Google and Facebook has further diminished advertising revenue and other possible sources of funding through a variety of means. She intimated that much of this conduct may be illegal under U.S. law, and the FTC may well be able to use its Section 5 powers against unfair and deceptive acts and its antitrust authority to take action (see here for more analysis and context). In this vein, Cantwell will want her committee to play a role in any antitrust policy changes, likely knowing massive changes in U.S. law are not possible in a split Senate with entrenched party positions and discipline.

Senator Jack Reed (D-RI) will take over the Senate Armed Services Committee and its portfolio over national security technology policy, which includes the cybersecurity, data protection, and supply chains of national security agencies and their contractors, AI, offensive and defensive U.S. cyber operations, and other realms. Many of the changes Reed and his committee will seek to make will come through the annual National Defense Authorization Act (NDAA) (see here and here for the many technology provisions in the FY 2021 NDAA.) Reed may also prod the Department of Defense (DOD) to implement or enforce the Cybersecurity Maturity Model Certification (CMMC) Framework differently than envisioned and designed by the Trump Administration. In December 2020, a new rule took effect designed to drive better cybersecurity among U.S. defense contractors. This rule brings together two different lines of effort to require the Defense Industrial Base (DIB) to employ better cybersecurity given the risks contractors face in holding and using classified information, Federal Contract Information (FCI), and Controlled Unclassified Information (CUI). The Executive Branch has long wrestled with how best to push contractors to secure their systems, and Congress and the White House have opted to use federal contract requirements under which contractors must certify compliance. However, the most recent initiative, the CMMC Framework, will require contractors to be certified by third party assessors. And yet, it is not clear the DOD has wrestled with the often-misaligned incentives present in third party certification schemes.

Reed’s committee will undoubtedly delve deep into the recent SolarWinds hack and implement policy changes to avoid a recurrence. Doing so may lead the Senate Armed Services Committee back to reconsidering the Cyberspace Solarium Commission’s (CSC) March 2020 final report and follow-up white papers, especially the views embodied in “Building a Trusted ICT Supply Chain.”

Senator Mark Warner (D-VA) will likely take over the Senate Intelligence Committee. Warner has long been a stakeholder on a number of technology issues and would be able to exert influence on the national security components of such issues. He and his committee will almost certainly play a role in the Congressional oversight of and response to the SolarWinds hack. Likewise, his committee shares jurisdiction over FISA with the Senate Judiciary Committee and over national security technology policy with the Armed Services Committee.

Senator Amy Klobuchar (D-MN) would be the Senate Democratic point person on election security from her perch at the Senate Rules and Administration Committee, which may enable her to push more forcefully for the legislative changes she has long advocated. In May 2019, Klobuchar and other Senate Democrats introduced the “Election Security Act” (S. 1540), the Senate version of the stand-alone measure introduced in the House, which was taken from the larger “For the People Act” (H.R. 1) that the House passed.

In August 2018, the Senate Rules and Administration Committee postponed indefinitely a markup on a compromise bill to provide states additional assistance in securing elections from interference, the “Secure Elections Act” (S.2593). Reportedly, there was concern among state officials that a provision requiring audits of election results would be in effect an unfunded mandate, even though this provision had been softened at the insistence of Senate Republican leadership. A Trump White House spokesperson also indicated in a statement that the Administration opposed the bill, which may have posed an additional obstacle to Committee action. In any event, even if the Senate had passed its bill, it was unlikely that the Republican-controlled House would have considered companion legislation (H.R. 6663).

Senator Gary Peters (D-MI) may be the next chair of the Senate Homeland Security and Governmental Affairs Committee, and if so, he will continue to face the rock on which many a bark of cybersecurity legislation has been dashed: Senator Ron Johnson (R-WI). So significant has Johnson’s opposition to bipartisan cybersecurity legislation from the House been that some House Republican stakeholders have said as much in media accounts without bothering to hide behind anonymity. And so, whatever Peters’ ambitions may be to shore up the cybersecurity of the federal government, and even though his committee will play a role in investigating and responding to the Russian hack of SolarWinds and many federal agencies, he will be limited by whatever Johnson and other Republicans will allow to move through the committee and through the Senate. Of course, Peters’ purview would include the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency (CISA) and its remit to police the cybersecurity practices of the federal government. Peters would also have in his portfolio the information technology (IT) practices of the federal government, some $90 billion in annual spending across all agencies.

Finally, whether it is Leahy or Durbin who ends up chairing the Senate Appropriations Committee, the post allows for immense influence over funding and programmatic changes across all federal programs through Congress’s power of the purse.

Sponsors Take A New Run At Privacy Law in Washington State

Perhaps the third time is the charm? Legislators seek to pass a privacy law in Washington state for the third year in a row.

A group of Washington state Senators have introduced a slightly altered version of a privacy bill they floated last summer. A committee of jurisdiction will hold a hearing on 14 January 2021 on SB 5062. Of course, this marks the third year in a row legislators have tried to enact the Washington Privacy Act. The new bill (SB 5062) tracks closely with the two bills produced by the Washington Senate and House last year that lawmakers could not ultimately reconcile. However, there are no provisions on facial recognition technology, which was largely responsible for sinking a privacy bill in Washington State two years ago. The sponsors have also taken the unusual step of appending language covering the collection and processing of personal data to combat infectious diseases like COVID-19.

I analyzed the discussion draft that Washington State Senator Reuven Carlyle (D-Seattle) released over the summer, and so I will not recite everything about the new bill. It should suffice to highlight the differences between the discussion draft and the introduced legislation. Big picture, the bill still uses the concepts of data controllers and processors most famously enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). As with other privacy bills, people in Washington State generally would not need to consent before an entity could collect and process their information. People would be able to opt out of some activities, but most data collection and processing could still occur as it presently does.

The date on which the bill would take effect was pushed back from 120 days in the discussion draft to 31 July 2022 in the introduced bill. While SB 5062, unlike the discussion draft, would cover non-profits, institutions of higher education, airlines, and others, the effective date for coverage of these entities would be 31 July 2026. The right of a person to access personal data a controller is processing is narrowed slightly in that it would no longer cover the personal data the controller has but rather categories of personal data. The time controllers would have to respond to a certain class of request would be decreased from 45 to 15 days. This class includes requests to opt out of targeted advertising, the sale of personal data, and any profiling in furtherance of decisions with legal effects. Section 106’s requirement that processors have reasonable security measures has been massaged, rephrased, and possibly weakened a bit.

One of the activities controllers and processors could undertake without meeting the requirements of the act was removed. Notably, they will no longer be able to “conduct internal research solely to improve or repair products, services, or technology.” There is also a clarification that using any of the exemptions in Section 110 does not make an entity a controller for purposes of the bill. There is a new requirement that the State Office of Privacy and Data Protection must examine current technology that allows for mass or global opt out or opt in and then report to the legislature. Finally, two of the Congressional stakeholders on privacy and data security hail from Washington state, and consideration and possible passage of a state law may limit their latitude on a federal bill they could support. Senator Maria Cantwell (D-WA) and Representative Cathy McMorris Rodgers (R-WA), who are the ranking members of the Senate Commerce, Science, and Transportation Committee and House Energy and Commerce Committee respectively, are expected to be involved in drafting their committees’ privacy bills, and a Washington state statute may affect their positions in much the same way the “California Consumer Privacy Act” (CCPA) (AB 375) has informed a number of California Members’ positions on privacy legislation, especially with respect to bills being seen as weaker than the CCPA.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (12 January 2021)

Further Reading

  • “Biden’s NSC to focus on global health, climate, cyber and human rights, as well as China and Russia” By Karen DeYoung — The Washington Post. Like almost every incoming White House, the Biden team has announced a restructuring of the National Security Council (NSC) to better effectuate the President-elect’s policy priorities. To no one’s surprise, the volume on cybersecurity policy will be turned up. Another notable change is the plan to take “cross-cutting” approaches to issues that will likely meld foreign and domestic and national security and civil issues, meaning there could be a new look at offensive cyber operations, for example. It is possible President Biden decides to put the genie back in the bottle, so to speak, by re-imposing an interagency decision-making process as opposed to the Trump Administration’s approach of delegating discretion to the National Security Agency/Cyber Command head. Also, the NSC will focus on emerging technology, a likely response to the technology arms race the United States finds itself in against the People’s Republic of China.
  • “Exclusive: Pandemic relief aid went to media that promoted COVID misinformation” By Caitlin Dickson — yahoo! news. The consulting firm Alethea Group and the nonprofit Global Disinformation Index are claiming the COVID stimulus Paycheck Protection Program (PPP) provided loans and assistance to five firms that “were publishing false or misleading information about the pandemic, thus profiting off the infodemic” according to an Alethea Group vice president. This report follows an NBC News article claiming that 14 white supremacist and racist organizations have also received PPP loans. The Alethea Group and Global Disinformation Index named five entities who took PPP funds and kept spreading pandemic misinformation: Epoch Media Group, Newsmax Media, The Federalist, Liftable Media, and Prager University.
  • “Facebook shuts Uganda accounts ahead of vote” — France24. The social media company shuttered a number of Facebook and Instagram accounts related to government officials in Uganda ahead of an election on account of “Coordinated Inauthentic Behaviour” (CIB). This follows the platform shutting down accounts related to the French Army and Russia seeking to influence events in Africa. These and other actions may indicate the platform is starting to pay more attention to the non-western world; at least one former employee has argued the platform was negligent at best and reckless at worst in not properly resourcing efforts to police CIB throughout the Third World.
  • “China tried to punish European states for Huawei bans by adding eleventh-hour rule to EU investment deal” By Finbarr Bermingham — South China Morning Post. At nearly the end of talks on a People’s Republic of China (PRC)-European Union (EU) trade deal, PRC negotiators tried slipping in language that would have barred entry to the PRC’s cloud computing market to any country or company from a country that restricts Huawei’s services and products. This is alternately being seen as either standard Chinese negotiating tactics or an attempt to avenge the thwarting of the crown jewel in its telecommunications ambitions.
  • “Chinese regulators to push tech giants to share consumer credit data – sources” By Julie Zhu — Reuters. Ostensibly in a move to better manage the risks of too much unsafe lending, tech giants in the People’s Republic of China (PRC) will soon need to share data on consumer loans. It seems inevitable that such data will be used by Beijing to further crack down on undesirable people and elements within the PRC.
  • “The mafia turns social media influencer to reinforce its brand” By Miles Johnson — The Financial Times. Even Italy’s feared ’Ndrangheta is creating and curating a social media presence.

Other Developments

  • President Donald Trump signed an executive order (EO) that bans eight applications from the People’s Republic of China on much the same grounds as the EOs prohibiting TikTok and WeChat. If this EO is not rescinded by the Biden Administration, federal courts may block its implementation as has happened with the TikTok and WeChat EOs to date. Notably, courts have found that the Trump Administration exceeded its authority under the International Emergency Economic Powers Act (IEEPA), which may also be an issue in the proposed prohibition on Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office. Trump found:
    • that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the pace and pervasiveness of the spread in the United States of certain connected mobile and desktop applications and other software developed or controlled by persons in the People’s Republic of China, to include Hong Kong and Macau (China), continue to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by these Chinese connected software applications.
    • Trump directed that within 45 days of issuance of the EO, there shall be a prohibition on “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States, with persons that develop or control the following Chinese connected software applications, or with their subsidiaries, as those transactions and persons are identified by the Secretary of Commerce (Secretary) under subsection (e) of this section: Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office.”
  • The Government Accountability Office (GAO) issued its first statutorily required annual assessment of how well the United States Department of Defense (DOD) is managing its major information technology (IT) procurements. The DOD spent more than $36 billion of the $90 billion the federal government was provided for IT in FY 2020. The GAO was tasked with assessing how well the DOD did in using iterative development, managing costs and schedules, and implementing cybersecurity measures. The GAO found progress in the first two realms but a continued lag in deploying long recommended best practices to ensure the security of the IT the DOD buys or builds. For its review, the GAO focused on 15 major IT acquisitions that qualify as either administrative (i.e. “business”) or communications and information security (i.e. “non-business”) programs. While there were no explicit recommendations made, the GAO found:
    • Ten of the 15 selected major IT programs exceeded their planned schedules, with delays ranging from 1 month for the Marine Corps’ CAC2S Inc 1 to 5 years for the Air Force’s Defense Enterprise Accounting and Management System-Increment 1.
    • …eight of the 10 selected major IT programs that had tested their then-current technical performance targets reported having met all of their targets…. As of December 2019, four programs had not yet conducted testing activities—Army’s ACWS, Air Force’s AFIPPS Inc 1, Air Force’s MROi, and Navy ePS. Testing data for one program, Air Force’s ISPAN Inc 4, were classified.
    • …officials from the 15 selected major IT programs we reviewed reported using software development approaches that may help to limit risks to cost and schedule outcomes. For example, major business IT programs reported using COTS software. In addition, most programs reported using an iterative software development approach and using a minimum deployable product. With respect to cybersecurity practices, all the programs reported developing cybersecurity strategies, but programs reported mixed experiences with respect to conducting cybersecurity testing. Most programs reported using operational cybersecurity testing, but less than half reported conducting developmental cybersecurity testing. In addition, programs that reported conducting cybersecurity vulnerability assessments experienced fewer increases in planned program costs and fewer schedule delays. Programs also reported a variety of challenges associated with their software development and cybersecurity staff.
    • 14 of the 15 programs reported using an iterative software development approach which, according to leading practices, may help reduce cost growth and deliver better results to the customer. However, programs also reported using an older approach to software development, known as waterfall, which could introduce risk for program cost growth because of its linear and sequential phases of development that may be implemented over a longer period of time. Specifically, two programs reported using a waterfall approach in conjunction with an iterative approach, while one was solely using a waterfall approach.
    • With respect to cybersecurity, programs reported mixed implementation of specific practices, contributing to program risks that might impact cost and schedule outcomes. For example, all 15 programs reported developing cybersecurity strategies, which are intended to help ensure that programs are planning for and documenting cybersecurity risk management efforts.
    • In contrast, only eight of the 15 programs reported conducting cybersecurity vulnerability assessments—systematic examinations of an information system or product intended to, among other things, determine the adequacy of security measures and identify security deficiencies. These eight programs experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report using cybersecurity vulnerability assessments.
  • The United States (U.S.) Department of Energy gave notice of a “Prohibition Order prohibiting the acquisition, importation, transfer, or installation of specified bulk-power system (BPS) electric equipment that directly serves Critical Defense Facilities (CDFs), pursuant to Executive Order 13920.” (See here for analysis of the executive order.) The Department explained:
    • Executive Order No. 13920 of May 1, 2020, Securing the United States Bulk-Power System (85 FR 26595 (May 4, 2020)) (E.O. 13920) declares that threats by foreign adversaries to the security of the BPS constitute a national emergency. A current list of such adversaries is provided in a Request for Information (RFI), issued by the Department of Energy (Department or DOE) on July 8, 2020 seeking public input to aid in its implementation of E.O. 13920. The Department has reason to believe, as detailed below, that the government of the People’s Republic of China (PRC or China), one of the listed adversaries, is equipped and actively planning to undermine the BPS. The Department has thus determined that certain BPS electric equipment or programmable components subject to China’s ownership, control, or influence, constitute undue risk to the security of the BPS and to U.S. national security. The purpose of this Order is to prohibit the acquisition, importation, transfer, or subsequent installation of such BPS electric equipment or programmable components in certain sections of the BPS.
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) added the People’s Republic of China’s (PRC) Semiconductor Manufacturing International Corporation (SMIC) to its Entity List in a move intended to starve the company of key U.S. technology needed to manufacture high end semiconductors. Therefore, any U.S. entity wishing to do business with SMIC will need a license which the Trump Administration may not be likely to grant. The Department of Commerce explained in its press release:
    • The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring U.S. exporters to apply for a license to sell to the company.  Items uniquely required to produce semiconductors at advanced technology nodes—10 nanometers or below—will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military-civil fusion efforts.
    • BIS also added more than sixty other entities to the Entity List for actions deemed contrary to the national security or foreign policy interest of the United States.  These include entities in China that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets.
    • As explained in the Federal Register notice:
      • SMIC is added to the Entity List as a result of China’s military-civil fusion (MCF) doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex. The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring exporters, reexporters, and in-country transferors of such technology to apply for a license to sell to the company. Items uniquely required to produce semiconductors at advanced technology nodes 10 nanometers or below will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military modernization efforts. This rule adds SMIC and the following ten entities related to SMIC: Semiconductor Manufacturing International (Beijing) Corporation; Semiconductor Manufacturing International (Tianjin) Corporation; Semiconductor Manufacturing International (Shenzhen) Corporation; SMIC Semiconductor Manufacturing (Shanghai) Co., Ltd.; SMIC Holdings Limited; Semiconductor Manufacturing South China Corporation; SMIC Northern Integrated Circuit Manufacturing (Beijing) Co., Ltd.; SMIC Hong Kong International Company Limited; SJ Semiconductor; and Ningbo Semiconductor International Corporation (NSI).
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) amended its Export Administration Regulations “by adding a new ‘Military End User’ (MEU) List, as well as the first tranche of 103 entities, which includes 58 Chinese and 45 Russian companies” per its press release. The Department asserted:
    • The U.S. Government has determined that these companies are ‘military end users’ for purposes of the ‘military end user’ control in the EAR that applies to specified items for exports, reexports, or transfers (in-country) to China, Russia, and Venezuela when such items are destined for a prohibited ‘military end user.’
  • The Australian Competition and Consumer Commission (ACCC) rolled out another piece of the Consumer Data Right (CDR) scheme under the Competition and Consumer Act 2010, specifically accreditation guidelines “to provide information and guidance to assist applicants with lodging a valid application to become an accredited person” to whom Australians may direct data holders to share their data. The ACCC explained:
    • The CDR aims to give consumers more access to and control over their personal data.
    • Being able to easily and efficiently share data will improve consumers’ ability to compare and switch between products and services and encourage competition between service providers, leading to more innovative products and services for consumers and the potential for lower prices.
    • Banking is the first sector to be brought into the CDR.
    • Accredited persons may receive a CDR consumer’s data from a data holder at the request and consent of the consumer. Any person, in Australia or overseas, who wishes to receive CDR data to provide products or services to consumers under the CDR regime, must be accredited.
  • Australia’s government has released its “Data Availability and Transparency Bill 2020” that “establishes a new data sharing scheme for federal government data, underpinned by strong safeguards to mitigate risks and simplified processes to make it easier to manage data sharing requests” according to the summary provided in Parliament by the government’s point person. In the accompanying “Explanatory Memorandum,” the following summary was provided:
    • The Bill establishes a new data sharing scheme which will serve as a pathway and regulatory framework for sharing public sector data. ‘Sharing’ involves providing controlled access to data, as distinct from open release to the public.
    • To oversee the scheme and support best practice, the Bill creates a new independent regulator, the National Data Commissioner (the Commissioner). The Commissioner’s role is modelled on other regulators such as the Australian Information Commissioner, with whom the Commissioner will cooperate.
    • The data sharing scheme comprises the Bill and disallowable legislative instruments (regulations, Minister-made rules, and any data codes issued by the Commissioner). The Commissioner may also issue non-legislative guidelines that participating entities must have regard to, and may release other guidance as necessary.
    • Participants in the scheme are known as data scheme entities:
      • Data custodians are Commonwealth bodies that control public sector data, and have the right to deal with that data.
      • Accredited users are entities accredited by the Commissioner to access public sector data. To become accredited, entities must satisfy the security, privacy, infrastructure and governance requirements set out in the accreditation framework.
      • Accredited data service providers (ADSPs) are entities accredited by the Commissioner to perform data services such as data integration. Government agencies and users will be able to draw upon ADSPs’ expertise to help them to share and use data safely.
    • The Bill does not compel sharing. Data custodians are responsible for assessing each sharing request, and deciding whether to share their data if satisfied the risks can be managed.
    • The data sharing scheme contains robust safeguards to ensure sharing occurs in a consistent and transparent manner, in accordance with community expectations. The Bill authorises data custodians to share public sector data with accredited users, directly or through an ADSP, where:
      • Sharing is for a permitted purpose – government service delivery, informing government policy and programs, or research and development;
      • The data sharing principles have been applied to manage the risks of sharing; and
      • The terms of the arrangement are recorded in a data sharing agreement.
    • Where the above requirements are met, the Bill provides limited statutory authority to share public sector data, despite other Commonwealth, State and Territory laws that prevent sharing. This override of non-disclosure laws is ‘limited’ because it occurs only when the Bill’s requirements are met, and only to the extent necessary to facilitate sharing.
  • The United Kingdom’s Competition and Markets Authority (CMA) is asking interested parties to provide input on the proposed acquisition of a British semiconductor company by a United States (U.S.) company before it launches a formal investigation later this year. However, the CMA is limited to competition considerations, and any national security aspects of the proposed deal would need to be investigated by Prime Minister Boris Johnson’s government. The CMA stated:
    • US-based chip designer and producer NVIDIA Corporation (NVIDIA) plans to purchase the Intellectual Property Group business of UK-based Arm Limited (Arm) in a deal worth $40 billion. Arm develops and licenses intellectual property (IP) and software tools for chip designs. The products and services supplied by the companies support a wide range of applications used by businesses and consumers across the UK, including desktop computers and mobile devices, game consoles and vehicle computer systems.
    • CMA added:
      • The CMA will look at the deal’s possible effect on competition in the UK. The CMA is likely to consider whether, following the takeover, Arm has an incentive to withdraw, raise prices or reduce the quality of its IP licensing services to NVIDIA’s rivals.
  • The Israeli firm NSO Group has been accused by an entity associated with a British university of using real-time cell phone data to sell its COVID-19 contact tracing app, Fleming, in ways that may have broken the laws of a handful of nations. Forensic Architecture, a research agency based at Goldsmiths, University of London, argued:
    • In March 2020, with the rise of COVID-19, Israeli cyber-weapons manufacturer NSO Group launched a contact-tracing technology named ‘Fleming’. Two months later, a database belonging to NSO’s Fleming program was found unprotected online. It contained more than five hundred thousand datapoints for more than thirty thousand distinct mobile phones. NSO Group denied there was a security breach. Forensic Architecture received and analysed a sample of the exposed database, which suggested that the data was based on ‘real’ personal data belonging to unsuspecting civilians, putting their private information at risk.
    • Forensic Architecture added:
      • Leaving a database with genuine location data unprotected is a serious violation of the applicable data protection laws. That a surveillance company with access to personal data could have overseen this breach is all the more concerning.
      • This could constitute a violation of the General Data Protection Regulation (GDPR) based on where the database was discovered as well as the laws of the nations where NSO Group allegedly collected personal data
    • The NSO Group denied the claims and was quoted by Tech Crunch:
      • “We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
      • “As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:”
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.



New Cybersecurity Law and Strategy Unveiled

The EU is revising and replacing a 2016 regime to govern cybersecurity across the bloc.

The European Union (EU) is floating a proposal to reform its 2016 law on cybersecurity to address gaps in the current regime. This proposal was released in concert with a new cybersecurity strategy and a statutory proposal to address physical (i.e. non-cyber) infrastructure. These proposals are the latest in a line of policy changes put forth by the EU’s new leadership to make this decade the EU’s Digital Decade. It may, however, take years for these proposals to become law. For example, the successor to the ePrivacy Directive has been held up in negotiations for the last few years.

New European Commission (EC) President Ursula von der Leyen spelled out her vision for the EU for 2019 through 2024, including “A Europe fit for the digital age.” In its February 2020 “Communication: Shaping Europe’s digital future,” the EC spelled out in greater detail how von der Leyen’s vision would be effectuated:

A European cybersecurity strategy, including the establishment of a joint Cybersecurity Unit, a Review of the Security of Network and Information Systems (NIS) Directive and giving a push to the single market for cybersecurity.

To this end, in mid-December 2020, the EC and the High Representative of the Union for Foreign Affairs and Security Policy unveiled a new EU Cybersecurity Strategy and “proposals to address both cyber and physical resilience of critical entities and networks: a Directive on measures for high common level of cybersecurity across the Union (revised NIS Directive or ‘NIS 2’), and a new Directive on the resilience of critical entities.”

Let us turn to the NIS 2 first. This proposal would replace the 2016 “Directive on security of network and information systems (NIS Directive)” ((EU) 2016/1148) currently in effect throughout the EU. NIS 2 would impose new obligations and responsibilities on EU member states and on essential and important entities. The nations of the EU would need to draft and implement cybersecurity frameworks/strategies, which include setting up vulnerability disclosure programs, voluntary cybersecurity information sharing programs, a policy to address information and communications technology (ICT) supply chain risk, and cybersecurity standards for publicly bought and used ICT. EU nations would also need to name “competent” national authorities to enforce NIS 2, for the EC identified lax or non-existent enforcement of existing cybersecurity laws as a rationale for the new proposal. Consequently, such authorities must be empowered to issue binding directives and, if necessary, warnings or instructions to cease certain conduct. These authorities must also work with data protection authorities in the event of data breaches. NIS 2 also provides for administrative fines and penalties to be established in the laws of EU nations.

Additionally, all EU nations would need to have computer security incident response teams (CSIRTs). NIS 2 would apply to a number of public and private entities in sectors deemed “essential”: energy; transport; banking; financial market infrastructures; health; drinking water; waste water; digital infrastructure; public administration; and space. Public and private entities in the following sectors would be “important” entities subject to NIS 2: postal and courier services; waste management; manufacture, production and distribution of chemicals; food production, processing and distribution; manufacturing; and digital providers. Micro and small entities would largely not be swept up into NIS 2 even if they are part of one of the aforementioned sectors. However, “providers of electronic communications networks or of publicly available electronic communications services, trust service providers, Top-level domain name (TLD) name registries and public administration, and certain other entities” would be governed by NIS 2 regardless of their size.

The EU would also establish a Cooperation Group that would be tasked with helping EU nations work more harmoniously under the NIS 2. However, this new body, unlike, say, the European Data Protection Board (EDPB) created by the General Data Protection Regulation (GDPR), would not have the power to compel its members to comply with NIS 2.

Notably, NIS 2 would require that “Member States shall ensure that essential and important entities shall take appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems which those entities use in the provision of their services.” The law lists a number of elements that must go into these measures. Moreover, Member States must ensure that “essential and important entities notify, without undue delay, the competent authorities or the CSIRT…of any incident having a significant impact on the provision of their services.” The NIS 2 lays out broad criteria as to what constitutes a “significant impact:”

  • the incident has caused or has the potential to cause substantial operational disruption or financial losses for the entity concerned;
  • the incident has affected or has the potential to affect other natural or legal persons by causing considerable material or non-material losses.

In order to address ICT supply chain risk, EU countries may elect to “require essential and important entities to certify certain ICT products, ICT services and ICT processes under specific European cybersecurity certification schemes adopted” under the legislation that created the European Union Agency for Cybersecurity (ENISA).

As noted earlier, EU nations must establish systems for essential and important entities to share information but need not compel them to do so. Article 26 provides that each nation “shall ensure that essential and important entities may exchange relevant cybersecurity information among themselves including information relating to cyber threats, vulnerabilities, indicators of compromise, tactics, techniques and procedures, cybersecurity alerts and configuration tools.” EU countries must also have a system through which entities that are neither essential nor important, or those from sectors not covered by NIS 2, can voluntarily submit information.

The EC argued that the NIS Directive is now outdated and is in desperate need of revision to reflect current realities:

Notwithstanding its notable achievements, the NIS Directive, which paved the way for a significant change in mind-set, in relation to the institutional and regulatory approach to cybersecurity in many Member States, has also proven its limitations. The digital transformation of society (intensified by the COVID-19 crisis) has expanded the threat landscape and is bringing about new challenges which require adapted and innovative responses. The number of cyber-attacks continues to rise, with increasingly sophisticated attacks coming from a wide range of sources inside and outside the EU.

The EC highlighted some of the limitations in how the NIS Directive has been implemented by EU member states and its failure to drive the adoption of better cyber practices by EU businesses:

The evaluation on the functioning of the NIS Directive, conducted for the purposes of the Impact Assessment, identified the following issues: (1) the low level of cyber resilience of businesses operating in the EU; (2) the inconsistent resilience across Member States and sectors; and (3) the low level of joint situational awareness and lack of joint crisis response. For example, certain major hospitals in a Member State do not fall within the scope of the NIS Directive and hence are not required to implement the resulting security measures, while in another Member State almost every single healthcare provider in the country is covered by the NIS security requirements.

The EC explained how the NIS 2 relates to a proposal released the same day to address physical infrastructure in the EU:

The proposal is therefore closely aligned with the proposal for a Directive on the resilience of critical entities, which aims at enhancing the resilience of critical entities against physical threats in a large number of sectors. The proposal aims to ensure that competent authorities under both legal acts take complementary measures and exchange information as necessary regarding cyber and non-cyber resilience, and that particularly critical operators in the sectors considered to be ‘essential’ per the proposal at hand are also subject to more general resilience-enhancing obligations with an emphasis on non-cyber risks.

The EC’s impact assessment on how well the NIS Directive is working shows limitations in scope and application, some of which may be attributed to changes in the EU and the world:

  • The scope of the NIS Directive is too limited in terms of the sectors covered, mainly due to: (i) increased digitisation in recent years and a higher degree of interconnectedness, (ii) the scope of the NIS Directive no longer reflecting all digitised sectors providing key services to the economy and society as a whole.
  • The NIS Directive is not sufficiently clear when it comes to the scope for operators of essential services and its provisions do not provide sufficient clarity regarding national competence over digital service providers. This has led to a situation in which certain types of entities have not been identified in all Member States and are therefore not required to put in place security measures and report incidents.
  • The NIS Directive allowed wide discretion to the Member States when laying down security and incident reporting requirements for operators of essential services (hereinafter called ‘OES(s)’). The evaluation shows that in some instances Member States have implemented these requirements in significantly different ways, creating additional burden for companies operating in more than one Member State.
  • The supervision and enforcement regime of the NIS Directive is ineffective. For example, Member States have been very reluctant to apply penalties to entities failing to put in place security requirements or report incidents. This can have negative consequences for the cyber resilience of individual entities.
  • The financial and human resources set aside by Member States for fulfilling their tasks (such as OES identification or supervision), and consequently the different levels of maturity in dealing with cybersecurity risks, vary greatly. This further exacerbates the differences in cyber resilience between Member States.
  • Member States do not share information systematically with one another, with negative consequences in particular for the effectiveness of the cybersecurity measures and for the level of joint situational awareness at EU level. This is also the case for information sharing among private entities, and for the engagement between the EU level cooperation structures and private entities.

The EC’s proposal contains a summary of what the new law would do:

  • The Directive, in particular: (a) lays down obligations for the Member States to adopt a national cybersecurity strategy, designate competent national authorities, single points of contact and CSIRTs; (b) provides that Member States shall lay down cybersecurity risk management and reporting obligations for entities referred to as essential entities in Annex I and important entities in Annex II; (c) provides that Member States shall lay down obligations on cybersecurity information sharing.
  • It applies to certain public or private essential entities operating in the sectors listed in Annex I (energy; transport; banking; financial market infrastructures; health; drinking water; waste water; digital infrastructure; public administration and space) and certain important entities operating in the sectors listed in Annex II (postal and courier services; waste management; manufacture, production and distribution of chemicals; food production, processing and distribution; manufacturing and digital providers). Micro and small entities within the meaning of Commission Recommendation 2003/361/EC of 6 May 2003 are excluded from the scope of the Directive, except for providers of electronic communications networks or of publicly available electronic communications services, trust service providers, top-level domain name (TLD) registries and public administration, and certain other entities, such as the sole provider of a service in a Member State.

The EC also released “The EU’s Cybersecurity Strategy for the Digital Decade” alongside the NIS 2 “to ensure a global and open Internet with strong guardrails to address the risks to the security and fundamental rights and freedoms of people in Europe.” The EC spelled out its dramatic plan to remake how the bloc regulates, invests in, and structures policies around cybersecurity. The EC claimed “[a]s a key component of Shaping Europe’s Digital Future, the Recovery Plan for Europe and the EU Security Union Strategy, the Strategy will bolster Europe’s collective resilience against cyber threats and help to ensure that all citizens and businesses can fully benefit from trustworthy and reliable services and digital tools.” If the EU follows through, this strategy may have significant effects in the EU and around the world.

The EC further explained:

  • Following the progress achieved under the previous strategies, it contains concrete proposals for deploying three principal instruments – regulatory, investment and policy instruments – to address three areas of EU action – (1) resilience, technological sovereignty and leadership, (2) building operational capacity to prevent, deter and respond, and (3) advancing a global and open cyberspace. The EU is committed to supporting this strategy through an unprecedented level of investment in the EU’s digital transition over the next seven years – potentially quadrupling previous levels – as part of new technological and industrial policies and the recovery agenda.
  • Cybersecurity must be integrated into all these digital investments, particularly key technologies like Artificial Intelligence (AI), encryption and quantum computing, using incentives, obligations and benchmarks. This can stimulate the growth of the European cybersecurity industry and provide the certainty needed to ease the phasing out of legacy systems. The European Defence Fund (EDF) will support European cyber defence solutions, as part of the European defence technological and industrial base. Cybersecurity is included in external financial instruments to support our partners, notably the Neighbourhood, Development and International Cooperation Instrument. Preventing the misuse of technologies, protecting critical infrastructure and ensuring the integrity of supply chains also enables the EU’s adherence to the UN norms, rules and principles of responsible state behavior.

Per the EC’s press release, the “Directive on the resilience of critical entities” “expands both the scope and depth of the 2008 European Critical Infrastructure directive.” The EC added:

Ten sectors are now covered: energy, transport, banking, financial market infrastructures, health, drinking water, waste water, digital infrastructure, public administration and space. Under the proposed directive, Member States would each adopt a national strategy for ensuring the resilience of critical entities and carry out regular risk assessments. These assessments would also help identify a smaller subset of critical entities that would be subject to obligations intended to enhance their resilience in the face of non-cyber risks, including entity-level risk assessments, taking technical and organisational measures, and incident notification. The Commission, in turn, would provide complementary support to Member States and critical entities, for instance by developing a Union-level overview of cross-border and cross-sectoral risks, best practice, methodologies, cross-border training activities and exercises to test the resilience of critical entities.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Prawny from Pixabay

Further Reading, Other Developments, and Coming Events (11 January 2021)

Further Reading

  • “Why the Russian hack is so significant, and why it’s close to a worst-case scenario” By Kevin Collier — NBC News. This article quotes experts who paint a very ugly picture for the United States (U.S.) in trying to recover from the Russian Federation’s hack. Firstly, the Russians are very good at what they do and likely built multiple backdoors in systems they would want to ensure they have access to after using SolarWinds’ update system to gain initial entry. Secondly, broadly speaking, at present, U.S. agencies and companies have two very unpalatable options: spend months hunting through their systems for any such backdoors or other issues, or rebuild their systems from scratch. The ramifications of this hack will continue to be felt well into the Biden Administration.
  • “The storming of Capitol Hill was organized on social media.” By Sheera Frenkel — The New York Times. As the repercussions of the riot and apparently attempted insurrection continue to be felt, one aspect that has received attention and will continue to receive attention is the role social media platforms played. Platforms used predominantly by right wing and extremist groups like Gab and Parler were used extensively to plan and execute the attack. This fact and the ongoing content moderation issues at larger platforms will surely inform the Section 230 and privacy legislation debates expected to occur this year and into the future.
  • “Comcast data cap blasted by lawmakers as it expands into 12 more states” By Jon Brodkin — Ars Technica. Comcast has extended to other states its 1.2TB cap on household broadband usage, and lawmakers in Massachusetts have written the company, claiming this will hurt low-income families working and schooling children at home. Comcast claims this affects only a small class of subscribers, so-called “super users.” In retrospect, such a move always seemed inevitable, as data is now the most valuable commodity.
  • “Finnish lawmakers’ emails hacked in suspected espionage incident” By Shannon Vavra — cyberscoop. Another legislature of a democratic nation has been hacked, and given the recent hacks of Norway’s Parliament and Germany’s Bundestag by the Russians, it may well turn out they were behind this hack that “obtain[ed] information either to benefit a foreign state or to harm Finland” according to Finland’s National Bureau of Investigation.
  • “Facebook Forced Its Employees To Stop Discussing Trump’s Coup Attempt” By Ryan Mac — BuzzFeed News. Reportedly, Facebook shut down internal dialogue about the misgivings voiced by employees about its response to the lies in President Donald Trump’s video and the platform’s role in creating the conditions that caused Trump supporters to storm the United States (U.S.) Capitol. Internally and externally, Facebook equivocated on whether it would go so far as Twitter in taking down Trump’s video and content.
  • “WhatsApp gives users an ultimatum: Share data with Facebook or stop using the app” By Dan Goodin — Ars Technica. Very likely in response to coming changes to the Apple iOS that will allow for greater control of privacy, Facebook is giving WhatsApp users a choice: accept its new terms of service that allow personal data to be shared with and used by Facebook, or have your account permanently deleted.
  • “Insecure wheels: Police turn to car data to destroy suspects’ alibis” By Olivia Solon — NBC News. Like any other computerized, connected device, cars are increasingly a source of evidence law enforcement (and likely intelligence agencies) use to investigate crimes. If you sync your phone via USB or Bluetooth, most modern cars will access your phone and store all sorts of personal data that can later be accessed. But other systems in cars can tell investigators where the car was, how heavy it was (i.e., how many people were in it), when doors opened, etc. And there are no specific federal or state laws in the United States mandating protection of these data.

Other Developments

  • The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Office of the Director of National Intelligence (ODNI), and the National Security Agency (NSA) issued a joint statement, finally naming the Russian Federation as the likely perpetrator of the massive SolarWinds hack. However, the agencies qualified the language, claiming:
    • This work indicates that an Advanced Persistent Threat (APT) actor, likely Russian in origin, is responsible for most or all of the recently discovered, ongoing cyber compromises of both government and non-governmental networks. At this time, we believe this was, and continues to be, an intelligence gathering effort.
      • Why the language is not more definitive is not clear. Perhaps the agencies are merely exercising caution about who is blamed for the attack. Perhaps the agencies do not want to anger a White House and President averse to reports of Russian hacking for fear it will be associated with the hacking during the 2016 election that aided the Trump Campaign.
      • However, it is noteworthy the agencies are stating their belief the hacking was related to “intelligence gathering,” suggesting the purpose of the incursions was not to destroy data or launch an attack. Presumably, such an assertion is meant to allay concerns that the Russian Federation intends to attack the United States (U.S.) as it did in Ukraine and Georgia in the last decade.
    • The Cyber Unified Coordination Group (UCG) convened per Presidential Policy Directive (PPD) 41 (which technically is the FBI, CISA, and the ODNI but not the NSA) asserted its belief that
      • of the approximately 18,000 affected public and private sector customers of SolarWinds’ Orion products, a much smaller number has been compromised by follow-on activity on their systems. We have so far identified fewer than 10 U.S. government agencies that fall into this category, and are working to identify the nongovernment entities who also may be impacted.
      • These findings are, of course, preliminary, and there may be incentives for the agencies to be less than forthcoming about what they know of the scope and impact of the hacking.
  • Federal Communications Commission (FCC) Chair Ajit Pai has said he will not proceed with a rulemaking to curtail 47 USC 230 (Section 230) in response to a petition the National Telecommunications and Information Administration (NTIA) filed at the direction of President Donald Trump. Pai remarked “I do not intend to move forward with the notice of proposed rule-making at the FCC,” explaining this was “in part, because given the results of the election, there’s simply not sufficient time to complete the administrative steps necessary in order to resolve the rule-making.” Pai cautioned Congress and the Biden Administration “to study and deliberate on [reforming Section 230] very seriously,” especially “the immunity provision.”
    • In October, Pai had announced the FCC would proceed with a notice and comment rulemaking based on the NTIA’s petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship,” was issued in late May after Twitter fact-checked two of President Donald Trump’s Tweets containing false claims about mail voting in California in response to the COVID-19 pandemic.
  • A House committee released its most recent assessment of federal cybersecurity and information technology (IT). The House Oversight Committee’s Government Operations Subcommittee released its 11th biannual scorecard under the “Federal Information Technology Acquisition Reform Act” (FITARA). The subcommittee stressed this “marks the first time in the Scorecard’s history that all 24 agencies included in the law have received A’s in a single category” and noted it is “the first time that a category will be retired.” Even though this assessment is labeled the FITARA Scorecard, it is actually a compilation of different metrics borne of other pieces of legislation and executive branch programs.
    • Additionally, 19 of the 24 agencies reviewed received A’s on the Data Center Optimization Initiative (DCOI).
    • However, four agencies received F’s on Agency Chief Information Officer (CIO) authority enhancements, measures aiming to fulfill one of the main purposes of FITARA: empowering agency CIOs as a means of better controlling and managing IT acquisition and usage. It has been an ongoing struggle to get agencies to comply with the letter and spirit of federal law and directives to do just this.
    • Five agencies got F’s and two agencies got D’s for failing to hit the schedule for transitioning off of “the expiring Networx, Washington Interagency Telecommunications System (WITS) 3, and Regional Local Service Agreement (LSA) contracts” to the General Services Administration’s (GSA) $50 billion Enterprise Infrastructure Solutions (EIS) contract. The GSA explained this program in a recent letter:
      • After March 31, 2020, GSA will disconnect agencies, in phases, to meet the September 30, 2022 milestone for 100% completion of transition. The first phase will include agencies that have been “non-responsive” to transition outreach from GSA. Future phases will be based on each agency’s status at that time and the individual circumstances impacting that agency’s transition progress, such as protests or pending contract modifications. The Agency Transition Sponsor will receive a notification before any services are disconnected, and there will be an opportunity for appeal.
  • A bipartisan quartet of United States Senators urged the Trump Administration in a letter to omit language in a trade agreement with the United Kingdom (UK) that mirrors the liability protection in 47 U.S.C. 230 (Section 230). Senators Rob Portman (R-OH), Mark R. Warner (D-VA), Richard Blumenthal (D-CT), and Charles E. Grassley (R-IA) argued to U.S. Trade Representative Ambassador Robert Lighthizer that a “safe harbor” like the one provided to technology companies for hosting or moderating third party content is outdated, not needed in a free trade agreement, contrary to the will of both the Congress and UK Parliament, and likely to be changed legislatively in the near future. However, left unsaid in the letter is the fact that Democrats and Republicans generally do not agree on how precisely to change Section 230. There may be consensus that change is needed, but what that change looks like is still a matter much in dispute.
    • Stakeholders in Congress were upset that the Trump Administration included language modeled on Section 230 in the United States-Mexico-Canada Agreement (USMCA), the modification of the North American Free Trade Agreement (NAFTA). For example, House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and then Ranking Member Greg Walden (R-OR) wrote Lighthizer, calling it “inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing” in Congress.
  • The Trump White House issued a new United States (U.S.) government strategy for advanced computing to replace the 2019 strategy. The “PIONEERING THE FUTURE ADVANCED COMPUTING ECOSYSTEM: A STRATEGIC PLAN” “envisions a future advanced computing ecosystem that provides the foundation for continuing American leadership in science and engineering, economic competitiveness, and national security.” The Administration asserted:
    • It develops a whole-of-nation approach based on input from government, academia, nonprofits, and industry sectors, and builds on the objectives and recommendations of the 2019 National Strategic Computing Initiative Update: Pioneering the Future of Computing. This strategic plan also identifies agency roles and responsibilities and describes essential operational and coordination structures necessary to support and implement its objectives. The plan outlines the following strategic objectives:
      • Utilize the future advanced computing ecosystem as a strategic resource spanning government, academia, nonprofits, and industry.
      • Establish an innovative, trusted, verified, usable, and sustainable software and data ecosystem.
      • Support foundational, applied, and translational research and development to drive the future of advanced computing and its applications.
      • Expand the diverse, capable, and flexible workforce that is critically needed to build and sustain the advanced computing ecosystem.
  • A federal court threw out a significant portion of a suit Apple brought against a security company, Corellium, that offers technology allowing security researchers to virtualize iOS in order to undertake research. The United States District Court for the Southern District of Florida summarized the case:
    • On August 15, 2019, Apple filed this lawsuit alleging that Corellium infringed Apple’s copyrights in iOS and circumvented its security measures in violation of the federal Digital Millennium Copyright Act (“DMCA”). Corellium denies that it has violated the DMCA or Apple’s copyrights. Corellium further argues that even if it used Apple’s copyrighted work, such use constitutes “fair use” and, therefore, is legally permissible.
    • The court found “that Corellium’s use of iOS constitutes fair use” but did not dismiss the DMCA claim, thus allowing Apple to proceed with that portion of the suit.
  • The Trump Administration issued a plan on how cloud computing could be marshalled to help federally funded artificial intelligence (AI) research and development (R&D). A select committee made four key recommendations that “should accelerate the use of cloud resources for AI R&D: (1) launch and support pilot projects to identify and explore the advantages and challenges associated with the use of commercial clouds in conducting federally funded AI research; (2) improve education and training opportunities to help researchers better leverage cloud resources for AI R&D; (3) catalog best practices in identity management and single-sign-on strategies to enable more effective use of the variety of commercial cloud resources for AI R&D; and (4) establish and publish best practices for the seamless use of different cloud platforms for AI R&D. Each recommendation, if adopted, should accelerate the use of cloud resources for AI R&D.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:”
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

UK and EU Defer Decision On Data Flows

Whether there will be an adequacy decision allowing the free flow of personal data under the GDPR from the EU to the recently departed UK has been punted. And, its recent status as a member of the EU notwithstanding, the UK might not get an adequacy decision.

In reaching agreement on many aspects of the United Kingdom’s (UK) exit from the European Union (EU), negotiators did not reach agreement on whether the EU would permit the personal data of EU persons to continue flowing to the UK under the easiest means possible. Instead, the EU and UK agreed to let the status quo continue until an adequacy decision is made or six months lapse. Data flows between the UK and EU were valued at more than £100 billion in 2017, according to British estimates, with the majority of this trade flowing from the UK to the EU.

Under the General Data Protection Regulation (GDPR), the personal data of EU persons can be transferred to other nations for most purposes once the European Commission (EC) has found that the other nation provides protections essentially equivalent to those granted in the EU. Of course, this has been an ongoing issue with data flows to the United States (U.S.), as two agreements (Safe Harbor and Privacy Shield) and their EC adequacy decisions were ruled illegal, in large part because, according to the EU’s highest court, U.S. law does not provide EU persons with the same rights they have in the EU. Most recently, this occurred in 2020 when the Court of Justice of the European Union (CJEU) struck down the adequacy decision underpinning the EU-United States Privacy Shield (aka Schrems II). It bears note that transfers of personal data may occur through other means under the GDPR that may prove more resource intensive: standard data protection clauses (SCCs), binding corporate rules (BCRs), and others.

Nevertheless, an adequacy decision is seen as the most desirable means of transfer, and the question of whether the UK’s laws are sufficient has lingered over the Brexit discussions, with some claiming that the nation’s membership in the Five Eyes surveillance alliance with the U.S. and others may disqualify the UK. Given the range of thorny issues the UK and EU punted (e.g. how to handle the border between Northern Ireland and Ireland), it is not surprising that the question of the GDPR and data flows was also punted.

The UK-EU Trade and Cooperation Agreement (TCA) explained the terms of the data flow agreement and, as noted, in the short term, the status quo will continue with data flows to the UK being treated as if the UK were still part of the EU. This state will persist until the EC reaches an adequacy decision or for four months, with another two months of the status quo possible in the absence of an adequacy decision so long as neither the UK nor the EU objects. Moreover, these provisions are operative only so long as the UK keeps its GDPR-compliant data protection law (i.e. the UK Data Protection Act 2018) in place and does not exercise the specified “designated powers” without the EU’s agreement. The UK has also deemed EU and European Economic Area (EEA) and European Free Trade Association (EFTA) nations to be adequate for purposes of data transfers from the UK on a transitional basis.

Specifically, the TCA provides:

For the duration of the specified period, transmission of personal data from the Union to the United Kingdom shall not be considered as transfer to a third country under Union law, provided that the data protection legislation of the United Kingdom on 31 December 2020, as it is saved and incorporated into United Kingdom law by the European Union (Withdrawal) Act 2018 and as modified by the Data Protection, Privacy and Electronic Communications (Amendments etc) (EU Exit) Regulations 2019 (“the applicable data protection regime”), applies and provided that the United Kingdom does not exercise the designated powers without the agreement of the Union within the Partnership Council.

The UK also agreed to notify the EU if it “enters into a new instrument which can be relied on to transfer personal data to a third country under Article 46(2)(a) of the UK GDPR or section 75(1)(a) of the UK Data Protection Act 2018 during the specified period.” However, if the EU were to object, it appears from the terms of the TCA, all the EU could do is force the UK “to discuss the relevant object.” And yet, should the UK sign a treaty allowing personal data to flow to a nation the EU deems inadequate, this could obviously adversely affect the UK’s prospects of getting an adequacy decision.

Not surprisingly, the agreement also pertains to the continued flow of personal data as part of criminal investigations and law enforcement matters but not national security matters. Moreover, these matters fall outside the scope of the GDPR and would not be affected in many ways by an adequacy decision or a lack of one. In a British government summary, it is stated that the TCA

provide[s] for law enforcement and judicial cooperation between the UK, the EU and its Member States in relation to the prevention, investigation, detection and prosecution of criminal offences and the prevention of and fight against money laundering and financing of terrorism.

The text of the TCA makes clear national security matters vis-à-vis data flows and information sharing are not covered:

This Part only applies to law enforcement and judicial cooperation in criminal matters taking place exclusively between the United Kingdom, on the one side, and the Union and the Member States, on the other side. It does not apply to situations arising between the Member States, or between Member States and Union institutions, bodies, offices and agencies, nor does it apply to the activities of authorities with responsibilities for safeguarding national security when acting in that field.

The TCA also affirms:

  • The cooperation provided for in this Part is based on the Parties’ long-standing commitment to ensuring a high level of protection of personal data.
  • To reflect that high level of protection, the Parties shall ensure that personal data processed under this Part is subject to effective safeguards in the Parties’ respective data protection regimes…

The United Kingdom’s data protection authority (DPA), the Information Commissioner’s Office (ICO), issued an explanation of how British law enforcement entities should act in light of the TCA. The ICO explained to British entities receiving law enforcement-related data transfers from the EU:

  • We are now a ‘third country’ for EU data protection purposes. If you receive personal data from a law enforcement partner in the EU, this means the sender will need to comply with the transfer provisions under their national data protection law (which are likely to be similar to those in Part 3 of the DPA 2018).
  • This means the EU sender needs to make sure other appropriate safeguards are in place – probably through a contract or other binding legal instrument, or by making their own assessment of appropriate safeguards. The sender can take into account the protection provided by the DPA 2018 itself when making this assessment.
  • If you receive personal data from other types of organisations in the EU or EEA who are subject to the GDPR, the sender will need to comply with the transfer provisions of the UK GDPR. You may want to consider putting standard contractual clauses (SCCs) in place to ensure adequate safeguards in these cases. We have produced an interactive tool to help you use the SCCs.

The ICO explained for transfers from the UK to the EU (but not the EEA):

  • There is a transitional adequacy decision in place to cover transfers to EU member states and Gibraltar. This will not extend to EEA countries outside the EU, where you should continue to consider other safeguards.
  • This means you can continue to send personal data from the UK to your law enforcement partners in the EU, as long as you can show the transfer is necessary for law enforcement purposes. You can also transfer personal data to non-law enforcement bodies in the EU if you can meet some additional conditions, but you will need to notify the ICO.

Turning back to an adequacy decision and commercial transfers of personal data from the EU to the UK, in what may well be a preview of a world in which there is no adequacy decision between the UK and EU, the European Data Protection Board (EDPB) issued an “information note” in mid-December that spells out how the GDPR would be applied:

  • In the absence of an adequacy decision applicable to the UK as per Article 45 GDPR, such transfers will require appropriate safeguards (e.g., standard data protection clauses, binding corporate rules, codes of conduct…), as well as enforceable data subject rights and effective legal remedies for data subjects, in accordance with Article 46 GDPR.
  • Subject to specific conditions, it may still be possible to transfer personal data to the UK based on a derogation listed in Article 49 GDPR. However, Article 49 GDPR has an exceptional nature and the derogations it contains must be interpreted restrictively and mainly relate to processing activities that are occasional and non-repetitive.
  • Moreover, where personal data are transferred to the UK on the basis of Article 46 GDPR safeguards, supplementary measures might be necessary to bring the level of protection of the data transferred up to the EU standard of essential equivalence, in accordance with the Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data.

Regarding commercial data transfers, the ICO issued a statement urging British entities to start setting up “alternative transfer mechanisms” to ensure data continues to flow from the EU to UK:

  • The Government has announced that the Treaty agreed with the EU will allow personal data to flow freely from the EU (and EEA) to the UK, until adequacy decisions have been adopted, for no more than six months.
  • This will enable businesses and public bodies across all sectors to continue to freely receive data from the EU (and EEA), including law enforcement agencies.
  • As a sensible precaution, before and during this period, the ICO recommends that businesses work with EU and EEA organisations who transfer personal data to them, to put in place alternative transfer mechanisms, to safeguard against any interruption to the free flow of EU to UK personal data.

However, even though these more restrictive means of transferring personal data to the UK exist, there will likely be legal challenges. It bears note that, in light of Schrems II, EU DPAs are likely to apply a much higher level of scrutiny to SCCs, and challenges to the legality of using SCCs to transfer personal data to the U.S. have already commenced. It also seems certain the legality of using SCCs to transfer data to the UK would be challenged as well.

However, returning to the preliminary issue of whether the EC will give the UK an adequacy decision, there may be a number of obstacles to a finding that the UK’s data protection and surveillance laws are indeed adequate under EU law[1]. Firstly, the UK’s surveillance practices in light of a recent set of CJEU rulings may prove difficult for the EC to stomach. In 2020, the CJEU handed down a pair of rulings (here and here) on the extent to which European Union (EU) nations may engage in bulk, indiscriminate collection of two types of data related to electronic communications. The CJEU found that while EU member nations may conduct these activities to combat crime or national security threats during periods limited by necessity and subject to oversight, they may not generally require the providers of electronic communications to store and provide indiscriminate location and traffic data in response to an actual or prospective national security danger. The CJEU combined three cases from the UK, France, and Belgium into two rulings to elucidate the reach of the Privacy and Electronic Communications Directive in relation to foundational EU laws.

The UK is, of course, one of the U.S.’s staunchest allies and partners when it comes to government surveillance of electronic communications. On this point, the CJEU summarized the beginning of the case out of the UK:

  • At the beginning of 2015, the existence of practices for the acquisition and use of bulk communications data by the various security and intelligence agencies of the United Kingdom, namely GCHQ, MI5 and MI6, was made public, including in a report by the Intelligence and Security Committee of Parliament (United Kingdom). On 5 June 2015, Privacy International, a non-governmental organisation, brought an action before the Investigatory Powers Tribunal (United Kingdom) against the Secretary of State for Foreign and Commonwealth Affairs, the Secretary of State for the Home Department and those security and intelligence agencies, challenging the lawfulness of those practices.

Secondly, the government of Prime Minister Boris Johnson may aspire to change data laws in ways the EU does not. In media accounts, unnamed EC officials were critical of the UK’s 2020 “National Data Strategy,” particularly references to “legal barriers (real and perceived)” to accessing data that “must be addressed.”

Thirdly, it may become a matter of politics. The EU has incentives to make the UK’s exit from the EU difficult to dissuade other nations from following the same path. Moreover, having previously been the second largest economy in the EU as measured by GDP, the UK may prove a formidable economic competitor, lending more weight to the view that the EU may not want to help the UK’s businesses compete with the EU’s.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by succo from Pixabay


[1] European Union Parliament, “The EU-UK relationship beyond Brexit: options for Police Cooperation and Judicial Cooperation in Criminal Matters,” Page 8: Although the UK legal framework is currently broadly in line with the EU legal framework and the UK is a signatory to the European Convention on Human Rights (ECHR), there are substantial questions over whether the Data Protection Act fully incorporates the data protection elements required by the Charter of Fundamental Rights, concerning the use of the national security exemption from the GDPR used by the UK, the retention of data and bulk powers granted to its security services, and over its onward transfer of this data to third country security partners such as the ‘Five Eyes’ partners (Britain, the USA, Australia, New Zealand and Canada).

Social Media Reckoning

The U.S. President was widely banned from social media platforms for his role in the attack on the U.S. Capitol during the certification of President-elect Joe Biden’s win.

Because of the role President Donald Trump played directly and indirectly in the events of 6 January 2021 at the United States (U.S.) Capitol, Twitter, Facebook, and other technology companies took steps to limit Trump’s usage of their platforms, some temporarily and some permanently. Other right wing and extremist figures were also banned in a flurry over a period of a few days. To be sure, these decisions were greeted with praise and criticism across the political spectrum in the U.S. in much the same ways as decisions to moderate, comment upon, or block materials in the run up to the election were. These decisions will alternately be seen as further censoring of views from the right, as too little too late to many on the left, and as social media platforms seeking favor with the new government coming into power on 20 January, which will be able to get its nominees through the Senate and thus potentially pursue greater regulation.

As a legal matter, it seems to be settled law that 47 USC 230 gives the platforms complete legal protection for removing and moderating content and users. Moreover, recent Supreme Court jurisprudence makes clear that private actors are not bound by the First Amendment’s guarantee of free speech that is binding on government entities and actors. Whether this is the proper construction of the First Amendment is a different issue. A number of precedents from the mid-20th Century allowed for the exercise and protection of free speech on private property that was functionally considered public property (e.g., neighborhoods or shopping malls). Perhaps this is where U.S. policy and law will go, but, for now, it seems entirely legal for Twitter, Facebook, and others to ban users if they choose, as they are not government actors.

As mentioned, these actions are sure to inform any action Congress and the Biden Administration consider with respect to reform of 47 USC 230 (Section 230). There may be shared concern about the power that tech giants have in deciding which people can post material on widely used platforms. There is also likely to be disagreement about how Section 230 should be reformed, with conservatives consistently claiming without substantial evidence that the Twitters of the world unfairly discriminate against them and liberals decrying the widespread lack of action taken by platforms about the abuse women, minorities, and others suffer online. It remains to be seen whether and how Democrats and Republicans can bridge their differences. In any event, President-elect Joe Biden famously asserted in an interview that he believes Section 230 should be entirely repealed. Whether this is his White House’s policy position is not clear at this point. However, the decisions of these platforms this past week will definitely be part of any policy and political debate.

In response to the lies President Donald Trump told about the 2020 Presidential Election in a video ostensibly meant to calm the rioters who took over the U.S. Capitol, Twitter took the unprecedented step of blocking Trump from his account for 12 hours, and possibly longer, until he took down the untrue, inflammatory content. Twitter’s Safety account tweeted:

As a result of the unprecedented and ongoing violent situation in Washington, D.C., we have required the removal of three @realDonaldTrump Tweets that were posted earlier today for repeated and severe violations of our Civic Integrity policy. This means that the account of @realDonaldTrump will be locked for 12 hours following the removal of these Tweets. If the Tweets are not removed, the account will remain locked.

Early on 7 January, Trump removed the tweets that led to his suspension.

In a blog posting, Twitter announced a permanent ban of Trump’s account:

After close review of recent Tweets from the @realDonaldTrump account and the context around them — specifically how they are being received and interpreted on and off Twitter — we have permanently suspended the account due to the risk of further incitement of violence. 

Twitter cited these two tweets as violating their policies when read in the context of events on 6 January 2021:

  • “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!”
  • “To all of those who have asked, I will not be going to the Inauguration on January 20th.”

Twitter concluded: “our determination is that the two Tweets above are likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so.”

Twitter also moved to permanently ban Trump allies former Lieutenant General Michael Flynn and lawyer Sidney Powell. Additionally, Google’s YouTube banned former White House advisor Steve Bannon for violating its policy that affords a certain number of strikes within a 90-day period. Bannon had hosted Rudy Giuliani after the 6 January attack, and hours later YouTube acted. The platform explained: “In accordance with our strikes system, we have terminated Steve Bannon’s channel ‘War room’ and one associated channel for repeatedly violating our Community Guidelines.”

Snapchat also announced it had locked Trump’s account. In mid-2020, the platform had stopped promoting Trump’s snaps after he made statements on Snapchat opposing the protests against police brutality.

Facebook has also banned Trump. At first, on the day rioters stormed the Capitol, Facebook “removed from Facebook and Instagram the recent video of President Trump speaking about the protests and his subsequent post about the election results…[on the rationale that] these posts contribute to, rather than diminish, the risk of ongoing violence.” Later that day, Facebook explained “[w]e’ve assessed two policy violations against President Trump’s Page which will result in a 24-hour feature block, meaning he will lose the ability to post on the platform during that time.” On the morning of 7 January, CEO and Chairman of the Board Mark Zuckerberg extended the ban for the duration of Trump’s tenure as President:

We believe the risks of allowing President Trump to continue to use our service during this period are simply too great, so we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks.

Reddit also shut down its largest Donald Trump subreddit “r/donaldtrump” for promoting and inciting violence.

Thereafter, the app Parler, which fancies itself the conservative version of Twitter, was essentially shut down by Apple, Google, and Amazon for its role in the attack on the U.S. Capitol. Its estimated 8-10 million users in the U.S. skew right, a significant number of whom are white supremacists, and many news accounts of 6 January claim the insurgents used Parler and Gab. Over the weekend, Google first removed the app from its Play Store, and then Apple warned the app had 24 hours to address its terms of service violations. Thereafter, Apple followed Google’s lead and banned the app. Of course, just because an app is banned from the two major app stores does not mean it cannot be used; rather, it means new users cannot download it from Apple or Google, and, more crucially, currently installed Parler apps cannot be updated. However, the Parler operation has been shut down by another tech giant for an indefinite amount of time.

On 9 January, Amazon Web Services (AWS) emailed Parler, letting it know it had violated the terms of service under which AWS was hosting Parler’s website. Consequently, Amazon informed Parler “[r]ecently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms.” Amazon added “[i]t’s clear that Parler does not have an effective process to comply with the AWS terms of service.” Parler may be able to find another company to host its operations, but this would take time. On 11 January, Parler sued Amazon, alleging AWS engaged in anti-competitive conduct in pulling its web-hosting services.

The reception in Congress has split along partisan lines. The chair of the House subcommittee with primary jurisdiction over Section 230 made clear Facebook and Twitter acted too late. Representative Jan Schakowsky (D-IL) claimed in her statement:

Today’s actions by Facebook and Twitter are too little, too late, in light of yesterday’s violence, as is almost always the case. Think–only after the Senate and White House flip, and Trump has two weeks left in office–does Facebook pretend to show the minimal amount of bravery.

Schakowsky chairs the Consumer Protection and Commerce Subcommittee of the House Energy and Commerce Committee.

The outgoing chair of the Senate Judiciary Committee Senator Lindsey Graham (R-SC) tweeted:

I’m more determined than ever to strip Section 230 protections from Big Tech (Twitter) that let them be immune from lawsuits. Big Tech are the only companies in America that virtually have absolute immunity from being sued for their actions, and it’s only because Congress gave them that protection.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Tibor Janosi Mozes from Pixabay

New Google Antitrust Suits Filed

Two new suits have been filed against Google by state attorneys general. If the conduct detailed is not illegal behavior, get ready for even more shocking conduct from technology companies to stymie competitors and extract the maximum of any and all rents.

Last month, two new suits were filed against Google, arguing that the company has abused its dominance in the search engine and online advertising markets. One suit is led by Colorado’s attorney general and the other by Texas’ attorney general. The two suits have overlapping but different foci, and it is possible these new suits get folded into the suit against Google filed by the United States (U.S.) Department of Justice (DOJ). There are also media reports that some of the states that brought these suits may be preparing yet another antitrust action against Google over allegedly anticompetitive behavior in how it operates its Google Play app store.

Colorado Attorney General Phil Weiser and 38 other state attorneys general[1] filed their antitrust suit in the United States District Court for the District of Columbia “under Section 2 of the Sherman Act, 15 U.S.C. § 2, to restrain Google from unlawfully restraining trade and maintaining monopolies in markets that include general search services, general search text advertising, and general search advertising in the United States, and to remedy the effects of this conduct.” They are asking the court for a range of relief, including but not limited to permanent injunctions to stop ongoing and future anti-competitive conduct and a possible breakup of the company.

Weiser and his counterparts framed their argument this way:

Google, one of the largest companies in the world, has methodically undertaken actions to entrench and reinforce its general search services and search-related advertising monopolies by stifling competition. As the gateway to the internet, Google has systematically degraded the ability of other companies to access consumers. In doing so, just as Microsoft improperly maintained its monopoly through conduct directed at Netscape, Google has improperly maintained and extended its search-related monopolies through exclusionary conduct that has harmed consumers, advertisers, and the competitive process itself. Google, moreover, cannot establish business justifications or procompetitive benefits sufficient to justify its exclusionary conduct in any relevant market.

They summed up their legal argument, identifying three forms of anticompetitive conduct by Google:

  • First, Google uses its massive financial resources to limit the number of consumers who use a Google competitor. For example, according to public estimates Google pays Apple between $8 and $12 billion per year to ensure that Google is enthroned as the default search engine on Apple devices, and it limits general search competition on Android devices with a web of restrictive contracts. Google pursues similar strategies with other devices, such as voice assistants and internet-connected cars.
  • Second, Google’s Search Ads 360 (“SA360”) service, a search advertising marketing tool used by many of the world’s most sophisticated advertisers, has long pledged to offer advertisers a “neutral” means for purchasing and comparing the performance of not only Google’s search advertising, but also that of its closest competitors. But, in reality, Google operates SA360—the single largest such tool used by advertisers—to severely limit the tool’s interoperability with a competitor, thereby disadvantaging SA360 advertisers.
  • Third, Google throttles consumers from bypassing its general search engine and going directly to their chosen destination, especially when those destinations threaten Google’s monopoly power. Google acknowledges its [REDACTED] because of the proliferation of services offered by specialized vertical providers. Specialized vertical providers, like an online travel agency who offer consumers the ability to complete a transaction then and there, do not compete in Google’s search-related markets. Nevertheless, they pose a threat to Google’s monopoly power in those markets because their success would both strengthen general search rivals with whom they partner and lower the artificially high barriers to expansion and entry that protect Google’s monopolies.

In summary, Weiser and his colleagues argued:

  • Google has willfully maintained, abused, and extended its monopoly power in general search services through (a) anticompetitive and exclusionary distribution agreements that lock up the present default positions for search access points on browsers, mobile devices, computers, and other devices as well as emerging device technology; require preinstallation and prominent placement of Google’s apps; and tie Google’s search access points to Google Play and Google APIs; (b) operation of SA360 to limit the tool’s interoperability with a competitor, disadvantaging SA360 advertisers; (c) discriminatory treatment towards specialized vertical providers in certain commercial segments that hinders consumers’ ability to find responsive information; and (d) other restrictions that drive queries to Google at the expense of search rivals.
  • Google has willfully maintained, abused, and extended its monopoly power in general search advertising through (a) anticompetitive and exclusionary distribution agreements that lock up the present default positions for search access points on browsers, mobile devices, computers, and other devices as well as emerging device technology; require preinstallation and prominent placement of Google’s apps; and tie Google’s search access points to Google Play and Google APIs; (b) operation of SA360 to limit the tool’s interoperability with a competitor, disadvantaging SA360 advertisers; (c) discriminatory treatment towards specialized vertical providers in certain commercial segments that hinders consumers’ ability to find responsive information; and (d) other restrictions that drive queries to Google at the expense of search rivals.
  • Google has willfully maintained, abused, and extended its monopoly power in general search text advertising through (a) anticompetitive and exclusionary distribution agreements that lock up the present default positions for search access points on browsers, mobile devices, computers, and other devices as well as emerging device technology; require preinstallation and prominent placement of Google’s apps; and tie Google’s search access points to Google Play and Google APIs; (b) operation of SA360 to limit the tool’s interoperability with a competitor, disadvantaging SA360 advertisers; (c) discriminatory treatment towards specialized vertical providers in certain commercial segments that hinders consumers’ ability to find responsive information; and (d) other restrictions that drive queries to Google at the expense of search rivals.

Texas Attorney General Ken Paxton and nine other attorneys general[2] filed their antitrust action in the Eastern District of Texas and dropped a bomb: they allege Google and Facebook conspired to monopolize the online advertising market after publishers had devised a system to blunt Google’s dominance. Moreover, Paxton and his colleagues argue that Google’s illegal actions have essentially taxed Americans through higher prices and lower quality products and services because companies are forced to pay a premium to Google to advertise online.

Paxton and the attorneys general summarized their suit and the relief they think appropriate in light of Google’s conduct:

As a result of Google’s anticompetitive conduct, including its unlawful agreement with Facebook, Google has violated and continues to violate Sections 1 and 2 of the Sherman Act, 15 U.S.C. §§ 1, 2. Plaintiff States bring this action to remove the veil of Google’s secret practices and end Google’s abuse of its monopoly power in online advertising markets. Plaintiff States seek to restore free and fair competition to these markets and to secure structural, behavioral, and monetary relief to prevent Google from ever again engaging in deceptive trade practices and abusing its monopoly power to foreclose competition and harm consumers.

They summed up the harm they think Google has wrought:

Plaintiff States have sustained antitrust injury as a direct and proximate cause of Google’s unlawful conduct, in at least the following ways: (1) substantially foreclosing competition in the market for publisher ad servers, and using market power in the publisher ad server market to harm competition in the exchange market; (2) substantially foreclosing competition in the exchange market by denying rivals’ access to publisher inventory and to advertiser demand; (3) substantially foreclosing competition in the market for demand-side buying tools by creating information asymmetry and unfair auctions by virtue of Google’s market dominance in the publisher ad serving tools and exchange markets; (4) increasing barriers to entry and competition in publisher ad server, exchange, and demand-side buying tools markets; (5) harming innovation, which would otherwise benefit publishers, advertisers and competitors; (6) harming publishers’ ability to effectively monetize their content, reducing publishers’ revenues, and thereby reducing output and harming consumers; (7) reducing advertiser demand and participation in the market by maintaining opacity on margins and selling process, harming rival exchanges and buying tools; (8) increasing advertisers’ costs to advertise and reducing the effectiveness of their advertising, and thereby harming businesses’ return on the investment in delivering their products and services, reducing output, and harming consumers; (9) protecting Google’s products from competitive pressures, thereby allowing it to continue to extract high margins while shielded from significant pressure to innovate.

With regard to another possible antitrust action against Google, the suit Epic Games brought against the tech giant for taking 30% of in-app purchases as a condition of being allowed in the Play Store may shed light on what such a suit could look like. In August, Epic Games filed a suit against Google on substantially the same grounds as the one it is bringing against Apple. Google acted after Apple did to remove Fortnite from its Play Store once Epic Games started offering users a discounted price to buy directly from Epic as opposed to through Google. Epic asserted:

  • Epic brings claims under Sections 1 and 2 of the Sherman Act and under California law to end Google’s unlawful monopolization and anti-competitive restraints in two separate markets: (1) the market for the distribution of mobile apps to Android users and (2) the market for processing payments for digital content within Android mobile apps. Epic seeks to end Google’s unfair, monopolistic and anti-competitive actions in each of these markets, which harm device makers, app developers, app distributors, payment processors, and consumers.
  • Epic does not seek monetary compensation from this Court for the injuries it has suffered. Epic likewise does not seek a side deal or favorable treatment from Google for itself. Instead, Epic seeks injunctive relief that would deliver Google’s broken promise: an open, competitive Android ecosystem for all users and industry participants. Such injunctive relief is sorely needed.
  • Google has eliminated competition in the distribution of Android apps using myriad contractual and technical barriers. Google’s actions force app developers and consumers into Google’s own monopolized “app store”—the Google Play Store. Google has thus installed itself as an unavoidable middleman for app developers who wish to reach Android users and vice versa. Google uses this monopoly power to impose a tax that siphons monopoly profits for itself every time an app developer transacts with a consumer for the sale of an app or in-app digital content. And Google further siphons off all user data exchanged in such transactions, to benefit its own app designs and advertising business.
  • If not for Google’s anti-competitive behavior, the Android ecosystem could live up to Google’s promise of open competition, providing Android users and developers with competing app stores that offer more innovation, significantly lower prices and a choice of payment processors. Such an open system is not hard to imagine. Two decades ago, through the actions of courts and regulators, Microsoft was forced to open up the Windows for PC ecosystem. As a result, PC users have multiple options for downloading software unto their computers, either directly from developers’ websites or from several competing stores. No single entity controls the ecosystem or imposes a tax on all transactions. And Google, as the developer of software such as the Chrome browser, is a direct beneficiary of this competitive landscape. Android users and developers likewise deserve free and fair competition.

In late October, the DOJ and a number of states filed a long-awaited antitrust suit against Google that had been rumored to be coming since late summer 2020. This antitrust action centers on Google’s practice of making its search engine the default on Android devices and of paying browser makers and other technology entities to make Google their default search engine. The DOJ and eleven state attorneys general are following in the footsteps of the European Union’s (EU) €4.34 billion fine of Google in 2018 for imposing “illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search.” The European Commission (EC or Commission) claimed that, among other things, Google:

  • has required manufacturers to pre-install the Google Search app and browser app (Chrome), as a condition for licensing Google’s app store (the Play Store);
  • made payments to certain large manufacturers and mobile network operators on condition that they exclusively pre-installed the Google Search app on their devices; and
  • has prevented manufacturers wishing to pre-install Google apps from selling even a single smart mobile device running on alternative versions of Android that were not approved by Google (so-called “Android forks”).

The EC said its “decision concludes that Google is dominant in the markets for general internet search services, licensable smart mobile operating systems and app stores for the Android mobile operating system.”

And, of course, this is not the first antitrust case Google has faced in the EU; the company was fined €2.42 billion in June 2017 “for abusing its dominance as a search engine by giving an illegal advantage to Google’s own comparison shopping service.”

Google’s antitrust and anticompetitive issues are not confined to the United States and the EU. In 2019, the Australian Competition and Consumer Commission (ACCC) announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses,” according to the agency’s press release. In its initial filing, the ACCC claimed that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services did not know their location information was still being collected and used by Google, as it was not readily apparent that Web & App Activity also needed to be switched off.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Hebi B. from Pixabay


[1] The following states are parties to the suit: Colorado, Nebraska, Arizona, Iowa, New York, North Carolina, Tennessee, Utah, Alaska, Connecticut, Delaware, Hawaii, Idaho, Illinois, Kansas, Maine, Maryland, Minnesota, Nevada, New Hampshire, New Jersey, New Mexico, North Dakota, Ohio, Oklahoma, Oregon, Rhode Island, South Dakota, Vermont, Washington, West Virginia, and Wyoming; the Commonwealths of Massachusetts, Pennsylvania, Puerto Rico, and Virginia; the Territory of Guam; and the District of Columbia.

[2] These states sued Google: Texas, Arkansas, Idaho, Indiana, Mississippi, Missouri, North Dakota, South Dakota, Utah, and the Commonwealth of Kentucky.

Further Reading, Other Developments, and Coming Events (5 January 2021)

Further Reading

  • “China Used Stolen Data To Expose CIA Operatives In Africa And Europe;” “Beijing Ransacked Data as U.S. Sources Went Dark in China;” “Tech Giants Are Giving China A Vital Edge In Espionage” By Zach Dorfman — Foreign Policy. This terrifying trio of articles lays bare the 180-degree change in espionage advantage the People’s Republic of China (PRC) seems to hold over the United States (U.S.). Hacking, big data, processing, algorithms, and other technological issues play prominent roles in the PRC’s seeming advantage. It remains to be seen how the U.S. responds to the new status quo.
  • “Singapore police can access COVID-19 contact tracing data for criminal investigations” By Eileen Yu — ZDNet. During questioning in Singapore’s Parliament, it was revealed the police can use existing authority to access the data on a person’s smartphone collected by the nation’s TraceTogether app. Technically, this would entail a person being asked by the police to upload their data, which is stored on devices and encrypted. Nonetheless, this is the very scenario privacy advocates have been saying is all but inevitable with COVID-19 tracing apps on phones.
  • “As Understanding of Russian Hacking Grows, So Does Alarm” By David Sanger, Nicole Perlroth, and Julian Barnes — The New York Times. Like a detonated bomb, the Russian hack of United States (U.S.) public and private systems keeps getting worse in terms of damage and fallout. The scope continues to widen as it may come to pass that thousands of U.S. entities have been compromised in ways that leave them vulnerable to future attacks. Incidentally, the massive hack has tarnished somewhat the triumph of the U.S. intelligence agencies in fending off interference with the 2020 election.
  • “Google workers launch unconventional union with help of Communications Workers of America” By Nitasha Tiku — The Washington Post. A new union formed in Google stopped short of seeking certification by the National Labor Relations Board (NLRB), which will block it from collective bargaining. Nonetheless, the new union will collect dues and have a board of directors. This may lead to additional unionizing efforts in union-averse Silicon Valley and throughout the tech world.
  • “‘Break up the groupthink’: Democrats press Biden to diversify his tech picks” By Cristiano Lima — Politico. Key Democratic groups in the House are pushing the Biden team to appoint people of color for key technology positions at agencies such as the Federal Trade Commission (FTC), the Federal Communications Commission (FCC), and the Office of Science and Technology Policy (OSTP).

Other Developments

  • The Congress overrode President Donald Trump’s veto of the FY 2021 National Defense Authorization Act (NDAA), thus enacting the annual defense and national security policy bill, which includes a number of technology provisions that will have effects in the public and private sectors. (See here and here for analysis of these provisions in the “William M. “Mac” Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395).)
  • A federal court dismissed a lawsuit brought by a civil liberties and privacy advocacy group to stop implementation of President Donald Trump’s executive order aimed at social media companies and their liability protection under 47 USC 230 (aka Section 230). In June, the Center for Democracy and Technology (CDT) filed suit in federal court to block enforcement of the “Executive Order (EO) on Preventing Online Censorship.” However, the United States District Court for the District of Columbia ruled that CDT was not injured by the executive order (EO) and that any such lawsuit was premature. The court dismissed the lawsuit for lack of jurisdiction.
    • In its complaint, CDT argued the EO “violates the First Amendment in two fundamental respects:
      • First, the Order is plainly retaliatory: it attacks a private company, Twitter, for exercising its First Amendment right to comment on the President’s statements.
      • Second, and more fundamentally, the Order seeks to curtail and chill the constitutionally protected speech of all online platforms and individuals— by demonstrating the willingness to use government authority to retaliate against those who criticize the government.”
  • The Federal Trade Commission (FTC) reached a settlement with a company that sells emergency travel and medical services for failing “to take reasonable steps to secure sensitive consumer information such as health records,” including keeping an unsecured cloud database, which a security researcher stumbled upon, that held the sensitive data of more than 130,000 people. Moreover, the company claimed to be certified as compliant with the Health Insurance Portability and Accountability Act (HIPAA), which turned out to be untrue. In the complaint, the FTC alleged that these and other practices “constitute unfair and/or deceptive acts or practices, in or affecting commerce in violation of Section 5(a) of the Federal Trade Commission Act.” The FTC and the company reached agreement on a consent order that will require the company’s compliance for at least 20 years.
    • In the complaint, the FTC stated that SkyMed “advertises, offers for sale, and sells nationwide a wide array of emergency travel membership plans that cover up to eighteen different emergency travel and medical evacuation services for members who sustain serious illnesses or injuries during travel in certain geographic areas.”
    • The FTC asserted a security researcher discovered SkyMed’s “database, which could be located and accessed by anyone on the internet, contained approximately 130,000 membership records with consumers’ personal information stored in plain text, including information populated in certain fields for names, dates of birth, gender, home addresses, email addresses, phone numbers, membership information and account numbers, and health information.”
    • The FTC noted the company told affected customers that it had investigated and “[t]here was no medical or payment-related information visible and no indication that the information has been misused.” This turned out to be completely false, as the company’s “investigation did not determine that consumers’ health information was neither stored on the cloud database, nor improperly accessed by an unauthorized third party.”
    • The FTC summarized the terms of the consent order and SkyMed’s obligations:
      • Under the proposed settlement, SkyMed is prohibited from misrepresenting how it secures personal data, the circumstances of and response to a data breach, and whether the company has been endorsed by or participates in any government-sponsored privacy or security program. The company also will be required to send a notice to affected consumers detailing the data that was exposed by the data breach.
      • As part of the mandated information security program, the company must identify and document potential internal and external risks and design, implement, and maintain safeguards to protect personal information it collects from those risks. In addition, SkyMed must obtain biennial assessments of its information security program by a third party, which the FTC has authority to approve, to examine the effectiveness of SkyMed’s information security program, identify any gaps or weaknesses, and monitor efforts to address these problems. The settlement also requires a senior SkyMed executive to certify annually that the company is complying with the requirements of the settlement.
  • The European Commission (EC) has communicated its vision for a new cybersecurity strategy to the European Parliament and European Council “to ensure a global and open Internet with strong guardrails to address the risks to the security and fundamental rights and freedoms of people in Europe.” The EC spelled out its dramatic plan to remake how the bloc regulates, invests in, and structures policies around cybersecurity. The EC claimed “[a]s a key component of Shaping Europe’s Digital Future, the Recovery Plan for Europe  and the EU Security Union Strategy, the Strategy will bolster Europe’s collective resilience against cyber threats and help to ensure that all citizens and businesses can fully benefit from trustworthy and reliable services and digital tools.” If the European Union (EU) follows through, this strategy may have significant effects in the EU and around the world. The EC further explained:
    • Following the progress achieved under the previous strategies, it contains concrete proposals for deploying three principal instruments – regulatory, investment and policy instruments – to address three areas of EU action – (1) resilience, technological sovereignty and leadership, (2) building operational capacity to prevent, deter and respond, and (3) advancing a global and open cyberspace. The EU is committed to supporting this strategy through an unprecedented level of investment in the EU’s digital transition over the next seven years – potentially quadrupling previous levels – as part of new technological and industrial policies and the recovery agenda.
    • Cybersecurity must be integrated into all these digital investments, particularly key technologies like Artificial Intelligence (AI), encryption and quantum computing, using incentives, obligations and benchmarks. This can stimulate the growth of the European cybersecurity industry and provide the certainty needed to ease the phasing out of legacy systems. The European Defence Fund (EDF) will support European cyber defence solutions, as part of the European defence technological and industrial base. Cybersecurity is included in external financial instruments to support our partners, notably the Neighbourhood, Development and International Cooperation Instrument. Preventing the misuse of technologies, protecting critical infrastructure and ensuring the integrity of supply chains also enables the EU’s adherence to the UN norms, rules and principles of responsible state behavior.
    • With respect to actions that might be taken, the EC stated that “[t]he EU should ensure:
      • Adoption of revised NIS Directive;
      • Regulatory measures for an Internet of Secure Things
      • Through the CCCN investment in cybersecurity (notably through the Digital Europe Programme, Horizon Europe and recovery facility) to reach up to €4.5 billion in public and private investments over 2021-2027;
      • An EU network of AI-enabled Security Operation Centres and an ultra-secure communication infrastructure harnessing quantum technologies;
      • Widespread adoption of cybersecurity technologies through dedicated support to SMEs under the Digital Innovation Hubs;
      • Development of an EU DNS resolver service as a safe and open alternative for EU citizens, businesses and public administration to access the Internet; and
      • Completion of the implementation of the 5G Toolbox by the second quarter of 2021;
      • Complete the European cybersecurity crisis management framework and determine the process, milestones and timeline for establishing the Joint Cyber Unit;
      • Continue implementation of cybercrime agenda under the Security Union Strategy;
      • Encourage and facilitate the establishment of a Member States’ cyber intelligence working group residing within the EU INTCEN;
      • Advance the EU’s cyber deterrence posture to prevent, discourage, deter and respond to malicious cyber activities;
      • Review the Cyber Defence Policy Framework;
      • Facilitate the development of an EU “Military Vision and Strategy on Cyberspace as a Domain of Operations” for CSDP military missions and operations;
      • Support synergies between civil, defence and space industries; and
      • Reinforce cybersecurity of critical space infrastructures under the Space Programme.
      • Define a set of objectives in international standardisation processes, and promote these at international level;
      • Advance international security and stability in cyberspace, notably through the proposal by the EU and its Member States for a Programme of Action to Advance Responsible State Behaviour in Cyberspace (PoA) in the United Nations;
      • Offer practical guidance on the application of human rights and fundamental freedoms in cyberspace;
      • Better protect children against child sexual abuse and exploitation, as well as a Strategy on the Rights of the Child;
      • Strengthen and promote the Budapest Convention on Cybercrime, including through the work on the Second Additional Protocol to the Budapest Convention;
      • Expand EU cyber dialogue with third countries, regional and international organisations, including through an informal EU Cyber Diplomacy Network;
      • Reinforce the exchanges with the multi-stakeholder community, notably by regular and structured exchanges with the private sector, academia and civil society; and
      • Propose an EU External Cyber Capacity Building Agenda and an EU Cyber Capacity Building Board.
  • The U.S.-China Economic and Security Review Commission released its annual report on the People’s Republic of China (PRC) per its mandate “to monitor, investigate, and report to Congress on the national security implications of the bilateral trade and economic relationship between the United States and the People’s Republic of China.” The Commission argued:
    • Left unchecked, the PRC will continue building a new global order anathema to the interests and values that have underpinned unprecedented economic growth and stability among nations in the post-Cold War era. The past 20 years are littered with the Chinese Communist Party’s (CCP) broken promises. In China’s intended new order, there is little reason to believe CCP promises of “win-win” solutions, mutual respect, and peaceful coexistence. A clear understanding of the CCP’s adversarial national security and economic ambitions is essential as U.S. and allied leaders develop the policies and programs that will define the conditions of global freedom and shape our future.
    • The Commission made ten “Key Recommendations:”
      • Congress adopt the principle of reciprocity as foundational in all legislation bearing on U.S.-China relations.
      • Congress expand the authority of the Federal Trade Commission (FTC) to monitor and take foreign government subsidies into account in premerger notification processes.
      • Congress direct the U.S. Department of State to produce an annual report detailing China’s actions in the United Nations and its subordinate agencies that subvert the principles and purposes of the United Nations.
      • Congress hold hearings to consider the creation of an interagency executive Committee on Technical Standards that would be responsible for coordinating U.S. government policy and priorities on international standards.
      • Congress consider establishing a “Manhattan Project”-like effort to ensure that the American public has access to safe and secure supplies of critical lifesaving and life-sustaining drugs and medical equipment, and to ensure that these supplies are available from domestic sources or, where necessary, trusted allies.
      • Congress enact legislation establishing a China Economic Data Coordination Center (CEDCC) at the Bureau of Economic Analysis at the U.S. Department of Commerce.
      • Congress direct the Administration, when sanctioning an entity in the People’s Republic of China for actions contrary to the economic and national security interests of the United States or for violations of human rights, to also sanction the parent entity.
      • Congress consider enacting legislation to make the Director of the American Institute in Taiwan a presidential nomination subject to the advice and consent of the United States Senate.
      • Congress amend the Immigration and Nationality Act to clarify that association with a foreign government’s technology transfer programs may be considered grounds to deny a nonimmigrant visa if the foreign government in question is deemed a strategic competitor of the United States, or if the applicant has engaged in violations of U.S. laws relating to espionage, sabotage, or export controls.
      • Congress direct the Administration to identify and remove barriers to receiving United States visas for Hong Kong residents attempting to exit Hong Kong for fear of political persecution.
  • The Electronic Privacy Information Center, the Center for Digital Democracy, the Campaign for a Commercial-Free Childhood, the Parent Coalition for Student Privacy, and Consumer Federation of America asked the Federal Trade Commission (FTC) “to recommend specific changes to the proposed Consent Order to safeguard the privacy interests of Zoom users” in their comments on the FTC’s settlement with Zoom. In November, the FTC split along party lines to approve a settlement with Zoom to resolve allegations that the video messaging platform violated the FTC Act’s ban on unfair and deceptive practices in commerce. Zoom agreed to a consent order mandating a new information security program, third-party assessments, prompt reporting of covered incidents, and other requirements over a period of 20 years. The two Democratic Commissioners, Rohit Chopra and Rebecca Kelly Slaughter, voted against the settlement, arguing, in essence, that the FTC let Zoom off with a slap on the wrist by neither punishing its abundant wrongdoing nor deterring future offenders. Slaughter focused on the majority’s choice to ignore the privacy implications of Zoom’s misdeeds, especially by not including any requirements that Zoom improve its faulty privacy practices.
    • The groups “recommend that the FTC modify the proposed Consent Order and require Zoom to (1) implement a comprehensive privacy program; (2) obtain regular independent privacy assessments and make those assessments available to the public; (3) provide meaningful redress for victims of Zoom’s unfair and deceptive trade practices; and (4) ensure the adequate protection and limits on the collection of children’s data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Free-Photos from Pixabay

EU Regulators Settle Dispute Over Proper Punishment of Twitter For Breach

The EDPB uses its GDPR powers to manage a dispute between DPAs.

The European Data Protection Board (EDPB) concluded its first use of powers granted under the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679 of the European Parliament and of the Council) to resolve a dispute among EU regulators on how to apply the GDPR in punishing a violator. In this case, the EDPB had to referee how Twitter should be punished for a data breach arising from a bug affecting users of its Android app. Ireland’s Data Protection Commission (DPC) and concerned supervisory authorities (CSAs) in other member states disagreed about how Twitter should be fined for the GDPR violations, and so a previously unused article of the GDPR was triggered that put the EDPB in charge of resolving the dispute. The EDPB considered the objections raised by the other EU agencies and found that the DPC needed to recalculate its fine, which had been set at a maximum of $300,000 out of a possible $69.2 million. Thereafter, the DPC revised its decision and determined that “an administrative fine of €450,000 on Twitter” is “an effective, proportionate and dissuasive measure.”

The DPC issued a revised decision incorporating the EDPB’s ruling in a case that arose from a glitch that changed a person’s protected tweets to unprotected. Twitter users may protect their tweets, meaning only certain people, usually just followers, can see this content. However, a bug in Twitter’s Android app meant that a person’s choice to protect their tweets could be thwarted, as the DPC explained:

The bug that resulted in this data breach meant that, if a user operating an Android device changed the email address associated with that Twitter account, their tweets became unprotected and consequently were accessible to the wider public without the user’s knowledge.

The DPC said this breach occurred between September 2017 and January 2019, affecting 88,726 EU and European Economic Area (EEA) users, and on 8 January 2019, Twitter alerted the DPC, triggering an investigation. Twitter revealed:

On 26 December 2018, we received a bug report through our bug bounty program that if a Twitter user with a protected account, using Twitter for Android, changed their email address the bug would result in their account being unprotected.

Article 33(1) of the GDPR requires breaches to be reported to a DPA within 72 hours in most cases:

In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the notification to the supervisory authority is not made within 72 hours, it shall be accompanied by reasons for the delay.

However, Twitter conceded the following by way of explaining why it had not reported the breach within the 72-hour window:

The severity of the issue – and that it was reportable – was not appreciated until 3 January 2019 at which point Twitter’s incident response process was put into action.

Additionally, Article 33(5) would become relevant during the DPC investigation:

The controller shall document any personal data breaches, comprising the facts relating to the personal data breach, its effects and the remedial action taken. That documentation shall enable the supervisory authority to verify compliance with this Article.

Consequently, Twitter, as the controller, had a responsibility to document all the relevant facts about the data breach and to report the breach to the supervisory authority within 72 hours of becoming aware of it, unless the breach was unlikely to result in a risk to the rights and freedoms of natural persons.
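As a rough illustration of how the 72-hour clock works in practice, the minimal sketch below simply compares the time a controller became aware of a breach with the time the supervisory authority was notified. The function and variable names are hypothetical, and the dates are those described in this case (severity appreciated on 3 January 2019, notification to the DPC on 8 January 2019); this is not drawn from any regulator’s tooling.

```python
from datetime import datetime, timedelta

# Article 33(1) GDPR: notify the competent supervisory authority without undue
# delay and, where feasible, not later than 72 hours after becoming aware of a
# personal data breach (unless the breach is unlikely to result in a risk).
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_was_timely(aware_at: datetime, notified_at: datetime) -> bool:
    """Return True if the supervisory authority was notified within 72 hours."""
    return notified_at - aware_at <= NOTIFICATION_WINDOW

# Illustrative dates from this case: severity appreciated on 3 January 2019,
# DPC notified on 8 January 2019.
aware_at = datetime(2019, 1, 3)
notified_at = datetime(2019, 1, 8)

delay = notified_at - aware_at
print(f"Delay: {delay}; timely: {notification_was_timely(aware_at, notified_at)}")
# Output: Delay: 5 days, 0:00:00; timely: False
```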

Shortly thereafter, the DPC named itself the lead supervisory authority (LSA), investigated, and reached its proposed decision in late April 2020, sharing the draft with the other concerned supervisory authorities the following month. And this is where the need for the EDPB to step in began.

Irish Data Protection Commissioner Helen Dixon explained the scope of the subsequent investigation:

  1. Whether Twitter International Company (TIC) complied with its obligations, in accordance with Article 33(1) GDPR, to notify the Commission of the Breach without undue delay and, where feasible, not later than 72 hours after having become aware of it; and
  2. Whether TIC complied with its obligation under Article 33(5) to document the Breach.

Dixon found that TIC did not comply with Article 33(1) and found unpersuasive TIC’s main claim that, because Twitter, Inc., its processor under EU law, did not alert TIC in a timely fashion, TIC did not need to meet the 72-hour window. Moreover, Dixon found TIC did not meet its Article 33(5) obligation to document the breach in a way that would allow the DPC to verify its compliance with Article 33. However, the size of the fine became the issue that necessitated the EDPB stepping in because the Austrian Supervisory Authority (Österreichische Datenschutzbehörde), the German Supervisory Authority (Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit), and the Italian Supervisory Authority (Garante per la protezione dei dati personali) made “relevant and reasoned” objections.

Per the GDPR, the EDPB intervened. Article 60 of the GDPR provides that if a CSA “expresses a relevant and reasoned objection to the draft decision [of the LSA], the lead supervisory authority shall, if it does not follow the relevant and reasoned objection or is of the opinion that the objection is not relevant or reasoned, submit the matter to the consistency mechanism.” Article 65 also provides that where “a supervisory authority concerned has raised a relevant and reasoned objection to a draft decision of the lead authority or the lead authority has rejected such an objection as being not relevant or reasoned,” the EDPB must step in and work towards a final, binding decision. This process was installed so that enforcement of the GDPR would be uniform throughout the EU and to forestall the possibility that one DPA or a small group of DPAs would construe the data protection regime in ways contrary to its intention. As it is, there have already been allegations that some DPAs have been ineffective or lenient towards alleged offenders.

In its mid-November statement, the EDPB said it “adopted by a 2/3 majority of its members its first dispute resolution decision on the basis of Art. 65 GDPR.” The EDPB stated:

The Irish supervisory authority (SA) issued the draft decision following an own-volition inquiry and investigations into Twitter International Company, after the company notified the Irish SA of a personal data breach on 8 January 2019. In May 2020, the Irish SA shared its draft decision with the concerned supervisory authorities (CSAs) in accordance with Art. 60 (3) GDPR. The CSAs then had four weeks to submit their relevant and reasoned objections (RROs.) Among others, the CSAs issued RROs on the infringements of the GDPR identified by the lead supervisory authority (LSA), the role of Twitter International Company as the (sole) data controller, and the quantification of the proposed fine. 

It appears from the EDPB’s statement that other DPAs/SAs had objected to the size of the fine (which, for these infringements, can be as high as €10 million or 2% of annual worldwide revenue), to how Twitter violated the GDPR, and to Twitter’s culpability based on whether TIC was the sole controller of the personal data or whether other entities should also have been held responsible as controllers.

According to the DPC, the EDPB ultimately decided that

…the [DPC] is required to re-assess the elements it relies upon to calculate the amount of the fixed fine to be imposed on TIC, and to amend its Draft Decision by increasing the level of the fine in order to ensure it fulfils its purpose as a corrective measure and meets the requirements of effectiveness, dissuasiveness and proportionality established by Article 83(1) GDPR and taking into account the criteria of Article 83(2) GDPR.

Dixon went back and reasoned through the breach and TIC’s compliance. She stressed that the GDPR infringements were largely separate from the substance of the breach itself, which is why the administrative fine was low. Nonetheless, Dixon reexamined the evidence in light of the EDPB’s decision and concluded in relevant part:

  • I therefore consider that the nature of the obligations arising under Article 33(1) and Article 33(5) are such that, compliance is central to the overall functioning of the supervision and enforcement regime performed by supervisory authorities in relation to both the specific issue of personal data breaches but also the identification and assessment of wider issues of non-compliance by controllers. As such, non-compliance with these obligations has serious consequences in that it risks undermining the effective exercise by supervisory authorities of their functions under the GDPR. With regard to the nature of the specific infringements in these circumstances, it is clear, having regard to the foregoing, that in the circumstances of this case, the delayed notification under Article 33(1) inevitably delayed the Commission’s assessment of the Breach. With regard to Article 33(5), the deficiencies in the “documenting” of the Breach by TIC impacted on the Commission’s overall efficient assessment of the Breach, necessitating the raising of multiple queries concerning the facts and sequencing surrounding the notification of the Breach.
  • Accordingly, having regard to the potential for damage to data subjects caused by the delayed notification to the Commission (which I have set out above in the context of Article 83(2)(a)), the corollary of this is that any category of personal data could have been affected by the delayed notification. Whilst, as stated above, there was no direct evidence of damage, at the same time, it cannot be definitively said that there was no damage to data subjects or no affected categories of personal data.

Dixon also recalculated the fine, which she noted was capped at €10 million or 2% of annual worldwide revenue, after once again turning aside TIC’s argument that it is independent of Twitter, Inc. for purposes of determining a fine. Dixon determined the appropriate administrative fine would be about $500,000; Twitter’s worldwide revenue was $3.46 billion in 2019, meaning a maximum penalty of $69.2 million. Dixon explained:

Having regard to all of the foregoing, and, in particular, having had due regard to all of the factors which I am required to consider under Articles 83(2)(a) to (k), as applicable, and in the interests of effectiveness, proportionality and deterrence, and in light of the re-assessment of the elements I have implemented and documented above in accordance with the EDPB Decision, I have decided to impose an administrative fine of $500,000, which equates (in my estimation for this purpose) to €450,000. In deciding to impose a fine in this amount, I have had regard to the previous range of the fine, set out in the Draft Decision (of $150,000 – $300,000), and to the binding direction in the EDPB Decision, at paragraph 207 thereof, that the level of the fine should be increased “..in order to ensure it fulfils its purpose as a corrective measure and meets the requirements of effectiveness, dissuasiveness and proportionality established by Article 83(1) GDPR and taking into account the criteria of Article 83(2) GDPR.”
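To put those figures in perspective, here is a minimal sketch of the arithmetic using the numbers cited above (Twitter’s reported 2019 worldwide revenue of $3.46 billion, the 2% of worldwide revenue cap, and the roughly $500,000/€450,000 fine); the computed percentages are approximations for illustration, not figures taken from the DPC’s decision.

```python
# Illustrative arithmetic only, based on the figures reported above.
worldwide_revenue_usd = 3.46e9   # Twitter's 2019 worldwide revenue
cap_rate = 0.02                  # upper bound: 2% of annual worldwide revenue
fine_usd = 500_000               # final fine, roughly equal to €450,000

max_fine_usd = worldwide_revenue_usd * cap_rate
print(f"Maximum possible fine: ${max_fine_usd:,.0f}")                         # $69,200,000
print(f"Fine as a share of the cap: {fine_usd / max_fine_usd:.2%}")           # 0.72%
print(f"Fine as a share of revenue: {fine_usd / worldwide_revenue_usd:.4%}")  # 0.0145%
```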

In its Article 65 decision, the EDPB judged the various objections to the DPC’s proposed decision against Article 4(24) of the GDPR:

‘relevant and reasoned objection’ means an objection to a draft decision as to whether there is an infringement of this Regulation, or whether envisaged action in relation to the controller or processor complies with this Regulation, which clearly demonstrates the significance of the risks posed by the draft decision as regards the fundamental rights and freedoms of data subjects and, where applicable, the free flow of personal data within the Union;

The EDPB ultimately decided “the fine proposed in the Draft Decision is too low and therefore does not fulfil its purpose as a corrective measure, in particular it does not meet the requirements of Article 83(1) GDPR of being effective, dissuasive and proportionate.” The EDPB directed the DPC “to re-assess the elements it relies upon to calculate the amount of the fixed fine to be imposed on TIC so as to ensure it is appropriate to the facts of the case.” However, the EDPB turned aside a number of other objections raised by EU DPAs as failing to meet the standard of review in Article 4(24):

  • the competence of the LSA;
  • the qualification of the roles of TIC and Twitter, Inc., respectively;
  • the infringements of the GDPR identified by the LSA;
  • the existence of possible additional (or alternative) infringements of the GDPR;
  • the lack of a reprimand.

At the same time, the EDPB stressed:

Regarding the objections deemed not to meet the requirements stipulated by Art 4(24) GDPR, the EDPB does not take any position on the merit of any substantial issues raised by these objections. The EDPB reiterates that its current decision is without any prejudice to any assessments the EDPB may be called upon to make in other cases, including with the same parties, taking into account the contents of the relevant draft decision and the objections raised by the CSAs.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by papagnoc from Pixabay