Chopra Named CFPB Head

The CFPB will undoubtedly be a more muscular enforcer against financial services entities under the FTC Commissioner nominated to head the agency, including with respect to privacy, data security, and cybersecurity.

Federal Trade Commission (FTC) Commissioner Rohit Chopra has been tapped by President-elect Joe Biden to lead the Consumer Financial Protection Bureau (CFPB), the agency at which he previously oversaw the student loan market. Chopra’s nomination to be the next CFPB Director must be confirmed by the Senate. The CFPB possesses largely unused powers to police the cybersecurity, data security, and privacy practices of broad swaths of the United States (U.S.) economy, and given Chopra’s aggressive advocacy for a more active and more muscular FTC, it seems fair to assume he will take the same approach at the CFPB, awakening an entity that has been largely dormant under the Trump Administration except to the extent it employed a “light regulatory touch.” Of course, Chopra’s expected departure from the FTC likely means Biden will be able to name two FTC nominees in the near future, and that he will name Commissioner Rebecca Kelly Slaughter as the next chair, as she would be the only currently confirmed Democratic member of the FTC. Whether this designation will be on an acting or permanent basis remains to be seen.

In making the announcement, Biden’s transition team highlighted Chopra’s push “for aggressive remedies against lawbreaking companies, especially repeat offenders” and work “to increase scrutiny of dominant technology firms that pose risks to privacy, national security, and fair competition.” The press release added:

Chopra previously served as Assistant Director of the Consumer Financial Protection Bureau, where he led the agency’s efforts on student loans. In 2011, the Secretary of the Treasury appointed him to serve as the CFPB’s Student Loan Ombudsman, a new position established in the financial reform law. He also served as a Special Advisor at the U.S. Department of Education.

In these roles, Chopra led efforts to spur competition in the student loan financing market, develop new tools for students and student loan borrowers to make smarter decisions, and secure hundreds of millions of dollars in refunds for borrowers victimized by unlawful conduct by loan servicers, debt collectors, and for-profit college chains.

Chopra used his position as an FTC Commissioner to urge the Republican majority to use the agency’s powers more forcefully in combating privacy, data security, and antitrust abuses. For example, he voted against the FTC’s $5 billion settlement with Facebook and dissented, listing his reasons for breaking with the three Republican Commissioners:

  • Facebook’s violations were a direct result of the company’s behavioral advertising business model. The proposed settlement does little to change the business model or practices that led to the recidivism.
  • The $5 billion penalty is less than Facebook’s exposure from its illegal conduct, given its financial gains.
  • The proposed settlement lets Facebook off the hook for unspecified violations.
  • The grant of immunity for Facebook’s officers and directors is a giveaway.
  • The case against Facebook is about more than just privacy – it is also about the power to control and manipulate.

More recently, in June 2020, Chopra issued a statement on a pair of reports required by Congress that articulates his view that the FTC “must do more to use our existing authority and resources more effectively:”

1. Inventory and use the rulemaking authorities that Congress has already authorized.

Contrary to what many believe, the FTC has several relevant rulemaking authorities when it comes to data protection, but simply chooses not to use them. Rules do not need to create any new requirements for market participants. In fact, they can simply codify existing legal precedents and enforcement policy to give even more clarity on what the law requires. In addition, when rules are in place, it is much easier for the agency to obtain relief for those who are harmed and seek penalties to deter other bad actors. This can be far more efficient than chasing after the same problems year after year through no-money settlements.

2. Ensure that large firms face the same level of scrutiny we apply to smaller businesses.

To meaningfully deter data protection abuses and other wrongful conduct, the FTC must enforce the law equally. While we have taken a hard line against smaller violators in the data protection sphere, charging individual decisionmakers and wiping out their earnings, I am very concerned that the FTC uses a different standard for larger firms, like in the recent Facebook and YouTube matters. This is not only unfair to small firms, but also sends the unfortunate message that the largest corporations can avoid meaningful accountability for abuse and misuse of data.

3. Increase cooperation with state attorneys general and other regulators.

State attorneys general are the country’s front-line watchdogs when it comes to consumer protection, and many states have enacted privacy and data protection laws backed by strong remedial tools, including civil penalties. Partnering more frequently with state enforcers could significantly enhance the Commission’s effectiveness and make better use of taxpayer resources.

4. Hold third-party watchdogs accountable and guard against conflicts of interest.

The FTC typically orders lawbreaking companies to hire a third-party assessor to review privacy and security practices going forward. However, the Commission should not place too much faith in the efficacy of these third parties.

5. Reallocate resources.

While the Commission’s report has rightly noted to Congress that the number of employees working on data protection is inadequate, the Commissioners can vote to reallocate resources from other functions to increase our focus on data protection.

6. Investigate firms comprehensively across the FTC’s mission.

The FTC should use its authority to deter unfair and deceptive conduct in conjunction with our authority to deter unfair methods of competition. However, in the digital economy, the data that companies compete to obtain and utilize is also at the center of significant privacy and data security infractions.

7. Conduct more industry-wide studies under Section 6(b) of the FTC Act.

Surveillance-based advertising is a major driver of data-related abuses, but the Commission has not yet used its authority to compel information from major industry players to study these practices. The Commission should vote to issue orders to study how technology platforms engage in surveillance-based advertising.

Without doubt, Chopra will seek to read and exercise the CFPB’s powers as broadly as possible. For example, in a late October 2020 draft law review article, he and attorney advisor Samuel Levine argued the FTC could use a dormant power to fill the gap in its enforcement authority left by the cases before the Supreme Court of the United States regarding the FTC’s injunctive powers under Section 13(b) of the FTC Act. They asserted:

  • [T]he agency should resurrect one of the key authorities abandoned in the 1980s: Section 5(m)(1)(B) of the FTC Act, the Penalty Offense Authority. The Penalty Offense Authority is a unique tool in commercial regulation. Typically, first-time offenses involving unfair or deceptive practices do not lead to civil penalties. However, if the Commission formally condemns these practices in a cease-and-desist order, they can become what we call “Penalty Offenses.” Other parties that commit these offenses with knowledge that they have been condemned by the Commission face financial penalties that can add up to a multiple of their illegal profits, rather than a fraction.
  • Using this authority, the Commission can substantially increase deterrence and reduce litigation risk by noticing whole industries of Penalty Offenses, exposing violators to significant civil penalties, while helping to ensure fairness for honest firms. This would dramatically improve the FTC’s effectiveness relative to our current approach, which relies almost entirely on Section 13(b) and no-money cease-and-desist orders, even in cases of blatant lawbreaking.

Should the FTC heed Chopra and Levine’s suggestion, the agency could threaten fines in the first instance of Section 5 violations for specific illegal practices the FTC has put regulated entities on notice about.

The CFPB’s organic statute is patterned on the FTC Act, particularly its bar on unfair or deceptive acts or practices (UDAP). However, the “Dodd-Frank Wall Street Reform and Consumer Protection Act” (P.L. 111-203) that created the CFPB provided the agency “may take any action authorized under subtitle E to prevent a covered person or service provider from committing or engaging in an unfair, deceptive, or abusive act or practice (UDAAP) under Federal law in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.” While the CFPB may be limited in its jurisdiction, it has a more expansive regulatory remit that Chopra will almost certainly push to its maximum. Consequently, privacy, cybersecurity, and data security practices in the financial services sector that the CFPB has heretofore allowed could, in his view, qualify as unfair, deceptive, or abusive practices subject to enforcement action. And while the current CFPB issued a 2020 policy statement regarding how it thinks the agency should use its authority to punish “abusive” practices, Chopra’s team will likely withdraw and rewrite this document.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (12 January 2021)

Further Reading

  • “Biden’s NSC to focus on global health, climate, cyber and human rights, as well as China and Russia” By Karen DeYoung — The Washington Post. Like almost every incoming White House, the Biden team has announced a restructuring of the National Security Council (NSC) to better effectuate the President-elect’s policy priorities. Not surprisingly, the volume on cybersecurity policy will be turned up. Another notable change is plans to take “cross-cutting” approaches to issues that will likely meld foreign and domestic and national security and civil issues, meaning there could be a new look at offensive cyber operations, for example. It is possible President Biden decides to put the genie back in the bottle, so to speak, by re-imposing an interagency decision-making process as opposed to the Trump Administration’s approach of delegating discretion to the National Security Agency/Cyber Command head. Also, the NSC will focus on emerging technology, a likely response to the technology arms race the United States finds itself in against the People’s Republic of China.
  • “Exclusive: Pandemic relief aid went to media that promoted COVID misinformation” By Caitlin Dickson — yahoo! news. The consulting firm Alethea Group and the nonprofit Global Disinformation Index are claiming the COVID stimulus Paycheck Protection Program (PPP) provided loans and assistance to five firms that “were publishing false or misleading information about the pandemic, thus profiting off the infodemic” according to an Alethea Group vice president. This report follows an NBC News article claiming that 14 white supremacist and racist organizations have also received PPP loans. The Alethea Group and Global Disinformation Index named five entities who took PPP funds and kept spreading pandemic misinformation: Epoch Media Group, Newsmax Media, The Federalist, Liftable Media, and Prager University.
  • “Facebook shuts Uganda accounts ahead of vote” — France24. The social media company shuttered a number of Facebook and Instagram accounts related to government officials in Uganda ahead of an election on account of “Coordinated Inauthentic Behaviour” (CIB). This follows the platform shutting down accounts related to the French Army and Russia seeking to influence events in Africa. These and other actions may indicate the platform is starting to pay attention to the non-western world; at least one former employee has argued the platform was negligent at best and reckless at worst in not properly resourcing efforts to police CIB throughout the Third World.
  • “China tried to punish European states for Huawei bans by adding eleventh-hour rule to EU investment deal” By Finbarr Bermingham — South China Morning Post. At nearly the end of talks on a People’s Republic of China (PRC)-European Union (EU) trade deal, PRC negotiators tried slipping in language that would have barred entry to the PRC’s cloud computing market to any country or company from a country that restricts Huawei’s services and products. This is alternately being seen as either standard Chinese negotiating tactics or an attempt to avenge the thwarting of the crown jewel in its telecommunications ambitions.
  • “Chinese regulators to push tech giants to share consumer credit data – sources” By Julie Zhu — Reuters. Ostensibly in a move to better manage the risks of too much unsafe lending, tech giants in the People’s Republic of China (PRC) will soon need to share data on consumer loans. It seems inevitable that such data will be used by Beijing to further crack down on undesirable people and elements within the PRC.
  • “The mafia turns social media influencer to reinforce its brand” By Miles Johnson — The Financial Times. Even Italy’s feared ’Ndrangheta is creating and curating a social media presence.

Other Developments

  • President Donald Trump signed an executive order (EO) that bans eight applications from the People’s Republic of China on much the same grounds as the EOs prohibiting TikTok and WeChat. If this EO is not rescinded by the Biden Administration, federal courts may block its implementation as has happened with the TikTok and WeChat EOs to date. Notably, courts have found that the Trump Administration exceeded its authority under the International Emergency Economic Powers Act (IEEPA), which may also be an issue in the proposed prohibition on Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office. Trump found:
    • that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the pace and pervasiveness of the spread in the United States of certain connected mobile and desktop applications and other software developed or controlled by persons in the People’s Republic of China, to include Hong Kong and Macau (China), continue to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by these Chinese connected software applications.
    • Trump directed that within 45 days of issuance of the EO, there shall be a prohibition on “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States, with persons that develop or control the following Chinese connected software applications, or with their subsidiaries, as those transactions and persons are identified by the Secretary of Commerce (Secretary) under subsection (e) of this section: Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office.”
  • The Government Accountability Office (GAO) issued its first statutorily required annual assessment of how well the United States Department of Defense (DOD) is managing its major information technology (IT) procurements. The DOD spent more than $36 billion of the $90 billion the federal government was provided for IT in FY 2020. The GAO was tasked with assessing how well the DOD did in using iterative development, managing costs and schedules, and implementing cybersecurity measures. The GAO found progress in the first two realms but a continued lag in deploying long recommended best practices to ensure the security of the IT the DOD buys or builds. Nonetheless, the GAO focused on 15 major IT acquisitions that qualify as administrative (i.e., “business”) and communications and information security (i.e., “non-business”). While there were no explicit recommendations made, the GAO found:
    • Ten of the 15 selected major IT programs exceeded their planned schedules, with delays ranging from 1 month for the Marine Corps’ CAC2S Inc 1 to 5 years for the Air Force’s Defense Enterprise Accounting and Management System-Increment 1.
    • …eight of the 10 selected major IT programs that had tested their then-current technical performance targets reported having met all of their targets…. As of December 2019, four programs had not yet conducted testing activities—Army’s ACWS, Air Force’s AFIPPS Inc 1, Air Force’s MROi, and Navy ePS. Testing data for one program, Air Force’s ISPAN Inc 4, were classified.
    • …officials from the 15 selected major IT programs we reviewed reported using software development approaches that may help to limit risks to cost and schedule outcomes. For example, major business IT programs reported using COTS software. In addition, most programs reported using an iterative software development approach and using a minimum deployable product. With respect to cybersecurity practices, all the programs reported developing cybersecurity strategies, but programs reported mixed experiences with respect to conducting cybersecurity testing. Most programs reported using operational cybersecurity testing, but less than half reported conducting developmental cybersecurity testing. In addition, programs that reported conducting cybersecurity vulnerability assessments experienced fewer increases in planned program costs and fewer schedule delays. Programs also reported a variety of challenges associated with their software development and cybersecurity staff.
    • 14 of the 15 programs reported using an iterative software development approach which, according to leading practices, may help reduce cost growth and deliver better results to the customer. However, programs also reported using an older approach to software development, known as waterfall, which could introduce risk for program cost growth because of its linear and sequential phases of development that may be implemented over a longer period of time. Specifically, two programs reported using a waterfall approach in conjunction with an iterative approach, while one was solely using a waterfall approach.
    • With respect to cybersecurity, programs reported mixed implementation of specific practices, contributing to program risks that might impact cost and schedule outcomes. For example, all 15 programs reported developing cybersecurity strategies, which are intended to help ensure that programs are planning for and documenting cybersecurity risk management efforts.
    • In contrast, only eight of the 15 programs reported conducting cybersecurity vulnerability assessments—systematic examinations of an information system or product intended to, among other things, determine the adequacy of security measures and identify security deficiencies. These eight programs experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report using cybersecurity vulnerability assessments.
  • The United States (U.S.) Department of Energy gave notice of a “Prohibition Order prohibiting the acquisition, importation, transfer, or installation of specified bulk-power system (BPS) electric equipment that directly serves Critical Defense Facilities (CDFs), pursuant to Executive Order 13920.” (See here for analysis of the executive order.) The Department explained:
    • Executive Order No. 13920 of May 1, 2020, Securing the United States Bulk-Power System (85 FR 26595 (May 4, 2020)) (E.O. 13920) declares that threats by foreign adversaries to the security of the BPS constitute a national emergency. A current list of such adversaries is provided in a Request for Information (RFI), issued by the Department of Energy (Department or DOE) on July 8, 2020 seeking public input to aid in its implementation of E.O. 13920. The Department has reason to believe, as detailed below, that the government of the People’s Republic of China (PRC or China), one of the listed adversaries, is equipped and actively planning to undermine the BPS. The Department has thus determined that certain BPS electric equipment or programmable components subject to China’s ownership, control, or influence, constitute undue risk to the security of the BPS and to U.S. national security. The purpose of this Order is to prohibit the acquisition, importation, transfer, or subsequent installation of such BPS electric equipment or programmable components in certain sections of the BPS.
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) added the People’s Republic of China’s (PRC) Semiconductor Manufacturing International Corporation (SMIC) to its Entity List in a move intended to starve the company of key U.S. technology needed to manufacture high end semiconductors. Therefore, any U.S. entity wishing to do business with SMIC will need a license which the Trump Administration may not be likely to grant. The Department of Commerce explained in its press release:
    • The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring U.S. exporters to apply for a license to sell to the company.  Items uniquely required to produce semiconductors at advanced technology nodes—10 nanometers or below—will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military-civil fusion efforts.
    • BIS also added more than sixty other entities to the Entity List for actions deemed contrary to the national security or foreign policy interest of the United States.  These include entities in China that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets.
    • As explained in the Federal Register notice:
      • SMIC is added to the Entity List as a result of China’s military-civil fusion (MCF) doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex. The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring exporters, reexporters, and in-country transferors of such technology to apply for a license to sell to the company. Items uniquely required to produce semiconductors at advanced technology nodes 10 nanometers or below will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military modernization efforts. This rule adds SMIC and the following ten entities related to SMIC: Semiconductor Manufacturing International (Beijing) Corporation; Semiconductor Manufacturing International (Tianjin) Corporation; Semiconductor Manufacturing International (Shenzhen) Corporation; SMIC Semiconductor Manufacturing (Shanghai) Co., Ltd.; SMIC Holdings Limited; Semiconductor Manufacturing South China Corporation; SMIC Northern Integrated Circuit Manufacturing (Beijing) Co., Ltd.; SMIC Hong Kong International Company Limited; SJ Semiconductor; and Ningbo Semiconductor International Corporation (NSI).
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) amended its Export Administration Regulations “by adding a new ‘Military End User’ (MEU) List, as well as the first tranche of 103 entities, which includes 58 Chinese and 45 Russian companies” per its press release. The Department asserted:
    • The U.S. Government has determined that these companies are ‘military end users’ for purposes of the ‘military end user’ control in the EAR that applies to specified items for exports, reexports, or transfers (in-country) to the China, Russia, and Venezuela when such items are destined for a prohibited ‘military end user.’
  • The Australian Competition and Consumer Commission (ACCC) rolled out another piece of the Consumer Data Right (CDR) scheme under the Competition and Consumer Act 2010, specifically accreditation guidelines “to provide information and guidance to assist applicants with lodging a valid application to become an accredited person” to whom Australians may direct data holders to share their data. The ACCC explained:
    • The CDR aims to give consumers more access to and control over their personal data.
    • Being able to easily and efficiently share data will improve consumers’ ability to compare and switch between products and services and encourage competition between service providers, leading to more innovative products and services for consumers and the potential for lower prices.
    • Banking is the first sector to be brought into the CDR.
    • Accredited persons may receive a CDR consumer’s data from a data holder at the request and consent of the consumer. Any person, in Australia or overseas, who wishes to receive CDR data to provide products or services to consumers under the CDR regime, must be accredited.
  • Australia’s government has released its “Data Availability and Transparency Bill 2020” that “establishes a new data sharing scheme for federal government data, underpinned by strong safeguards to mitigate risks and simplified processes to make it easier to manage data sharing requests” according to the summary provided in Parliament by the government’s point person. In the accompanying “Explanatory Memorandum,” the following summary was provided:
    • The Bill establishes a new data sharing scheme which will serve as a pathway and regulatory framework for sharing public sector data. ‘Sharing’ involves providing controlled access to data, as distinct from open release to the public.
    • To oversee the scheme and support best practice, the Bill creates a new independent regulator, the National Data Commissioner (the Commissioner). The Commissioner’s role is modelled on other regulators such as the Australian Information Commissioner, with whom the Commissioner will cooperate.
    • The data sharing scheme comprises the Bill and disallowable legislative instruments (regulations, Minister-made rules, and any data codes issued by the Commissioner). The Commissioner may also issue non-legislative guidelines that participating entities must have regard to, and may release other guidance as necessary.
    • Participants in the scheme are known as data scheme entities:
      • Data custodians are Commonwealth bodies that control public sector data, and have the right to deal with that data.
      • Accredited users are entities accredited by the Commissioner to access public sector data. To become accredited, entities must satisfy the security, privacy, infrastructure and governance requirements set out in the accreditation framework.
      • Accredited data service providers (ADSPs) are entities accredited by the Commissioner to perform data services such as data integration. Government agencies and users will be able to draw upon ADSPs’ expertise to help them to share and use data safely.
    • The Bill does not compel sharing. Data custodians are responsible for assessing each sharing request, and deciding whether to share their data if satisfied the risks can be managed.
    • The data sharing scheme contains robust safeguards to ensure sharing occurs in a consistent and transparent manner, in accordance with community expectations. The Bill authorises data custodians to share public sector data with accredited users, directly or through an ADSP, where:
      • Sharing is for a permitted purpose – government service delivery, informing government policy and programs, or research and development;
      • The data sharing principles have been applied to manage the risks of sharing; and
      • The terms of the arrangement are recorded in a data sharing agreement.
    • Where the above requirements are met, the Bill provides limited statutory authority to share public sector data, despite other Commonwealth, State and Territory laws that prevent sharing. This override of non-disclosure laws is ‘limited’ because it occurs only when the Bill’s requirements are met, and only to the extent necessary to facilitate sharing.
  • The United Kingdom’s Competition and Markets Authority (CMA) is asking interested parties to provide input on the proposed acquisition of a British semiconductor company by a United States (U.S.) company before it launches a formal investigation later this year. However, the CMA is limited to competition considerations, and any national security aspects of the proposed deal would need to be investigated by Prime Minister Boris Johnson’s government. The CMA stated:
    • US-based chip designer and producer NVIDIA Corporation (NVIDIA) plans to purchase the Intellectual Property Group business of UK-based Arm Limited (Arm) in a deal worth $40 billion. Arm develops and licenses intellectual property (IP) and software tools for chip designs. The products and services supplied by the companies support a wide range of applications used by businesses and consumers across the UK, including desktop computers and mobile devices, game consoles and vehicle computer systems.
    • CMA added:
      • The CMA will look at the deal’s possible effect on competition in the UK. The CMA is likely to consider whether, following the takeover, Arm has an incentive to withdraw, raise prices or reduce the quality of its IP licensing services to NVIDIA’s rivals.
  • The Israeli firm NSO Group has been accused by an entity associated with a British university of using real-time cell phone data to sell its COVID-19 contact tracing app, Fleming, in ways that may have broken the laws of a handful of nations. Forensic Architecture, a research agency based at Goldsmiths, University of London, argued:
    • In March 2020, with the rise of COVID-19, Israeli cyber-weapons manufacturer NSO Group launched a contact-tracing technology named ‘Fleming’. Two months later, a database belonging to NSO’s Fleming program was found unprotected online. It contained more than five hundred thousand datapoints for more than thirty thousand distinct mobile phones. NSO Group denied there was a security breach. Forensic Architecture received and analysed a sample of the exposed database, which suggested that the data was based on ‘real’ personal data belonging to unsuspecting civilians, putting their private information in risk
    • Forensic Architecture added:
      • Leaving a database with genuine location data unprotected is a serious violation of the applicable data protection laws. That a surveillance company with access to personal data could have overseen this breach is all the more concerning.
      • This could constitute a violation of the General Data Protection Regulation (GDPR) based on where the database was discovered as well as the laws of the nations where NSO Group allegedly collected personal data
    • The NSO Group denied the claims and was quoted by TechCrunch:
      • “We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
      • “As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Judith Scharnowski from Pixabay

Further Reading, Other Developments, and Coming Events (29 October)

Further Reading

  • “Cyberattacks hit Louisiana government offices as worries rise about election hacking” By Eric Geller — Politico. The Louisiana National Guard located and addressed a remote access trojan, a common precursor to ransomware attacks, in some of the state’s systems. This may or may not have been the beginning stages of an election day attack, and other states have made similar discoveries.
  • “Kicked off Weibo? Here’s what happens next.” By Shen Lu — Rest of World. Beijing is increasingly cracking down on dissent on Weibo, the People’s Republic of China’s (PRC) version of Twitter. People get banned for posting content critical of the PRC government or supportive of Hong Kong protesters. Some are allowed back and are usually banned again. Some buy burner accounts, which inevitably get banned as well.
  • “Inside the campaign to ‘pizzagate’ Hunter Biden” By Ben Collins and Brandy Zadrozny — NBC News. The sordid tale of how allies or advocates of the Trump Campaign have tried to propagate rumors of illegal acts committed by Hunter Biden in an attempt to smear former Vice President Joe Biden as was done to former Secretary of State Hillary Clinton in 2016.
  • “Russians Who Pose Election Threat Have Hacked Nuclear Plants and Power Grid” By Nicole Perlroth — The New York Times. Some of Russia’s best hackers have been prowling around state and local governments’ systems for unknown ends. These are the same hackers, named Dragonfly or Energetic Bear by researchers, who have penetrated a number of electric utilities and the power grid in the United States, including a nuclear plant. It is not clear what these hackers want to do, which worries U.S. officials and cybersecurity experts and researchers.
  • “Activists Turn Facial Recognition Tools Against the Police” By Kashmir Hill — The New York Times. In an interesting twist, protestors and civil liberties groups are adopting facial recognition technology to try to identify police officers who attack protestors or commit acts of violence and then refuse to identify themselves.

Other Developments

  • The United Kingdom’s Information Commissioner’s Office (ICO) has completed its investigation into the data brokering practices of Equifax, Transunion, and Experian and found widespread privacy and data protection violations. Equifax and Transunion were amenable to working with the ICO to correct abuses and shutter illegal products and businesses, but Experian was not. In the words of the ICO, Experian “did not accept that they were required to make the changes set out by the ICO, and as such were not prepared to issue privacy information directly to individuals nor cease the use of credit reference data for direct marketing purposes.” Consequently, Experian must effect specified changes within nine months or face “a fine of up to £20m or 4% of the organisation’s total annual worldwide turnover.” The ICO investigated using its powers under the British Data Protection Act 2018 and the General Data Protection Regulation (GDPR).
    • The ICO found widespread problems in the data brokering businesses of the three firms:
      • The investigation found how the three CRAs were trading, enriching and enhancing people’s personal data without their knowledge. This processing resulted in products which were used by commercial organisations, political parties or charities to find new customers, identify the people most likely to be able to afford goods and services, and build profiles about people.
      • The ICO found that significant ‘invisible’ processing took place, likely affecting millions of adults in the UK. It is ‘invisible’ because the individual is not aware that the organisation is collecting and using their personal data. This is against data protection law.
      • Although the CRAs varied widely in size and practice, the ICO found significant data protection failures at each company. As well as the failure to be transparent, the regulator found that personal data provided to each CRA, in order for them to provide their statutory credit referencing function, was being used in limited ways for marketing purposes. Some of the CRAs were also using profiling to generate new or previously unknown information about people, which is often privacy invasive.
      • Other thematic failings identified were:
        • Although the CRAs did provide some privacy information on their websites about their data broking activities, their privacy information did not clearly explain what they were doing with people’s data;
        • Separately, they were using certain lawful bases incorrectly for processing people’s data.
      • The ICO issued its report “Investigation into data protection compliance in the direct marketing data broking sector,” with these key findings:
        • Key finding 1: The privacy information of the CRAs did not clearly explain their processing with respect to their marketing services. CRAs have to revise and improve their privacy information. Those engaging in data broking activities must ensure that their privacy information is compliant with the GDPR.
        • Key finding 2: In the circumstances we assessed the CRAs were incorrectly relying on an exception from the requirement to directly provide privacy information to individuals (excluding where the data processed has come solely from the open electoral register or would be in conflict with the purpose of processing – such as suppression lists like the TPS). To comply with the GDPR, CRAs have to ensure that they provide appropriate privacy information directly to all the individuals for whom they hold personal data in their capacity as data brokers for direct marketing purposes. Those engaging in data broking activities must ensure individuals have the information required by Article 14.
        • Key finding 3: The CRAs were using personal data collected for credit referencing purposes for direct marketing purposes. The CRAs must not use this data for direct marketing purposes unless this has been transparently explained to individuals and they have consented to this use. Where the CRAs are currently using personal data obtained for credit referencing purposes for direct marketing, they must stop using it.
        • Key finding 4: The consents relied on by Equifax were not valid under the GDPR. To comply with the GDPR, CRAs must ensure that the consent is valid, if they intend to rely on consent obtained by a third party. Those engaging in data broking activities must ensure that any consents they use meet the standard of the GDPR.
        • Key finding 5: Legitimate interest assessments (LIAs) conducted by the CRAs in respect of their marketing services were not properly weighted. The CRAs must revise their LIAs to reconsider the balance of their own interests against the rights and freedoms of individuals in the context of their marketing services. Where an objective LIA does not favour the interests of the organisation, the processing of that data must stop until that processing can be made lawful. Those engaging in data broking activities must ensure that LIAs are conducted objectively taking into account all factors.
        • Key finding 6: In some cases Experian was obtaining data on the basis of consent and then processing it on the basis of legitimate interests. Switching from consent to legitimate interests in this situation is not appropriate. Where personal data is collected by a third party and shared for direct marketing purposes on the basis of consent, then the appropriate lawful basis for subsequent processing for these purposes will also be consent. Experian must therefore delete any data supplied to it on the basis of consent that it is processing on the basis of legitimate interests.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the U.S. Cyber Command Cyber National Mission Force (CNMF) issued a joint advisory on the “tactics, techniques, and procedures (TTPs) used by North Korean advanced persistent threat (APT) group Kimsuky—against worldwide targets—to gain intelligence on various topics of interest to the North Korean government.” CISA, FBI, and CNMF recommended that “individuals and organizations within this target profile increase their defenses and adopt a heightened state of awareness…[and] [p]articularly important mitigations include safeguards against spearphishing, use of multi-factor authentication, and user awareness training.” The agencies noted:
    • This advisory describes known Kimsuky TTPs, as found in open-source and intelligence reporting through July 2020. The target audience for this advisory is commercial sector businesses desiring to protect their networks from North Korean APT activity.
    • The agencies highlighted the key findings:
      • Kimsuky is most likely tasked by the North Korean regime with a global intelligence gathering mission.
      • Kimsuky employs common social engineering tactics, spearphishing, and watering hole attacks to exfiltrate desired information from victims.
      • Kimsuky is most likely to use spearphishing to gain initial access into victim hosts or networks.
      • Kimsuky conducts its intelligence collection activities against individuals and organizations in South Korea, Japan, and the United States.
      • Kimsuky focuses its intelligence collection activities on foreign policy and national security issues related to the Korean peninsula, nuclear policy, and sanctions.
      • Kimsuky specifically targets:
        • Individuals identified as experts in various fields,
        • Think tanks, and
        • South Korean government entities.
  • European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski made remarks at the European Union Agency for Cybersecurity’s (ENISA) Annual Privacy Forum and advocated for a European Union (EU) moratorium on the rollout of new technology like facial recognition and artificial intelligence (AI) until this “development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies.” He claimed the EU could maintain the rights of its people while taking the lead in cutting edge technologies. Wiewiórowski asserted:
    • Now we are entering a new phase of contactless tracking of individuals in public areas. Remote facial recognition technology has developed quickly; so much so that some authorities and private entities want to use it in many places. If this all becomes true, we could be tracked everywhere in the world.
    • I do not believe that such a development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies. The EDPS therefore, together with other authorities, supports a moratorium on the rollout of such technologies. The aim of this moratorium would be twofold. Firstly, an informed and democratic debate would take place. Secondly, the EU and Member States would put in place all the appropriate safeguards, including a comprehensive legal framework, to guarantee the proportionality of the respective technologies and systems in relation to their specific use.
    • As an example, any new regulatory framework for AI should, in my view:
      • apply both to EU Member States and to EU institutions, offices, bodies and agencies;
      • be designed to protect individuals, communities and society as a whole, from any negative impact;
      • propose a robust and nuanced risk classification scheme, ensuring that any significant potential harm posed by AI applications is matched with appropriate mitigating measures.
    • We must ensure that Europe’s leading role in AI, or any other technology in development, does not come at the cost of our fundamental rights. Europe must remain true to its values and provide the grounds for innovation. We will only get it right if we ensure that technology serves both individuals and society.
    • Faced with these developments, transparency is a starting point for proper debate and assessment. Transparency for citizens puts them in a position to understand what they are subject to, and to decide whether they want to accept the infringements of their rights.
  • The Office of the Privacy Commissioner of Canada (OPC) and “its international counterparts” laid out their thinking on “stronger privacy protections and greater accountability in the development and use of facial recognition technology and artificial intelligence (AI) systems” at the recent Global Privacy Assembly. The OPC summarized the two resolutions adopted at the assembly:
    • the resolution on facial recognition technology acknowledges that this technology can benefit security and public safety. However, it asserts that facial recognition can erode data protection, privacy and human rights because it is highly intrusive and enables widespread surveillance that can produce inaccurate results. The resolution also calls on data protection authorities to work together to develop principles and expectations that strengthen data protection and ensure privacy by design in the development of innovative uses of this technology.
    • a resolution on the development and use of AI systems that urges organizations developing or using them to ensure human accountability for AI systems and address adverse impacts on human rights. The resolution encourages governments to amend personal data protection laws to make clear legal obligations for accountability in the development and use of AI. It also calls on governments, public authorities and other stakeholders to work with data protection authorities to ensure legal compliance, accountability and ethics in the development and use of AI systems.
  • The Alliance for Securing Democracy (ASD) at the German Marshall Fund of the United States (GMFUS) issued a report, “A Future Internet for Democracies: Contesting China’s Push for Dominance in 5G, 6G, and the Internet of Everything” that “provides a roadmap for contesting China’s growing dominance in this critical information arena across infrastructure, application, and governance dimensions—one that doubles down on geostrategic interests and allied cooperation.” ASD stated “[a]n allied approach that is rooted firmly in shared values and resists an authoritarian divide-and-conquer strategy is vital for the success of democracies in commercial, military, and governance domains.” ASD asserted:
    • The United States and its democratic allies are engaged in a contest for the soul of the Future Internet. Conceived as a beacon of free expression with the power to tear down communication barriers across free and unfree societies alike, the Internet today faces significant challenges to its status as the world’s ultimate connector. In creating connectivity and space for democratic speech, it has also enabled new means of authoritarian control and the suppression of human rights through censorship and surveillance. As tensions between democracies and the People’s Republic of China (PRC) heat up over Internet technologies, the prospect of a dichotomous Internet comes more sharply into focus: a democratic Internet where information flows freely and an authoritarian Internet where it is tightly controlled—separated not by an Iron Curtain, but a Silicon one. The Future Internet is deeply enmeshed in the dawning information contest between autocracies and democracies. It is the base layer—the foundation—on which communication takes place and the entry point into narrative and societal influence. How the next generation of Internet technologies are created, defined, governed, and ultimately used will have an outsized impact on this information contest—and the larger geopolitical contest—between democracy and authoritarianism.
    • ASD found:
      • The Chinese Communist Party (CCP) has a history of creating infrastructure dependence and using it for geopolitical leverage. As such, China’s global market dominance in Future Internet infrastructure carries unacceptable risks for democracies.
      • The contest to shape 6G standards is already underway, with China leading the charge internationally. As the United States ponders how it ended up on the back foot on 5G, China is moving ahead with new proposals that would increase authoritarian control and undermine fundamental freedoms.
      • The battle over the Future Internet is playing out in the Global South. As more developed nations eschew Chinese network equipment, democracies’ response has largely ignored this global build-out of networks and applications in the proving ground of the developing world that threaten both technological competitiveness and universal rights.
      • China is exporting “technology to anticipate crime”—a dystopian future police state. “Minority report”-style pre-criminal arrests decimate the practice of the rule of law centered in the presumption of innocence.
      • Personal Data Exfiltration: CCP entities see “Alternative Data” as “New Oil” for AI-driven applications in the Internet-of-Everything. These applications provide new and expanded avenues for mass data collection, as much as they depend on this data to succeed–giving China the means and the motivation to vacuum up the world’s data.
      • Data in, propaganda out: Future Internet technology presents opportunities to influence the information environment, including the development of information applications that simultaneously perform big data collection. Chinese companies are building information platforms into application technologies, reimagining both the public square and private locales as tools for propaganda.
      • Already victims of intellectual property theft by China, the United States and its democratic partners are ill-prepared to secure sensitive information as the Future Internet ecosystem explodes access points. This insecurity will continue to undermine technological competitiveness and national security and compound these effects in new ways.
      • China outnumbers the United States nearly two-to-one on participation in and leadership of critical international Future Internet standards-setting efforts. Technocratic standards bodies are becoming unlikely loci of great power technical competition, as Beijing uses leadership posts to shape the narrative and set the course for the next generation of Internet technologies to support China’s own technological leadership, governance norms, and market access.
      • The world’s oldest UN agency is being leveraged as a propaganda mouthpiece for the CCP’s AI and Future Internet agenda, whitewashing human rights abuses under a banner of “AI for Good.” The upshot is an effort to shape the UN Sustainable Development agenda to put economic development with authoritarian technology–not individual liberty—at their center.
      • A symbiotic relationship has developed between China’s Belt and Road Initiative and UN agencies involved in Future Internet and digital development. In this way, China leverages the United Nations enterprise to capture market dominance in next generation technologies.
  • A Dutch think tank has put together the “(best) practices of Asian countries and the United States in the field of digital connectivity” in the hopes of realizing European Commission President Ursula von der Leyen’s goal of making the next ten years “Europe’s Digital Decade.” The Clingendael Institute explained that the report “covers a wide range of topics related to digital regulation, the e-economy, and telecommunications infrastructure.” The Clingendael Institute asserted:
    • Central to the debate and any policy decision on digital connectivity are the trade-offs concerning privacy, business interests and national security. While all regulations are a combination of these three, the United States (US) has taken a path that prioritises the interests of businesses. This is manifested, for example, in the strong focus on free data flows, both personal and non-personal, to strengthen companies’ competitive advantage in collecting and using data to develop themselves. China’s approach, by contrast, strongly focuses on state security, wherein Chinese businesses are supported and leveraged to pre-empt threats to the country and, more specifically, to the Chinese Communist Party. This is evident from its strict data localisation requirements to prevent any data from being stored outside its borders and a mandatory security assessment for cross-border transfers. The European Union represents a third way, emphasising individuals’ privacy and a human-centred approach that puts people first, and includes a strong focus on ethics, including in data-protection regulations. This Clingendael Report aims to increase awareness and debate about the trade-offs of individual, state and business interests in all subsets of digital connectivity. This is needed to reach a more sustainable EU approach that will outlast the present decade. After all, economic competitiveness is required to secure Europe and to further its principled approach to digital connectivity in the long term. The analysis presented here covers a wide range of topics within digital connectivity’s three subsets: regulation; business; and telecommunications infrastructure. Aiming to contribute to improved European policy-making, this report discusses (best) practices of existing and rising digital powers in Asia and the United States. In every domain, potential avenues for cooperation with those countries are explored as ways forward for the EU.
    • Findings show that the EU and its member states are slowly but steadily moving from being mainly a regulatory power to also claiming their space as a player in the digitalised world. Cloud computing initiative GAIA-X is a key example, constituting a proactive alternative to American and Chinese Cloud providers that is strongly focused on uniting small European initiatives to create a strong and sustainable Cloud infrastructure. Such initiatives, including also the more recent Next Generation Internet (NGI), not only help defend and push European digital norms and standards, but also assist the global competitiveness of European companies and business models by facilitating the availability of large data-sets as well as scaling up. Next to such ‘EU only’ initiatives, working closely together with like-minded partners will benefit the EU and its member states as they seek to finetune and implement their digital strategies. The United States and Asian partners, particularly Japan, South Korea, India and Singapore, are the focus of attention here.
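The joint CISA/FBI/CNMF advisory summarized above singles out safeguards against spearphishing as a particularly important mitigation. As an illustration only, one common red flag a defender might automate is a From: header whose display name embeds an address or domain that does not match the actual sending domain. The sketch below is hypothetical; the function name and heuristic are not drawn from the advisory.

```python
# Illustrative sketch only: flag a From: header whose display name
# embeds a domain that differs from the real sending address's domain,
# e.g. '"support@state.gov" <kim@evil.example>'. This heuristic is an
# assumption for demonstration, not a technique from the advisory.
import re
from email.utils import parseaddr


def looks_like_spoofed_sender(from_header: str) -> bool:
    """Return True if the display name contains a domain-like token
    that does not match the actual sender's domain."""
    display, address = parseaddr(from_header)
    real_domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    # Find any domain-like tokens (e.g. "state.gov") inside the display name
    embedded = re.findall(r"[\w-]+(?:\.[\w-]+)+", display.lower())
    return any(dom != real_domain for dom in embedded)
```

Real defenses layer many such signals (SPF/DKIM/DMARC results, lookalike-domain distance, attachment analysis) alongside the multi-factor authentication and user awareness training the advisory recommends.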

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including that of Nathan Simington to be a Member of the Federal Communications Commission.


Image by David Peterson from Pixabay

PRC Response To U.S. Clean Networks

The PRC responds to the U.S.’ Clean Networks program with a call for international, multilateral standards

In a speech given by the People’s Republic of China’s (PRC) Foreign Minister Wang Yi, the PRC proposed international, multilateral cooperation in addressing data security around the globe. In doing so, Wang took some obvious shots at recent policies announced by the United States (U.S.) and longer-term actions such as surveillance by the National Security Agency (NSA). The PRC floated a “Global Initiative on Data Security” that would, on its face, seem to argue against actions being undertaken by Beijing against the U.S. and some of its allies. For example, this initiative would bar the stealing of “important data,” yet the PRC stands accused of hacking Australia’s Parliament. Nonetheless, the PRC is likely seeking to position itself as more internationalist than the U.S., which under President Donald Trump has become more isolationist and unilateralist in its policies. The PRC is also calling for the rule of law, especially around “security issues,” most likely a reference to the ongoing trade/national security dispute between the two nations playing out largely in their technology sectors.

Wang’s speech came roughly a month after the U.S. Department of State unveiled its Clean Networks program, an initiative aimed at countering the national security risks posed by PRC technology companies, hardware, software, and apps (see here for more analysis). He even went so far as to condemn unilateral actions by one nation in particular looking to institute a “clean” networks program. Wang framed this program as aiming to blunt the PRC’s competitive advantage by playing on national security fears. The Trump Administration has sought to persuade, cajole, and lean on other nations to forgo use of Huawei equipment and services in building their next generation 5G networks, with some success.

And yet, since the Clean Networks program lacks much in the way of apparent enforcement mechanisms, the Department of State’s announcement may have had more to do with optics, as the Trump Administration and many of its Republican allies in Congress have pinned the blame for COVID-19 on the PRC and cast the country as the primary threat to the U.S. This has played out as the Trump Administration has been choking off access to advanced semiconductors and chips for PRC firms, banned TikTok and WeChat, and ordered ByteDance to sell musical.ly, the app and platform that served as the fulcrum by which TikTok was launched in the U.S.

Wang asserted the PRC “believes that to effectively address the risks and challenges to data security, the following principles must be observed:

  • First, uphold multilateralism. Pursuing extensive consultation and joint contribution for shared benefits is the right way forward for addressing the deficit in global digital governance. It is important to develop a set of international rules on data security that reflect the will and respect the interests of all countries through broad-based participation. Bent on unilateral acts, a certain country keeps making groundless accusations against others in the name of “clean” network and used security as a pretext to prey on enterprises of other countries who have a competitive edge. Such blatant acts of bullying must be opposed and rejected.
  • Second, balance security and development. Protecting data security is essential for the sound growth of digital economy. Countries have the right to protect data security according to law. That said, they are also duty-bound to provide an open, fair and non-discriminatory environment for all businesses. Protectionism in the digital domain runs counter to the laws of economic development and the trend of globalization. Protectionist practices undermine the right of global consumers to equally access digital services and will eventually hold back the country’s own development.
  • Third, ensure fairness and justice. Protection of digital security should be based on facts and the law. Politicization of security issues, double standards and slandering others violate the basic norms governing international relations, and seriously disrupt and hamper global digital cooperation and development.

Wang continued, “[i]n view of the new issues and challenges emerging in this field, China would like to propose a Global Initiative on Data Security, and looks forward to the active participation of all parties…[and] [l]et me briefly share with you the key points of our Initiative:

  • First, approach data security with an objective and rational attitude, and maintain an open, secure and stable global supply chain.
  • Second, oppose using ICT activities to impair other States’ critical infrastructure or steal important data.
  • Third, take actions to prevent and put an end to activities that infringe upon personal information, oppose abusing ICT to conduct mass surveillance against other States or engage in unauthorized collection of personal information of other States.
  • Fourth, ask companies to respect the laws of host countries, desist from coercing domestic companies into storing data generated and obtained overseas in one’s own territory.
  • Fifth, respect the sovereignty, jurisdiction and governance of data of other States, avoid asking companies or individuals to provide data located in other States without the latter’s permission.
  • Sixth, meet law enforcement needs for overseas data through judicial assistance or other appropriate channels.
  • Seventh, ICT products and services providers should not install backdoors in their products and services to illegally obtain user data.
  • Eighth, ICT companies should not seek illegitimate interests by taking advantage of users’ dependence on their products.

As mentioned in the opening paragraph of this article, the U.S. and many of its allies and partners would argue the PRC has transgressed a number of these proposed rules. However, the Foreign Ministry was very clever in how it drafted and translated these principles, for in the second key principle, the PRC is proposing that no country should use “ICT activities to impair other States’ critical infrastructure.” And yet, two international media outlets reported that the African Union’s (AU) computers were transmitting reams of sensitive data to Shanghai daily between 2012 and 2017. If this claim is true, and the PRC’s government was behind the exfiltration, is it fair to say the AU’s critical infrastructure was impaired? One could argue the infrastructure itself was not impaired even though there was apparently massive data exfiltration. Likewise, in the third key principle, the PRC appears to be condemning mass surveillance of other states, but just this week a PRC company was accused of compiling the personal information of more than 2.4 million people worldwide, many of them in influential positions such as the Prime Ministers of the United Kingdom and Australia. And yet, if this is the extent of the surveillance, it is not of the same magnitude as U.S. surveillance over the better part of the last two decades. Moreover, the PRC is not opposing a country using mass surveillance of its own people as the PRC is regularly accused of doing, especially against its Uighur minority.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Hanson Lu on Unsplash

Further Reading, Other Developments, and Coming Events (13 August)

Here are Further Reading, Other Developments, and Coming Events:

Coming Events

  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Other Developments

  • Senate Intelligence Committee Acting Chair Marco Rubio (R-FL) and Vice Chairman Mark Warner (D-VA) released a statement indicating the committee had voted to adopt the fifth and final volume of its investigation of the Russian Federation’s interference in the 2016 election. The committee had submitted the report to the Intelligence Community for vetting and has received it back with edits and redactions. The report could be released sometime over the next few weeks. Rubio and Warner stated “the Senate Intelligence Committee voted to adopt the classified version of the final volume of the Committee’s bipartisan Russia investigation. In the coming days, the Committee will work to incorporate any additional views, as well as work with the Intelligence Community to formalize a properly redacted, declassified, publicly releasable version of the Volume 5 report.” The Senate Intelligence Committee has released four previous volumes.
  • The National Institute of Standards and Technology (NIST) is accepting comments until 11 September on draft Special Publication 800-53B, “Control Baselines for Information Systems and Organizations,” a guidance document that will serve a key role in the United States government’s efforts to secure and protect the networks and systems it operates and those run by federal contractors. NIST explained:
    • This publication establishes security and privacy control baselines for federal information systems and organizations and provides tailoring guidance for those baselines. The use of the security control baselines is mandatory, in accordance with OMB Circular A-130 [OMB A-130] and the provisions of the Federal Information Security Modernization Act [FISMA], which requires the implementation of a set of minimum controls to protect federal information and information systems. Whereas use of the privacy control baseline is not mandated by law or [OMB A-130], SP 800-53B, along with other supporting NIST publications, is designed to help organizations identify the security and privacy controls needed to manage risk and satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974 [PRIVACT], selected OMB policies (e.g., [OMB A-130]), and designated Federal Information Processing Standards (FIPS), among others.
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released an “Election Vulnerability Reporting Guide” to provide “election administrators with a step-by-step guide, list of resources, and a template for establishing a successful vulnerability disclosure program to address possible vulnerabilities in their state and local election systems…[and] [t]he six steps include:
    • Step 1: Identify Systems Where You Would Accept Security Testing, and those Off-Limits
    • Step 2: Draft an Easy-to-Read Vulnerability Disclosure Policy (See Appendix III)
    • Step 3: Establish a Way to Receive Reports/Conduct Follow-On Communication
    • Step 4: Assign Someone to Thank and Communicate with Researchers
    • Step 5: Assign Someone to Vet and Fix the Vulnerabilities
    • Step 6: Consider Sharing Information with Other Affected Parties
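Steps 2 and 3 above are often operationalized by publishing a machine-readable security.txt file at a well-known URL on the election office’s website, a convention described in the security.txt draft specification (later RFC 9116). The following is a hypothetical sketch, not taken from CISA’s guide; the domain, contact address, and policy URL are placeholders:

```text
# Served at https://elections.example.gov/.well-known/security.txt
# Tells security researchers where to send vulnerability reports (Step 3)
Contact: mailto:security@elections.example.gov
# Links to the published vulnerability disclosure policy (Step 2)
Policy: https://elections.example.gov/vulnerability-disclosure-policy
Preferred-Languages: en
Canonical: https://elections.example.gov/.well-known/security.txt
```

A file like this gives researchers an unambiguous reporting channel before they find a vulnerability, which is the point of CISA’s Step 3.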
  • The United Kingdom’s Information Commissioner’s Office (ICO) has issued “Guidance on AI and data protection” that “clarifies how you can assess the risks to rights and freedoms that AI can pose from a data protection perspective; and the appropriate measures you can implement to mitigate them.” The ICO explained “[w]hile data protection and ‘AI ethics’ overlap, this guidance does not provide generic ethical or design principles for your use of AI.” The ICO stated “[i]t corresponds to data protection principles, and is structured as follows:
    • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
    • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
    • part three addresses data minimisation and security; and
    • part four covers compliance with individual rights, including rights related to automated decision-making.
  • 20 state attorneys general wrote Facebook Chief Executive Officer Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg “to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination.” They also asked “that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.” The attorneys general said that “[b]ased on our collective experience, we believe that Facebook should take additional actions including the following steps—many of which are highlighted in Facebook’s recent Civil Rights Audit—to strengthen its commitment to civil rights and fighting disinformation and discrimination:
    • Aggressively enforce Facebook policies against hate speech and organized hate organizations: Although Facebook has developed policies against hate speech and organizations that peddle it, we remain concerned that Facebook’s policies on Dangerous Individuals and Organizations, including but not limited to its policies on white nationalist and white supremacist content, are not enforced quickly and comprehensively enough. Content that violates Facebook’s own policies too often escapes removal just because it comes as coded language, rather than specific magic words. And even where Facebook takes steps to address a particular violation, it often fails to proactively address the follow-on actions by replacement or splinter groups that quickly emerge.
    • Allow public, third-party audits of hate content and enforcement: To gauge the ongoing progress of Facebook’s enforcement efforts, independent experts should be permitted access to the data necessary to conduct regular, transparent third-party audits of hate and hate-related misinformation on the platform, including any information made available to the Global Oversight Board. As part of this effort, Facebook should capture data on the prevalence of different forms of hate content on the platform, whether or not covered by Facebook’s own community standards, thus allowing the public to determine whether enforcement of anti-hate policies differs based on the type of hate content at issue.
    • Commit to an ongoing, independent analysis of Facebook’s content population scheme and the prompt development of best practices guidance: By funneling users toward particular types of content, Facebook’s content population scheme, including its algorithms, can push users into extremist online communities that feature divisive and inflammatory messages, often directed at particular groups. Although Facebook has conducted research and considered programs to reduce this risk, there is still no mandatory guidance for coders and other teams involved in content population. Facebook should commit to an ongoing, independent analysis of its content population scheme, including its algorithms, and also continuously implement mandatory protocols as best practices are identified to curb bias and prevent recommendations of hate content and groups.
    • Expand policies limiting inflammatory advertisements that vilify minority groups: Although Facebook currently prohibits ads that claim that certain people, because of their membership in a protected group, pose a threat to the physical safety of communities or the nation, its policies still allow attacks that characterize such groups as threats to national culture or values. The current prohibition should be expanded to include such ads.
  • New Zealand’s Ministry of Statistics “launched the Algorithm Charter for Aotearoa New Zealand” that “signals that [the nation’s agencies] are committed to being consistent, transparent and accountable in their use of algorithms.”
    • The Ministry explained “[t]he Algorithm Charter is part of a wider ecosystem and works together with existing tools, networks and research, including:
      • Principles for the Safe and Effective Use of Data and Analytics (Privacy Commissioner and Government Chief Data Steward, 2018)
      • Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation and Otago University, 2019)
      • Trustworthy AI in Aotearoa – AI Principles (AI Forum New Zealand, 2020)
      • Open Government Partnership, an international agreement to increase transparency.
      • Data Protection and Use Policy (Social Wellbeing Agency, 2020)
      • Privacy, Human Rights and Ethics Framework (Ministry of Social Development).
  • The European Union (EU) imposed its first cyber sanctions under its Framework for a Joint EU Diplomatic Response to Malicious Cyber Activities (aka the cyber diplomacy toolbox) against six hackers and three entities from the Russian Federation, the People’s Republic of China (PRC) and the Democratic People’s Republic of Korea for attacks against the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands, the malware attacks known as NotPetya and WannaCry, and Operation Cloud Hopper. The EU’s cyber sanctions follow sanctions the United States has placed on a number of people and entities from the same nations and also indictments the U.S. Department of Justice has announced over the years. The sanctions are part of the effort to levy costs on nations and actors that conduct cyber attacks. The EU explained:
    • The attempted cyber-attack was aimed at hacking into the Wi-Fi network of the OPCW, which, if successful, would have compromised the security of the network and the OPCW’s ongoing investigatory work. The Netherlands Defence Intelligence and Security Service (DISS) (Militaire Inlichtingen- en Veiligheidsdienst – MIVD) disrupted the attempted cyber-attack, thereby preventing serious damage to the OPCW.
    • “WannaCry” disrupted information systems around the world by targeting information systems with ransomware and blocking access to data. It affected information systems of companies in the Union, including information systems relating to services necessary for the maintenance of essential services and economic activities within Member States.
    • “NotPetya” or “EternalPetya” rendered data inaccessible in a number of companies in the Union, wider Europe and worldwide, by targeting computers with ransomware and blocking access to data, resulting amongst others in significant economic loss. The cyber-attack on a Ukrainian power grid resulted in parts of it being switched off during winter.
    • “Operation Cloud Hopper” has targeted information systems of multinational companies in six continents, including companies located in the Union, and gained unauthorised access to commercially sensitive data, resulting in significant economic loss.
  • The United States’ Federal Communications Commission (FCC) is asking for comments on the Department of Commerce’s National Telecommunications and Information Administration’s (NTIA) petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic. Comments are due by 2 September.
  • The Australian Competition & Consumer Commission (ACCC) released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks on a voluntary code between the platforms and Australian news media businesses broke down.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses” according to the agency’s press release. In its initial filing, the ACCC is claiming that Google misled and deceived the public in contravention of the Australian Consumer Law and Android users were harmed because those that switched off Location Services were unaware that their location information was still being collected and used by Google, for it was not readily apparent that Web & App Activity also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released “core guidance documentation for the Trusted Internet Connections (TIC) program, developed to assist agencies in protecting modern information technology architectures and services.” CISA explained “In accordance with the Office of Management and Budget (OMB) Memorandum (M) 19-26: Update to the TIC Initiative, TIC 3.0 expands on the original initiative to drive security standards and leverage advances in technology to secure a wide spectrum of agency network architectures.” Specifically, CISA released three core guidance documents:
    • Program Guidebook (Volume 1) – Outlines the modernized TIC program and includes its historical context
    • Reference Architecture (Volume 2) – Defines the concepts of the program to guide and constrain the diverse implementations of the security capabilities
  • Senators Ron Wyden (D-OR), Bill Cassidy (R-LA) and ten other Members wrote the Federal Trade Commission (FTC) urging the agency “to investigate widespread privacy violations by companies in the advertising technology (adtech) industry that are selling private data about millions of Americans, collected without their knowledge or consent from their phones, computers, and smart TVs.” They asked the FTC “to use its authority to conduct broad industry probes under Section 6(b) of the FTC Act to determine whether adtech companies and their data broker partners have violated federal laws prohibiting unfair and deceptive business practices.” They argued “[t]he FTC should not proceed with its review of the Children’s Online Privacy Protection Act (COPPA) Rule before it has completed this investigation.”
  • “100 U.S. women lawmakers and current and former legislators from around the world,” including Speaker of the House Nancy Pelosi (D-CA), sent a letter to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg urging the company “to take decisive action to protect women from rampant and increasing online attacks on their platform that have caused many women to avoid or abandon careers in politics and public service.” They noted “[j]ust a few days ago, a manipulated and widely shared video that depicted Speaker Pelosi slurring her speech was once again circulating on major social media platforms, gaining countless views before TikTok, Twitter, and YouTube all removed the footage…[and] [t]he video remains on Facebook and is labeled ‘partly false,’ continuing to gain millions of views.” The current and former legislators “called on Facebook to enforce existing rules, including:
    • Quick removal of posts that threaten candidates with physical violence, sexual violence or death, and that glorify, incite or praise violence against women; disable the relevant accounts, and refer offenders to law enforcement.
    • Eliminate malicious hate speech targeting women, including violent, objectifying or dehumanizing speech, statements of inferiority, and derogatory sexual terms;
    • Remove accounts that repeatedly violate terms of service by threatening, harassing or doxing or that use false identities to attack women leaders and candidates; and
    • Remove manipulated images or videos misrepresenting women public figures.
  • The United States’ Departments of Commerce and Homeland Security released an update “highlighting more than 50 activities led by industry and government that demonstrate progress in the drive to counter botnet threats.” In May 2018, the agencies submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats” that identified a number of steps and prompted a follow-on “A Road Map Toward Resilience Against Botnets” released in November 2018.
  • United States (U.S.) Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders released a joint statement explaining that “[t]he U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case.”
    • Maximilian Schrems filed a complaint against Facebook with Ireland’s Data Protection Commission (DPC) in 2013, alleging that the company’s transfer of his personal data violated his rights under European Union law because of the mass U.S. surveillance revealed by former National Security Agency (NSA) contractor Edward Snowden. Ultimately, this case resulted in a 2015 Court of Justice of the European Union (CJEU) ruling that invalidated the Safe Harbor agreement under which the personal data of EU residents was transferred to the US by commercial concerns. The EU and US executed a follow-on agreement, the EU-U.S. Privacy Shield, that was designed to address some of the problems the CJEU turned up, and the U.S. passed a law, the “Judicial Redress Act of 2015” (P.L. 114-126), to provide EU citizens a way to exercise their EU rights in US courts via the “Privacy Act of 1974.”
    • However, Schrems continued and soon sought to challenge the legality of the European Commission’s signing off on the Privacy Shield agreement, the adequacy decision issued in 2016, and also the use of standard contractual clauses (SCC) by companies for the transfer of personal data to the US. The CJEU struck down the adequacy decision, throwing into doubt many entities’ transfers out of the EU into the U.S. but upheld SCCs in a way that suggested EU data protection authorities (DPA) may need to review all such agreements to ensure they comply with EU law.
  • The European Commission (EC) announced an “an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The EC voiced its concern “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC detailed its “preliminary competition concerns:
    • Following its first phase investigation, the Commission has concerns about the impact of the transaction on the supply of online search and display advertising services (the sale of advertising space on, respectively, the result page of an internet search engine or other internet pages), as well as on the supply of ”ad tech” services (analytics and digital tools used to facilitate the programmatic sale and purchase of digital advertising). By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to Fitbit’s one.
    • The data collected via wrist-worn wearable devices appears, at this stage of the Commission’s review of the transaction, to be an important advantage in the online advertising markets. By increasing the data advantage of Google in the personalisation of the ads it serves via its search engine and displays on other internet pages, it would be more difficult for rivals to match Google’s online advertising services. Thus, the transaction would raise barriers to entry and expansion for Google’s competitors for these services, to the ultimate detriment of advertisers and publishers that would face higher prices and have less choice.
    • At this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The Commission will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed.
    • In addition, the Commission will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • In February, after the deal had been announced, the European Data Protection Board (EDPB) made clear its position that Google and Fitbit would need to scrupulously observe the General Data Protection Regulation’s privacy and data security requirements if the body were to sign off on the proposed $2.2 billion acquisition. Moreover, at the time Google had not informed European Union (EU) regulators of the proposed deal. The deal comes at a time when both EU and U.S. regulators are already investigating Google for alleged antitrust and anticompetitive practices, and the EDPB’s opinion could carry weight in this process.
  • The United States’ (U.S.) Department of Homeland Security released a Privacy Impact Assessment for the U.S. Border Patrol (USBP) Digital Forensics Programs that details how it may conduct searches of electronic devices at the U.S. border and ports of entry. DHS explained:
    • As part of USBP’s law enforcement duties, USBP may search and extract information from electronic devices, including: laptop computers; thumb drives; compact disks; digital versatile disks (DVDs); mobile phones; subscriber identity module (SIM) cards; digital cameras; vehicles; and other devices capable of storing electronic information.
    • Last year, a U.S. District Court held that U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement’s (ICE) current practices for searches of smartphones and computers at the U.S. border are unconstitutional and the agencies must have reasonable suspicion before conducting such a search. However, the Court declined the plaintiffs’ request that the information taken off of their devices be expunged by the agencies. This ruling follows a Department of Homeland Security Office of the Inspector General (OIG) report that found CBP “did not always conduct searches of electronic devices at U.S. ports of entry according to its Standard Operating Procedures” and asserted that “[t]hese deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit [CBP’s] ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.”
    • In terms of a legal backdrop, the United States Supreme Court has found that searches and seizures of electronic devices at borders and airports are subject to lesser legal standards than those conducted elsewhere in the U.S. under most circumstances. Generally, the government’s interest in securing the border against the flow of contraband and people not allowed to enter allow considerable leeway to the warrant requirements for many other types of searches. However, in recent years two federal appeals courts (the Fourth and Ninth Circuits) have held that searches of electronic devices require suspicion on the part of government agents while another appeals court (the Eleventh Circuit) held differently. Consequently, there is not a uniform legal standard for these searches.
  • The Inter-American Development Bank (IDB) and the Organization of American States (OAS) released their second assessment of cybersecurity across Latin America and the Caribbean that used the Cybersecurity Capacity Maturity Model for Nations (CMM) developed at University of Oxford’s Global Cyber Security Capacity Centre (GSCC). The IDB and OAS explained:
    • When the first edition of the report “Cybersecurity: Are We Ready in Latin America and the Caribbean?” was released in March 2016, the IDB and the OAS aimed to provide the countries of Latin America and the Caribbean (LAC) not only with a picture of the state of cybersecurity but also guidance about the next steps that should be pursued to strengthen national cybersecurity capacities. This was the first study of its kind, presenting the state of cybersecurity with a comprehensive vision and covering all LAC countries.
    • The great challenges of cybersecurity, like those of the internet itself, are of a global nature. Therefore, it is undeniable that the countries of LAC must continue to foster greater cooperation among themselves, while involving all relevant actors, as well as establishing a mechanism for monitoring, analysis, and impact assessment related to cybersecurity both nationally and regionally. More data in relation to cybersecurity would allow for the introduction of a culture of cyberrisk management that needs to be extended both in the public and private sectors. Countries must be prepared to adapt quickly to the dynamic environment around us and make decisions based on a constantly changing threat landscape. Our member states may manage these risks by understanding the impact on and the likelihood of cyberthreats to their citizens, organizations, and national critical infrastructure. Moving to the next level of maturity will require a comprehensive and sustainable cybersecurity policy, supported by the country’s political agenda, with allocation of financial resources and qualified human capital to carry it out.
    • The COVID-19 pandemic will pass, but events that will require intensive use of digital technologies so that the world can carry on will continue happening. The challenge of protecting our digital space will, therefore, continue to grow. It is the hope of the IDB and the OAS that this edition of the report will help LAC countries to have a better understanding of their current state of cybersecurity capacity and be useful in the design of the policy initiatives that will lead them to increase their level of cyberresilience.
  • The European Data Protection Supervisor (EDPS) issued an opinion on “the European Commission’s action plan for a comprehensive Union policy on preventing money laundering and terrorism financing (C(2020)2800 final), published on 7 May 2020.” The EDPS asserted:
    • While the EDPS acknowledges the importance of the fight against money laundering and terrorism financing as an objective of general interest, we call for the legislation to strike a balance between the interference with the fundamental rights of privacy and personal data protection and the measures that are necessary to effectively achieve the general interest goals on anti-money laundering and countering the financing of terrorism (AML/CFT) (the principle of proportionality).
    • The EDPS recommends that the Commission monitors the effective implementation of the existing AML/CFT framework while ensuring that the GDPR and the data protection framework are respected and complied with. This is particularly relevant for the works on the interconnection of central bank account mechanisms and beneficial ownership registers that should be largely inspired by the principles of data minimisation, accuracy and privacy-by-design and by default.

Further Reading

  • “China already has your data. Trump’s TikTok and WeChat bans can’t stop that.” By Aynne Kokas – The Washington Post. This article persuasively makes the case that even if a ban on TikTok and WeChat were to work, and there are substantive questions as to how a ban would work given how widely the former has been downloaded, the People’s Republic of China (PRC) is almost certainly acquiring massive reams of data on Americans through a variety of apps, platforms, and games. For example, Tencent, owner of WeChat, has a 40% stake in Epic Games, maker of Fortnite, a massively popular multiplayer game (if you have never heard of it, ask one of the children in your family). Moreover, a recent change to PRC law mandates that companies operating in the PRC share their databases for cybersecurity reviews, which may provide an opportunity, aside from hacking United States entities and exfiltrating data, to access data. In summation, if the Trump Administration is serious about stopping the flow of data from the U.S. to the PRC, these executive orders will do very little.
  • “Big Tech Makes Inroads With the Biden Campaign” by David McCabe and Kenneth P. Vogel – The New York Times. Most likely long before former Vice President Joe Biden clinched the Democratic nomination, advisers volunteered to help plot out his policy positions, a process that intensified this year. Of course, this includes technology policy, and many of those volunteering for the campaign’s Innovation Policy Committee have worked or are working for large technology companies directly or as consultants or lobbyists. This piece details some of these people and their relationships and how the Biden campaign is managing possible conflicts of interest. Naturally, those on the left wing of the Democratic Party calling for tighter antitrust, competition, and privacy regulation are concerned that Biden might be pulled away from these positions despite his public statements arguing that the United States government needs to get tougher with some practices.
  • “A Bible Burning, a Russian News Agency and a Story Too Good to Check Out” By Matthew Rosenberg and Julian E. Barnes – The New York Times. The Russian Federation seems to be using, with some success, a new tactic for sowing discord in the United States: the information equivalent of throwing fuel onto a fire. In this case, a Russian outlet created a fake story amplifying an actual event, and it went viral after being seized on by some prominent Republicans, in part because it fit their preferred world view of protestors. We will likely see more of this, and it is not confined to fake stories intended to appeal to the right; the same is happening with content meant for the left wing in the United States.
  • “Facebook cracks down on political content disguised as local news” by Sara Fischer – Axios. As part of its continuing effort to crack down on violations of its policies, Facebook will no longer allow groups with a political viewpoint to masquerade as news. The company and outside experts have identified a range of instances where groups propagating a viewpoint, as opposed to reporting, have used a Facebook exemption by pretending to be local news outlets.
  • “QAnon groups have millions of members on Facebook, documents show” By Ari Sen and Brandy Zadrozny – NBC News. It appears as if some Facebook employees are leaking the results of an internal investigation that identified more than 1 million users who are part of QAnon groups. Most likely these employees want the company to take a stronger stance on the QAnon conspiracy movement as it has with COVID-19 lies and misinformation.
  • And, since Senator Kamala Harris (D-CA) was named former Vice President Joe Biden’s (D-DE) vice presidential pick, this article has become even more relevant than when I highlighted it in late July: “New Emails Reveal Warm Relationship Between Kamala Harris And Big Tech” – HuffPost. Obtained via a Freedom of Information request, new emails from Senator Kamala Harris’ (D-CA) tenure as her state’s attorney general suggest she was willing to overlook the role Facebook, Google, and others played and still play in one of her signature issues: revenge porn. This article makes the case that Harris came down hard on a scammer running a revenge porn site but did not press the tech giants with any vigor to take down such material from their platforms. Consequently, the case is made that Harris as Biden’s running mate would signal a go-easy approach on large companies even though many Democrats have been calling to break up these companies and vigorously enforce antitrust laws. Harris has largely not engaged on tech issues during her tenure in the Senate. To be fair, many of these companies are headquartered in California and pump billions of dollars into the state’s economy annually, putting Harris in a tricky position politically. Of course, such pieces should be taken with a grain of salt since this one may have been suggested or planted by one of Harris’ rivals for the vice presidential nomination or someone looking to settle a score.
  • Unwanted Truths: Inside Trump’s Battles With U.S. Intelligence Agencies” by Robert Draper – The New York Times. A deeply sourced article on the outright antipathy between President Donald Trump and Intelligence Community officials, particularly over the issue of how deeply Russia interfered in the election in 2016. A number of former officials have been fired or forced out because they refused to knuckle under to the White House’s desire to soften or massage conclusions of Russia’s past and current actions to undermine the 2020 election in order to favor Trump.
  • Huawei says it’s running out of chips for its smartphones because of US sanctions” By Kim Lyons – The Verge and “Huawei: Smartphone chips running out under US sanctions” by Joe McDonald – The Associated Press. United States (U.S.) sanctions have started biting the Chinese technology company Huawei, which announced it will likely run out of processor chips for its smartphones. U.S. sanctions bar any company from selling high technology items like processors to Huawei, and this capability is not independently available in the People’s Republic of China (PRC) at present.
  • “Targeting WeChat, Trump Takes Aim at China’s Bridge to the World” By Paul Mozur and Raymond Zhong – The New York Times. This piece explains WeChat, the app the Trump Administration is trying to ban in the United States (U.S.) without any warning. It is like a combination of Facebook, WhatsApp, a news app, and a payment platform and is used by more than 1.2 billion people.
  • “This Tool Could Protect Your Photos From Facial Recognition” By Kashmir Hill – The New York Times. Researchers at the University of Chicago have found a method of subtly altering photos of people that appears to foil most facial recognition technologies. However, a number of experts interviewed said it is too late to stop companies like Clearview AI.
  • “I Tried to Live Without the Tech Giants. It Was Impossible.” By Kashmir Hill – The New York Times. This New York Times reporter tried living without the products of large technology companies, which involved some fairly obvious challenges and some that were not so obvious. Of course, it was hard for her to skip Facebook, Instagram, and the like, but cutting out Google and Amazon proved hardest and basically impossible because of the latter’s cloud presence and the former’s web presence. The fact that some of the companies cannot be avoided if one wants to be online likely lends weight to those making the case that these companies are anti-competitive.
  • To Head Off Regulators, Google Makes Certain Words Taboo” by Adrianne Jeffries – The Markup. Apparently, in what is a standard practice at large companies, employees at Google were coached to avoid using certain terms or phrases that antitrust regulators would take notice of such as: “market,” “barriers to entry,” and “network effects.” The Markup obtained a 16 August 2019 document titled “Five Rules of Thumb For Written Communications” that starts by asserting “[w]ords matter…[e]specially in antitrust laws” and goes on to advise Google’s employees:
    • We’re out to help users, not hurt competitors.
    • Our users should always be free to switch, and we don’t lock anyone in.
    • We’ve got lots of competitors, so don’t assume we control or dominate any market.
    • Don’t try and define a market or estimate our market share.
    • Assume every document you generate, including email, will be seen by regulators.
  • “Facebook Fired An Employee Who Collected Evidence Of Right-Wing Pages Getting Preferential Treatment” By Craig Silverman and Ryan Mac – BuzzFeed News. A Facebook engineer was fired after adducing proof in an internal communications system that the social media platform is more willing to change false and negative ratings on claims made by conservative outlets and personalities than on those from any other viewpoint. If true, this would be the opposite of the narrative spun by the Trump Administration and many Republicans in Congress. Moreover, Facebook’s incentives would seem to align with giving conservatives preferential treatment: many of these websites advertise on Facebook; the company probably does not want to get crosswise with the Administration; sensational posts and content drive engagement, which increases user numbers and allows for higher ad rates; and it wants to appear fair and impartial.
  • “How Pro-Trump Forces Work the Refs in Silicon Valley” By Ben Smith – The New York Times. This piece traces the nearly four-decade-old effort by Republicans to sway mainstream media, and now Silicon Valley, to their viewpoint.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo credit: Gerd Altmann on Pixabay

CCPA Regulations Finalized

Final CCPA regulations submitted, but it is not clear if they will be approved by 1 July as required by the statute. However, if a follow-on ballot initiative becomes law, these regulations could be moot or greatly changed.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

The Office of California Attorney General Xavier Becerra has submitted its finalized regulations to implement the “California Consumer Privacy Act” (CCPA) (AB 375) to the Office of Administrative Law (OAL), typically the last step in the regulatory process in California. The Office of the Attorney General (OAG) is requesting expedited review so the regulations may become effective on 1 July as required by the CCPA. However, the OAL has 30 days under California law to review regulations to ensure compliance with California’s Administrative Procedure Act (APA). Moreover, under Executive Order N-40-20, issued in response to the COVID-19 pandemic, the OAL has been given an additional 60 days beyond the 30 statutory days to review regulations, so it is possible the CCPA regulations will not be effective on 1 July. In fact, it could be three months before they take effect, meaning early September.

With respect to the substance, the final regulations are very similar to the third round of regulations circulated for comment in March, in part in response to legislation passed and signed into law last fall that modified the CCPA. The OAG released other documents along with the finalized regulations.

For further reading on the third round of proposed CCPA regulations, see this issue of the Technology Policy Update, for the second round, see here, and for the first round, see here. Additionally, to read more on the legislation signed into law last fall, modifying the CCPA, see this issue.

Moreover, Californians for Consumer Privacy have submitted the “California Privacy Rights Act” (CPRA) for the November 2020 ballot. This follow-on statute to the CCPA could again force the legislature into making a deal that would revamp privacy laws in California, as happened when the CCPA was added to the ballot in 2018. It is also possible this statute remains on the ballot and is added to California’s laws. In either case, much of the CCPA and its regulations may be moot or in effect for only the few years it takes for a new privacy regulatory structure to be established as laid out in the CPRA. See here for more detail.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Odds and Ends (14 April)

Every week, not surprisingly, there are more developments in the technology space than I can reasonably get to. And so, this week, at least, I’ve decided to include some of the odds and ends.

To no great surprise, federal and state elected officials have been questioning Zoom on its security and privacy practices and demanding improvements thereof.

Earlier this month, Senator Michael Bennet (D-CO) sent a letter after The Washington Post found that thousands of Zoom calls containing people’s sensitive personal information, such as therapy sessions and financial information, could be accessed online. The culprit is apparently Zoom’s practice of using an identical name format for each video, meaning once someone knows the format, they can look up many videos. Security experts recommend that platforms like Zoom give each file a unique, unguessable name to avoid this outcome.
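Zoom’s actual storage scheme is not detailed here, but the enumeration risk the experts describe can be sketched generically: if a file name is derived only from guessable metadata, anyone who learns the pattern can search for recordings, whereas appending a high-entropy random token makes names effectively unguessable. The function names below are hypothetical illustrations, not Zoom’s code.

```python
import secrets

def predictable_name(meeting_topic: str) -> str:
    # Risky convention: every recording's name follows the same pattern,
    # so knowing the pattern lets an outsider enumerate candidate files.
    return meeting_topic.replace(" ", "_") + ".mp4"

def unguessable_name(meeting_topic: str) -> str:
    # Safer convention: a ~128-bit random token means the name cannot be
    # reconstructed from knowable metadata alone.
    token = secrets.token_urlsafe(16)
    return f"{meeting_topic.replace(' ', '_')}-{token}.mp4"
```

With the first convention, every “Weekly Standup” recording is named `Weekly_Standup.mp4`; with the second, each gets a distinct suffix that a searcher cannot predict.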

With these revelations in mind, Bennet wrote Zoom CEO Eric Yuan, asking him to “provide answers to the following questions no later than April 15, 2020”:

  • Please describe all data that Zoom collects from users with and without accounts and please specify how long Zoom retains this data. 
  • Please list every third party and service provider with which Zoom shares user data and for what purposes and level of compensation, if any.
  • Will Zoom require participants to provide affirmative consent if their calls are being recorded or will later be uploaded to the cloud or transcribed? When recorded calls are uploaded and transcribed, will Zoom provide all participants a copy along with an opportunity to correct errors in the recording?
  • Does Zoom plan to change the naming convention that allowed thousands of videos to become easily searchable online?
  • What steps has Zoom taken to notify users featured in videos that are now searchable online? And when users wish for these videos to be removed, what steps will Zoom take to do so, for example, by engaging the third parties where the videos are now viewable?
  • Which privacy settings for users with and without accounts are activated by default, and which require them to opt-in? Does Zoom plan to expand its default privacy settings?
  • What dedicated staff and other resources is Zoom devoting to ensure the privacy and safety of users on its platform?

Bennet was also quoted in a Politico article along with other Democratic Members calling for the Federal Trade Commission (FTC) to open an investigation. House Energy and Commerce Chair Frank Pallone Jr (D-NJ) and Consumer Protection & Commerce Subcommittee Chair Jan Schakowsky (D-IL) were both quoted as being in support of the FTC investigating. Senators Amy Klobuchar (D-MN) and Sherrod Brown (D-OH) are also requesting that the agency investigate Zoom’s claims on security and privacy as promised versus what the company is actually providing. Brown sent letters to Zoom and the FTC on this matter.

Moreover, the Politico article relates that, in blessing Zoom for Government from a security standpoint, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the General Services Administration’s Federal Risk and Authorization Management Program explained in a statement:

We advise federal government users to not initiate video conferences using Zoom’s free/commercial offering, but instead to use Zoom for Government

More recently, Senators Elizabeth Warren (D-MA) and Ed Markey (D-MA) asked Zoom how well it is protecting the personal data of students under the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Act (COPPA). If the FTC were to find COPPA violations, the company would be facing as much as $42,530 per violation.

Markey wrote the FTC separately, urging the agency “to issue guidance and provide a comprehensive resource for technology companies that are developing or expanding online conferencing tools during the coronavirus pandemic, so that these businesses can strengthen their cybersecurity and protect customer privacy.” He argued that “[a]t a minimum, this guidance should cover topics including:

  • Implementing secure authentication and other safeguards against unauthorized access;
  • Enacting limits on data collection and recording;
  • Employing encryption and other security protocols for securing data; and
  • Providing clear and conspicuous privacy policies for users.

Markey also “request[ed] that the FTC develop best practices for users of online conferencing software, so that individuals can make informed, safe decisions when choosing and utilizing these technologies. At a minimum, this guidance should cover topics including:

  • Identifying and preventing cyber threats such as phishing and malware;
  • Sharing links to online meetings without compromising security;
  • Restricting access to meetings via software settings; and
  • Recognizing that different versions of a company’s service may provide varying levels of privacy protection.

Many of the Democrats on the House Energy and Commerce Committee also asked Zoom about its recent update to privacy policies made after some of its substandard practices came to light. These Members stated:

“Despite Zoom’s recent clarifications to its privacy policy, a review of Zoom’s privacy policy shows that Zoom may still collect a significant amount of information about both registered and non-registered users from their use of the platform as well as from third parties. Zoom may use that information for a broad range of purposes, including for targeted marketing from both Zoom and third parties… As consumers turn to Zoom for business meetings, remote consultations with psychologists, or even virtual happy hours with friends, they may not expect Zoom to be collecting and using so much of their information.”

Moreover, federal agency Chief Information Officers are formally and informally directing agency employees not to use the commercial/free edition of Zoom as detailed by Federal News Network.

Last week, CISA and the United Kingdom’s National Cyber Security Centre (NCSC) released a joint advisory titled “COVID-19 exploited by malicious cyber actors.” The two agencies argued:

Malicious cyber actors are using the high appetite for COVID-19 related information as an opportunity to deliver malware and ransomware and to steal user credentials. Individuals and organisations should remain vigilant.

CISA and NCSC noted “[t]hreats observed include:

  • Phishing, using the subject of coronavirus or COVID-19 as a lure
  • Malware distribution using coronavirus or COVID-19 themed lures
  • Registration of new domain names containing coronavirus or COVID-19 related wording
  • Attacks against newly (and often rapidly) deployed remote access or remote working infrastructure.

The agencies added they “are working with law enforcement and industry partners to disrupt or prevent these malicious COVID-19 themed cyber activities.”
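The themed-domain lure noted in the advisory is the kind of signal defenders can screen for mechanically in new registration feeds. As a minimal, hypothetical sketch (the keyword pattern and function names are illustrative, not from the advisory):

```python
import re

# Match common pandemic-themed wording, including hyphenated variants.
SUSPICIOUS = re.compile(r"covid[-_]?19|corona[-_]?virus", re.IGNORECASE)

def flag_domains(new_registrations):
    """Return the domain names that contain themed lure wording."""
    return [d for d in new_registrations if SUSPICIOUS.search(d)]

domains = [
    "covid19-relief-funds.com",   # themed
    "example.org",                # benign
    "corona-virus-map.net",       # themed, hyphenated
    "coronavirusupdate.info",     # themed
]
print(flag_domains(domains))
# → ['covid19-relief-funds.com', 'corona-virus-map.net', 'coronavirusupdate.info']
```

A real pipeline would treat such matches only as a triage signal, since plenty of legitimate public-health sites use the same words.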

The Electronic Privacy Information Center (EPIC) sent the FTC a letter, renewing the concerns it detailed about Zoom’s security practices in its complaint last year asking the agency to open an investigation. EPIC stated “[w]e asked you to open an investigation, to compel Zoom to fix the security flaws with its conferencing services, and to investigate the other companies engaged in similar practices.” The organization added that “[w]e anticipated that the FTC, with a staff of more than 1,000 (EPIC has about a dozen people), would find many problems we missed…[t]hat would lead to a change in business practices, a consent order, and 20 years of agency oversight.”

Separately, the FTC and the Federal Communications Commission (FCC) sent joint letters “to three companies providing Voice over Internet Protocol (VoIP) services, warning them that routing and transmitting illegal robocalls, including Coronavirus-related scam calls, is illegal and may lead to federal law enforcement against them.” The FTC and FCC “sent a separate letter to USTelecom – The Broadband Association (USTelecom), a trade association that represents U.S.-based telecommunications-related businesses…thank[ing] USTelecom for identifying and mitigating fraudulent robocalls that are taking advantage of the Coronavirus national health crisis, and notes that the USTelecom Industry Traceback Group has helped identify various entities that appear to be responsible for originating or transmitting Coronavirus-related scam robocalls.”

The FCC also denied “an emergency petition requesting an investigation into broadcasters that have aired the President of the United States’ statements and press conferences regarding the novel coronavirus (COVID-19) and related commentary by other on-air personalities” that Free Press filed. The FCC claimed “the Petition misconstrues the Commission’s rules and seeks remedies that would dangerously curtail the freedom of the press embodied in the First Amendment.” In its press release, the FCC added “[t]he decision also makes clear that the FCC will neither act as a roving arbiter of broadcasters’ editorial judgments nor discourage them from airing breaking news events involving government officials in the midst of the current global pandemic.”

Markey and Senator Richard Blumenthal (D-CT) sent a letter “to Google requesting information about the company’s recently announced COVID-19 Community Mobility Reports.” They asked Google to answer the following questions:

  • Does Google plan to share with any government entities, researchers, or private sector partners any users’ coronavirus-related personal data or pseudonymous information?
  • Does Google plan to use datasets other than Location History for its Community Mobility Reports?
  • What measures has Google undertaken to ensure that the trends detailed in the reports are representative of the entire population of an area, including non-Google users, those without smartphones, or individuals that have opted out of Location History?
  • Does Google expect the Community Mobility Reports to be accurate for more rural or less connected communities?
  • What guidance has Google provided to public health officials about how to interpret the reports, including how Google accounts for common social patterns and categorizes locations?

Blumenthal also joined Senator Mark Warner (D-VA) and Representative Anna Eshoo (D-CA) in sending “a letter to White House Senior Advisor Jared Kushner, raising questions about reports that the White House has assembled technology and health care firms to establish a far-reaching national coronavirus surveillance system.” They stated their “fear that – absent a clear commitment and improvements to our health privacy laws – these extraordinary measures could undermine the confidentiality and security of our health information and become the new status quo.”

Warner, Eshoo, and Blumenthal argued

Given reports indicating that the Administration has solicited help from companies with checkered histories in protecting user privacy, we have serious concerns that these public health surveillance systems may serve as beachheads for far-reaching health data collection efforts that go beyond responding to the current crisis. Public health surveillance efforts must be accompanied by governance measures that provide durable privacy protections and account for any impacts on our rights. For instance, secondary uses of public health surveillance data beyond coordinating our public health response should be strictly restricted. Any secondary usage for commercial purposes should be explicitly prohibited unless authorized on a limited basis with appropriate administrative process and public input. 

They asked that Kushner answer these questions:

  1. Which technology companies, data providers, and other companies have you approached to participate in the public health surveillance initiative and on what basis were they chosen?
  2. What measures will the Administration put into place to ensure that federal agencies and private sector partners do not misuse or reuse health data for non-pandemic-related purposes, including for training commercial algorithmic decision-making systems, and to require the disposal of data after the sunset of the national emergency? What additional steps have you taken to protect health data from their potential misuse or mishandling?
  3. What is the program described in the press meant to accomplish? Will it be used for the allocation of resources, symptom tracking, or contact tracing? What agency will be operating the program and which agencies will have access to the data? 
  4. When will the federal government stop collecting and sharing health data with the private sector for the public health surveillance initiative? Will the Administration commit to a sunset period after the lifting of the national emergency?
  5. What measures will the Administration put into place to ensure that the public health surveillance initiative protects against misuse of sensitive information and mitigates discriminatory outcomes, such as on the basis of racial identity, sexual orientation, disability status, and income?
  6. Will the Administration commit to conducting an audit of data use, sharing, and security by federal agencies and private sector partners under any waivers or surveillance initiative within a short period after the end of the health emergency?
  7. What steps has the Administration taken under the Privacy Act, which limits the federal government’s authority to collect personal data from third parties and imposes numerous other privacy safeguards?
  8. Will you commit to working with us to pass strong legal safeguards that ensure public health surveillance data can be effectively collected and used without compromising privacy? 

Finally, Consumer Reports showed that Facebook’s system for preventing incorrect COVID-19 information from being posted on its platform is not as robust as a top company official claimed. Kaveh Waddell of Consumer Reports stated

Facebook has been saying for weeks that it’s intent on keeping coronavirus misinformation off its platforms, which include Instagram and WhatsApp. During one recent interview with NPR, Nick Clegg, Facebook’s vice president for global affairs and communication, cited two examples of the kinds of posts the company would not allow: any message telling people to drink bleach, or discrediting urgent calls for social distancing to slow the pandemic. 

Waddell continued

  • I’ve been covering Facebook and online misinformation for several years, and I wanted to see how well the company is policing coronavirus-related advertising during the global crisis. So I put the two dangerous claims Clegg brought up, plus other false or dangerous information, into a series of seven paid ads.
  • Facebook approved them all. The advertisements remained scheduled for publication for more than a week without being flagged by Facebook. Then, I pulled them out of the queue to make sure none of them were seen by the public. Consumer Reports made certain not to publish any ads with false or misleading information.

EC Calls For EU-Wide Approach on Big Data and COVID-19

The European Commission (EC) met in Brussels last week and issued a recommendation outlining what it hopes will be a unified approach throughout the European Union (EU) on how smartphones and data are used to fight the spread of COVID-19. The EC laid out an ambitious timeline and explained “[t]he first priority for the Toolbox should be a pan-European approach for COVID-19 mobile applications, to be developed together by Member States and the Commission, by 15 April 2020.” The EC added that “[t]he European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) will be associated to the process.” The EC stated “[t]he second priority for the Toolbox should be a common approach for the use of anonymised and aggregated mobility data necessary” for a range of purposes to model, predict, and track the virus throughout the EU. On this second priority, the EC is calling for measures to ensure anonymization, safeguarding, and permanent deletion after these data are no longer needed.
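“Anonymised and aggregated” mobility data of the kind the EC envisions is typically produced by tallying individual location pings into coarse cells and suppressing small counts so no one person stands out. The sketch below is a generic illustration of that idea under assumed names, not the EC’s or any telecom operator’s specification; small-cell suppression is a common safeguard, though not by itself sufficient for anonymity.

```python
from collections import Counter

def aggregate_mobility(records, k=5):
    """records: iterable of (region, hour) pairs from individual devices.
    Returns per-cell counts, dropping any cell with fewer than k entries
    so that sparsely populated cells cannot single out individuals."""
    counts = Counter(records)
    return {cell: n for cell, n in counts.items() if n >= k}

pings = [("Brussels", 9)] * 7 + [("Ghent", 9)] * 2
print(aggregate_mobility(pings))  # the two-person Ghent cell is suppressed
```

Choosing the threshold `k` and the cell granularity is exactly the kind of trade-off between utility and privacy the GDPR’s data-minimisation principle forces regulators and operators to negotiate.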

It bears note that the EC is working within the structure provided by the General Data Protection Regulation (GDPR), the Privacy and Electronic Communications Directive 2002/58/EC on Privacy and Electronic Communications (the ePrivacy Directive), and other statutes and regulations. In its press release, the EC asserted “[t]o support Member States, the Commission will provide guidance including on data protection and privacy implications…[and] is in close contact with the EDPB for an overview of the processing of personal data at national level in the context of the coronavirus crisis.” The EC also remarked on its 23 March call with the heads of EU telecommunications companies and GSMA, their association, that “also covered the need to collect anonymised mobile metadata to help analysing the patterns of diffusion of the coronavirus, in a way that is fully compliant with the GDPR and ePrivacy legislation.”

The EC declared

The public health crisis caused by the current COVID-19 pandemic (hereinafter, ‘COVID-19 crisis’) is compelling the Union and the Member States to face an unprecedented challenge to its health care systems, way of life, economic stability and values. No single Member State can succeed alone in combating the COVID-19 crisis. An exceptional crisis of such magnitude requires determined action of all Member States and EU institutions and bodies working together in a genuine spirit of solidarity.

The EC continued

Since the beginning of the COVID-19 crisis, a variety of mobile applications have been developed, some of them by public authorities, and there have been calls from Member States and the private sector for coordination at Union level, including to address cybersecurity, security and privacy concerns. These applications tend to serve three general functions:

(i) informing and advising citizens and facilitating the organisation of medical follow-up of persons with symptoms, often combined with a self-diagnosis questionnaire;

(ii) warning people who have been in proximity to an infected person in order to interrupt infection chains and preventing resurgence of infections in the reopening phase; and

(iii) monitoring and enforcement of quarantine of infected persons, possibly combined with features assessing their health condition during the quarantine period.

Certain applications are available to the general public, while others only to closed user groups directed at tracing contacts in the workplace. The effectiveness of these applications has generally not been evaluated. Information and symptom-checker apps may be useful to raise awareness of citizens. However, expert opinion suggests that applications aiming to inform and warn users seem to be the most promising to prevent the propagation of the virus, taking into account also their more limited impact on privacy, and several Member States are currently exploring their use.

The EC found

A common Union approach to the COVID-19 crisis has also become necessary since measures taken in certain countries, such as the geolocation-based tracking of individuals, the use of technology to rate an individual’s level of health risk and the centralisation of sensitive data, raise questions from the viewpoint of several fundamental rights and freedoms guaranteed in the EU legal order, including the right to privacy and the right to the protection of personal data. In any event, pursuant to the Charter of Fundamental Rights of the Union, restrictions on the exercise of the fundamental rights and freedoms laid down therein must be justified and proportionate. Any such restrictions should, in particular, be temporary, in that they remain strictly limited to what is necessary to combat the crisis and do not continue to exist, without an adequate justification, after the crisis has passed.

The EC explained the purpose of the recommendation:

(1)  This recommendation sets up a process for developing a common approach, referred to as a Toolbox, to use digital means to address the crisis. The Toolbox will consist of practical measures for making effective use of technologies and data, with a focus on two areas in particular:

(1)  A pan-European approach for the use of mobile applications, coordinated at Union level, for empowering citizens to take effective and more targeted social distancing measures, and for warning, preventing and contact tracing to help limit the propagation of the COVID-19 disease. This will involve a methodology for monitoring and sharing assessments of effectiveness of these applications, their interoperability and cross-border implications, and their respect for security, privacy and data protection; and

(2)  A common scheme for using anonymized and aggregated data on mobility of populations in order (i) to model and predict the evolution of the disease, (ii) to monitor the effectiveness of decision-making by Member States’ authorities on measures such as social distancing and confinement, and (iii) to inform a coordinated strategy for exiting from the COVID-19 crisis.

(2)  Member States should take these actions as a matter of urgency and in close coordination with other Member States, the Commission and other relevant stakeholders, and without prejudice to the competences of the Member States in the domain of public health. They should ensure that all actions are taken in accordance with Union law, in particular law on medical devices and the right to privacy and the protection of personal data along with other rights and freedoms enshrined in the Charter of Fundamental Rights of the Union. The Toolbox will be complemented by Commission guidance, including guidance on the data protection and privacy implications of the use of mobile warning and prevention applications.

The EC added “[t]he EDPB and the EDPS should also be closely involved to ensure the Toolbox integrates data protection and privacy-by-design principles.”

The EC stated that “[t]he first priority for the Toolbox should be a pan-European approach for COVID-19 mobile applications, to be developed together by Member States and the Commission, by 15 April 2020” that “should consist of:

(1) specifications to ensure the effectiveness of mobile information, warning and tracing applications for combating COVID-19 from the medical and technical point of view;

(2)  measures to prevent proliferation of applications that are not compatible with Union law, to support requirements for accessibility for persons with disabilities, and for interoperability and promotion of common solutions, not excluding a potential pan-European application;

(3)  governance mechanisms to be applied by public health authorities and cooperation with the ECDC;

(4)  the identification of good practices and mechanisms for exchange of information on the functioning of the applications; and

(5)  sharing data with relevant epidemiological public bodies and public health research institutions, including aggregated data to ECDC.

Regarding the second principle for the Toolbox, the EC stated it “should be guided by privacy and data protection principles” including:

(1)  safeguards ensuring respect for fundamental rights and prevention of stigmatization, in particular applicable rules governing protection of personal data and confidentiality of communications;

(2)  preference for the least intrusive yet effective measures, including the use of proximity data and the avoidance of processing data on location or movements of individuals, and the use of anonymised and aggregated data where possible;

(3)  technical requirements concerning appropriate technologies (e.g. Bluetooth Low Energy) to establish device proximity, encryption, data security, storage of data on the mobile device, possible access by health authorities and data storage;

(4)  effective cybersecurity requirements to protect the availability, authenticity, integrity, and confidentiality of data;

(5)  the expiration of measures taken and the deletion of personal data obtained through these measures when the pandemic is declared to be under control, at the latest;

(6)  uploading of proximity data in case of a confirmed infection and appropriate methods of warning persons who have been in close contact with the infected person, who shall remain anonymous; and

(7)  transparency requirements on the privacy settings to ensure trust in the applications.

The EC added it “will publish guidance further specifying privacy and data protection principles in the light of practical considerations arising from the development and implementation of the Toolbox.”

Further Reading (27 March)

Democrat Proposes Creating Data Protection Authority To Address Privacy

Another Senate Democrat has introduced a privacy and data security bill. Senator Kirsten Gillibrand’s “Data Protection Act of 2020” (S. 3300) would create a federal data protection authority along the lines of the agencies each European Union member nation has. This new agency would become the primary federal regulator of privacy laws, including a number of existing laws that govern the privacy practices of the financial services industry, the healthcare industry, and others. It would displace the Federal Trade Commission (FTC) on privacy matters, receiving similar enforcement authority but with the added ability to levy fines in the first instance. However, state laws would be preempted only if they are contrary to the new regime, and state attorneys general could enforce the new law. The bill would not, however, create a private right of action.

The bill would establish the Data Protection Agency (DPA), an independent agency headed by a presidentially nominated and Senate-confirmed Director who would serve a five-year term, or longer until a successor is nominated and confirmed. Hence, Directors would not serve at the pleasure of the President and would be insulated from the political pressure Cabinet Members may feel from the White House. However, the Director may be removed for “inefficiency, neglect of duty, or malfeasance in office.” Generally, the DPA “shall seek to protect individuals’ privacy and limit the collection, disclosure, processing and misuse of individuals’ personal data by covered entities, and is authorized to exercise its authorities under this Act for such purposes.”

Personal data is defined broadly as “any information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular individual or device,” including a number of enumerated types of data such as medical information, biometric information, browsing history, geolocation data, political information, photographs and videos that are not password protected, and others. The bill also creates the term “high-risk data practice” to cover the collection or processing of personal data that is sensitive, novel, or may have adverse, discriminatory real-world effects; such practices would be subject to heightened scrutiny and regulation. For example, new high-risk data practices “or related profiling techniques” may not be used before the DPA conducts “a formal public rulemaking process,” which under administrative law usually means a lengthy process that includes a public hearing.

Those entities covered by the bill are “any person that collects, processes, or otherwise obtains personal data with the exception of an individual processing personal data in the course of personal or household activity,” an incredibly broad definition that sweeps in virtually any commercial entity collecting or processing personal data. There is no carve-out for businesses below a certain revenue level or number of persons whose data they collect and process. Large covered entities would be subject to extra scrutiny from the DPA and extra responsibility. Entities falling into this category are those with “gross revenues that exceed $25,000,000;” those that buy, receive for commercial purposes, sell, or disclose for commercial purposes the personal information of 50,000 or more individuals, households, or devices; or those that derive “50 percent or more of its annual revenues from the sale of personal data.” The DPA “may require reports and conduct examinations on a periodic basis” of large covered entities to ensure compliance with federal privacy laws, examine their practices, compliance processes, and procedures, “detect[] and assess[] associated risks to individuals and groups of individuals,” and “requir[e] and oversee[] ex-ante impact assessments and ex-post outcome audits of high-risk data practices to advance fair and just data practices.”

Most notably, it appears that the enforcement and rulemaking authority of current privacy statutes would be transferred to the agency, including Title V of the “Financial Services Modernization Act of 1999” (aka Gramm-Leach-Bliley), Subtitle D of the Health Information Technology for Economic and Clinical Health Act (i.e. HIPAA’s privacy provisions), the “Children’s Online Privacy Protection Act,” and the “Fair Credit Reporting Act.” Specifically, the bill provides “[t]he Agency is authorized to exercise its authorities under this Act and Federal privacy law to administer, enforce, and otherwise implement the provisions of this Act and Federal privacy law.” The bill defines “federal privacy law” to include all the aforementioned statutes. Consequently, the agencies currently enforcing the privacy provisions of those statutes and related regulations would turn over enforcement authority to the DPA. This, of course, is not without precedent: Dodd-Frank required the FTC to relinquish some of its jurisdiction to the Consumer Financial Protection Bureau (CFPB), to cite but one recent example. In any event, this approach sets the “Data Protection Act of 2020” apart from a number of the privacy bills, and aside from the policy elegance of housing privacy statutes and regulations at one agency, it would likely cause the current regulators and the committees that oversee them to oppose this provision of the bill.

The DPA would receive authority to punish unfair and deceptive practices (UDAP) regarding the collection, processing, and use of personal data and, unlike the FTC, would have notice-and-comment rulemaking authority to effectuate this authority as needed. However, like the FTC, before the agency may use its UDAP powers regarding unfairness, it must establish that the practice causes or is likely to cause substantial injury that is not reasonably avoidable by the consumer and is not outweighed by countervailing benefits.

The DPA would receive many of the same authorities the FTC currently has to punish UDAP violations, including injunctions, restitution, disgorgement, damages, and other monetary relief, along with the ability to levy civil fines. The fine structure is tiered, with reckless and knowing violations subject to much higher liability. The first tier would expose entities to fines of $5,000 per day the violation is occurring or the entity fails to heed a DPA order. The language could use clarification as to whether this means per violation per day or just a per-day fine regardless of the number of separate violations. Nonetheless, the second tier, for reckless violations, carries fines as high as $25,000, and the third tier, for knowing violations, as high as $1,000,000. Before levying a fine through its administrative procedures, the DPA must give entities notice and an opportunity for a hearing, or it may instead go to federal court to seek a judgment. However, the DPA could enforce the other federal privacy laws under their own terms without bringing the aforementioned authority to bear.

There would be no preemption of state laws to the extent such privacy laws are not inconsistent with the “Data Protection Act of 2020,” and states may maintain or institute stronger privacy laws so long as they do not run counter to this statute. This is the structure used under Gramm-Leach-Bliley, so there is precedent. Hence, it is possible there would be a federal privacy floor above which some states like California could regulate. However, the bill would not change the preemption status quo of the federal privacy laws the DPA would be able to enforce, and those federal statutes that preempt state laws would continue to do so. State attorneys general could bring actions in federal court to enforce this law, but no federal private right of action would be created.

Of course, the only other major privacy and data security bill that would create a new agency to regulate these matters instead of putting the FTC in charge is Representatives Anna Eshoo (D-CA) and Zoe Lofgren’s (D-CA) “Online Privacy Act of 2019” (H.R. 4978), which would create a U.S. Digital Privacy Agency (DPA) that would supersede the FTC on many privacy and data security issues. For many sponsors of privacy bills, creating a new agency may be a bridge too far, and so they have opted to house new privacy regulation at the FTC.

Finally, as can be seen in her press release, Gillibrand’s bill has garnered quite a bit of support from privacy and civil liberties advocates, some of whom generally endorse the idea of a U.S. data protection authority rather than this bill per se. Nonetheless, this is another bill on the field, and it remains to be seen how much Gillibrand will engage on the issue. It also bears noting that she serves on none of the committees of jurisdiction in the Senate.