Chopra Named CFPB Head

The CFPB will undoubtedly become a more muscular enforcer against financial services entities under the FTC Commissioner nominated to head the agency, including with respect to privacy, data security, and cybersecurity.

Federal Trade Commission (FTC) Commissioner Rohit Chopra has been tapped by President-elect Joe Biden to lead the Consumer Financial Protection Bureau (CFPB), the agency at which he previously oversaw the student loan market. Chopra’s nomination to be the next CFPB Director must be confirmed by the Senate. The CFPB possesses largely unused powers to police the cybersecurity, data security, and privacy practices of broad swaths of the United States (U.S.) economy, and given Chopra’s aggressive advocacy for a more active and more muscular FTC, it seems fair to assume he will take the same approach at the CFPB, awakening an entity that was largely dormant under the Trump Administration except to the extent it employed a “light regulatory touch.” Of course, Chopra’s expected departure from the FTC likely means Biden will be able to name two FTC nominees in the near future and will designate Commissioner Rebecca Kelly Slaughter as the next chair, as she would be the only remaining confirmed Democratic member of the FTC. Whether that designation will be on an acting or permanent basis remains to be seen.

In making the announcement, Biden’s transition team highlighted Chopra’s push “for aggressive remedies against lawbreaking companies, especially repeat offenders” and work “to increase scrutiny of dominant technology firms that pose risks to privacy, national security, and fair competition.” The press release added:

Chopra previously served as Assistant Director of the Consumer Financial Protection Bureau, where he led the agency’s efforts on student loans. In 2011, the Secretary of the Treasury appointed him to serve as the CFPB’s Student Loan Ombudsman, a new position established in the financial reform law. He also served as a Special Advisor at the U.S. Department of Education.

In these roles, Chopra led efforts to spur competition in the student loan financing market, develop new tools for students and student loan borrowers to make smarter decisions, and secure hundreds of millions of dollars in refunds for borrowers victimized by unlawful conduct by loan servicers, debt collectors, and for-profit college chains.

Chopra used his powers as an FTC Commissioner to appeal to the majority Republicans to use the agency’s powers more forcefully in combatting privacy, data security, and antitrust abuses. For example, he voted against the FTC’s $5 billion settlement with Facebook and dissented, listing his reasons for breaking with the three Republican Commissioners:

  • Facebook’s violations were a direct result of the company’s behavioral advertising business model. The proposed settlement does little to change the business model or practices that led to the recidivism.
  • The $5 billion penalty is less than Facebook’s exposure from its illegal conduct, given its financial gains.
  • The proposed settlement lets Facebook off the hook for unspecified violations.
  • The grant of immunity for Facebook’s officers and directors is a giveaway.
  • The case against Facebook is about more than just privacy – it is also about the power to control and manipulate.

More recently, in June 2020, Chopra issued a statement on a pair of reports required by Congress, articulating his view that the FTC “must do more to use our existing authority and resources more effectively”:

1. Inventory and use the rulemaking authorities that Congress has already authorized.

Contrary to what many believe, the FTC has several relevant rulemaking authorities when it comes to data protection, but simply chooses not to use them. Rules do not need to create any new requirements for market participants. In fact, they can simply codify existing legal precedents and enforcement policy to give even more clarity on what the law requires. In addition, when rules are in place, it is much easier for the agency to obtain relief for those who are harmed and seek penalties to deter other bad actors. This can be far more efficient than chasing after the same problems year after year through no-money settlements.

2. Ensure that large firms face the same level of scrutiny we apply to smaller businesses.

To meaningfully deter data protection abuses and other wrongful conduct, the FTC must enforce the law equally. While we have taken a hard line against smaller violators in the data protection sphere, charging individual decisionmakers and wiping out their earnings, I am very concerned that the FTC uses a different standard for larger firms, like in the recent Facebook and YouTube matters. This is not only unfair to small firms, but also sends the unfortunate message that the largest corporations can avoid meaningful accountability for abuse and misuse of data.

3. Increase cooperation with state attorneys general and other regulators.

State attorneys general are the country’s front-line watchdogs when it comes to consumer protection, and many states have enacted privacy and data protection laws backed by strong remedial tools, including civil penalties. Partnering more frequently with state enforcers could significantly enhance the Commission’s effectiveness and make better use of taxpayer resources.

4. Hold third-party watchdogs accountable and guard against conflicts of interest.

The FTC typically orders lawbreaking companies to hire a third-party assessor to review privacy and security practices going forward. However, the Commission should not place too much faith in the efficacy of these third parties.

5. Reallocate resources.

While the Commission’s report has rightly noted to Congress that the number of employees working on data protection is inadequate, the Commissioners can vote to reallocate resources from other functions to increase our focus on data protection.

6. Investigate firms comprehensively across the FTC’s mission.

The FTC should use its authority to deter unfair and deceptive conduct in conjunction with our authority to deter unfair methods of competition. However, in the digital economy, the data that companies compete to obtain and utilize is also at the center of significant privacy and data security infractions.

7. Conduct more industry-wide studies under Section 6(b) of the FTC Act.

Surveillance-based advertising is a major driver of data-related abuses, but the Commission has not yet used its authority to compel information from major industry players to study these practices. The Commission should vote to issue orders to study how technology platforms engage in surveillance-based advertising.

Without doubt, Chopra will seek to read and exercise the CFPB’s powers as broadly as possible. For example, in a late October 2020 draft law review article, he and attorney advisor Samuel Levine argued the FTC could use a dormant power to fill the gap in its enforcement authority left by the cases before the Supreme Court of the United States regarding the FTC’s injunctive powers under Section 13(b) of the FTC Act. They asserted:

  • [T]he agency should resurrect one of the key authorities abandoned in the 1980s: Section 5(m)(1)(B) of the FTC Act, the Penalty Offense Authority. The Penalty Offense Authority is a unique tool in commercial regulation. Typically, first- time offenses involving unfair or deceptive practices do not lead to civil penalties. However, if the Commission formally condemns these practices in a cease-and-desist order, they can become what we call “Penalty Offenses.” Other parties that commit these offenses with knowledge that they have been condemned by the Commission face financial penalties that can add up to a multiple of their illegal profits, rather than a fraction.
  • Using this authority, the Commission can substantially increase deterrence and reduce litigation risk by noticing whole industries of Penalty Offenses, exposing violators to significant civil penalties, while helping to ensure fairness for honest firms. This would dramatically improve the FTC’s effectiveness relative to our current approach, which relies almost entirely on Section 13(b) and no-money cease-and-desist orders, even in cases of blatant lawbreaking.

Should the FTC heed Chopra and Levine’s suggestion, the agency could threaten fines for first-time Section 5 violations involving specific illegal practices about which the FTC has put regulated entities on notice.

The CFPB’s organic statute is patterned on the FTC Act, particularly its bar on unfair or deceptive acts or practices (UDAP). However, the “Dodd-Frank Wall Street Reform and Consumer Protection Act” (P.L. 111-203) that created the CFPB provided that the agency “may take any action authorized under subtitle E to prevent a covered person or service provider from committing or engaging in an unfair, deceptive, or abusive act or practice (UDAAP) under Federal law in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.” While the CFPB’s jurisdiction is narrower than the FTC’s, its regulatory remit is more expansive, and Chopra will almost certainly push it to its maximum. Consequently, privacy, cybersecurity, and data security practices in the financial services sector that the CFPB has heretofore allowed could, in his view, constitute unfair, deceptive, or abusive practices subject to enforcement action. And while the current CFPB issued a 2020 policy statement regarding how it thinks the agency should use its authority to punish “abusive” practices, Chopra’s team will likely withdraw and rewrite that document.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (12 January 2021)

Further Reading

  • “Biden’s NSC to focus on global health, climate, cyber and human rights, as well as China and Russia” By Karen DeYoung — The Washington Post. Like almost every incoming White House, the Biden team has announced a restructuring of the National Security Council (NSC) to better effectuate the President-elect’s policy priorities. To no one’s surprise, the volume on cybersecurity policy will be turned up. Another notable change is the plan to take “cross-cutting” approaches to issues that will likely meld foreign and domestic as well as national security and civil concerns, meaning there could be a new look at offensive cyber operations, for example. It is possible President Biden decides to put the genie back in the bottle, so to speak, by re-imposing an interagency decision-making process as opposed to the Trump Administration’s approach of delegating discretion to the head of the National Security Agency/Cyber Command. The NSC will also focus on emerging technology, a likely response to the technology arms race the United States finds itself in with the People’s Republic of China.
  • “Exclusive: Pandemic relief aid went to media that promoted COVID misinformation” By Caitlin Dickson — yahoo! news. The consulting firm Alethea Group and the nonprofit Global Disinformation Index are claiming the COVID stimulus Paycheck Protection Program (PPP) provided loans and assistance to five firms that “were publishing false or misleading information about the pandemic, thus profiting off the infodemic,” according to an Alethea Group vice president. This report follows an NBC News article claiming that 14 white supremacist and racist organizations have also received PPP loans. The Alethea Group and Global Disinformation Index named five entities that took PPP funds and kept spreading pandemic misinformation: Epoch Media Group, Newsmax Media, The Federalist, Liftable Media, and Prager University.
  • “Facebook shuts Uganda accounts ahead of vote” — France24. The social media company shuttered a number of Facebook and Instagram accounts related to government officials in Uganda ahead of an election on account of “Coordinated Inauthentic Behaviour” (CIB). This follows the platform shutting down accounts related to the French Army and Russia seeking to influence events in Africa. These and other actions may indicate the platform is starting to pay more attention to the non-western world; at least one former employee has argued the platform was negligent at best and reckless at worst in not properly resourcing efforts to police CIB throughout the Third World.
  • “China tried to punish European states for Huawei bans by adding eleventh-hour rule to EU investment deal” By Finbarr Bermingham — South China Morning Post. At nearly the end of talks on a People’s Republic of China (PRC)-European Union (EU) trade deal, PRC negotiators tried slipping in language that would have barred entry to the PRC’s cloud computing market to any country or company from a country that restricts Huawei’s services and products. This is alternately being seen as either standard Chinese negotiating tactics or an attempt to avenge the thwarting of the crown jewel in its telecommunications ambitions.
  • “Chinese regulators to push tech giants to share consumer credit data – sources” By Julie Zhu — Reuters. Ostensibly in a move to better manage the risks of too much unsafe lending, tech giants in the People’s Republic of China (PRC) will soon need to share data on consumer loans. It seems inevitable that such data will be used by Beijing to further crack down on undesirable people and elements within the PRC.
  • “The mafia turns social media influencer to reinforce its brand” By Miles Johnson — The Financial Times. Even Italy’s feared ’Ndrangheta is creating and curating a social media presence.

Other Developments

  • President Donald Trump signed an executive order (EO) that bans eight applications from the People’s Republic of China on much the same grounds as the EOs prohibiting TikTok and WeChat. If this EO is not rescinded by the Biden Administration, federal courts may block its implementation as has happened with the TikTok and WeChat EOs to date. Notably, courts have found that the Trump Administration exceeded its authority under the International Emergency Economic Powers Act (IEEPA), which may also be an issue in the proposed prohibition on Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office. Trump found:
    • that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the pace and pervasiveness of the spread in the United States of certain connected mobile and desktop applications and other software developed or controlled by persons in the People’s Republic of China, to include Hong Kong and Macau (China), continue to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by these Chinese connected software applications.
    • Trump directed that within 45 days of issuance of the EO, there shall be a prohibition on “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States, with persons that develop or control the following Chinese connected software applications, or with their subsidiaries, as those transactions and persons are identified by the Secretary of Commerce (Secretary) under subsection (e) of this section: Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office.”
  • The Government Accountability Office (GAO) issued its first statutorily required annual assessment of how well the United States Department of Defense (DOD) is managing its major information technology (IT) procurements. The DOD spent more than $36 billion of the $90 billion the federal government was provided for IT in FY 2020. The GAO was tasked with assessing how well the DOD did in using iterative development, managing costs and schedules, and implementing cybersecurity measures. The GAO found progress in the first two realms but a continued lag in deploying long-recommended best practices to ensure the security of the IT the DOD buys or builds. The GAO focused on 15 major IT acquisitions, some administrative (i.e., “business”) and some communications and information security (i.e., “non-business”). While it made no explicit recommendations, the GAO found:
    • Ten of the 15 selected major IT programs exceeded their planned schedules, with delays ranging from 1 month for the Marine Corps’ CAC2S Inc 1 to 5 years for the Air Force’s Defense Enterprise Accounting and Management System-Increment 1.
    • …eight of the 10 selected major IT programs that had tested their then-current technical performance targets reported having met all of their targets…. As of December 2019, four programs had not yet conducted testing activities—Army’s ACWS, Air Force’s AFIPPS Inc 1, Air Force’s MROi, and Navy ePS. Testing data for one program, Air Force’s ISPAN Inc 4, were classified.
    • …officials from the 15 selected major IT programs we reviewed reported using software development approaches that may help to limit risks to cost and schedule outcomes. For example, major business IT programs reported using COTS software. In addition, most programs reported using an iterative software development approach and using a minimum deployable product. With respect to cybersecurity practices, all the programs reported developing cybersecurity strategies, but programs reported mixed experiences with respect to conducting cybersecurity testing. Most programs reported using operational cybersecurity testing, but less than half reported conducting developmental cybersecurity testing. In addition, programs that reported conducting cybersecurity vulnerability assessments experienced fewer increases in planned program costs and fewer schedule delays. Programs also reported a variety of challenges associated with their software development and cybersecurity staff.
    • 14 of the 15 programs reported using an iterative software development approach which, according to leading practices, may help reduce cost growth and deliver better results to the customer. However, programs also reported using an older approach to software development, known as waterfall, which could introduce risk for program cost growth because of its linear and sequential phases of development that may be implemented over a longer period of time. Specifically, two programs reported using a waterfall approach in conjunction with an iterative approach, while one was solely using a waterfall approach.
    • With respect to cybersecurity, programs reported mixed implementation of specific practices, contributing to program risks that might impact cost and schedule outcomes. For example, all 15 programs reported developing cybersecurity strategies, which are intended to help ensure that programs are planning for and documenting cybersecurity risk management efforts.
    • In contrast, only eight of the 15 programs reported conducting cybersecurity vulnerability assessments—systematic examinations of an information system or product intended to, among other things, determine the adequacy of security measures and identify security deficiencies. These eight programs experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report using cybersecurity vulnerability assessments.
  • The United States (U.S.) Department of Energy gave notice of a “Prohibition Order prohibiting the acquisition, importation, transfer, or installation of specified bulk-power system (BPS) electric equipment that directly serves Critical Defense Facilities (CDFs), pursuant to Executive Order 13920.” (See here for analysis of the executive order.) The Department explained:
    • Executive Order No. 13920 of May 1, 2020, Securing the United States Bulk-Power System (85 FR 26595 (May 4, 2020)) (E.O. 13920) declares that threats by foreign adversaries to the security of the BPS constitute a national emergency. A current list of such adversaries is provided in a Request for Information (RFI), issued by the Department of Energy (Department or DOE) on July 8, 2020 seeking public input to aid in its implementation of E.O. 13920. The Department has reason to believe, as detailed below, that the government of the People’s Republic of China (PRC or China), one of the listed adversaries, is equipped and actively planning to undermine the BPS. The Department has thus determined that certain BPS electric equipment or programmable components subject to China’s ownership, control, or influence, constitute undue risk to the security of the BPS and to U.S. national security. The purpose of this Order is to prohibit the acquisition, importation, transfer, or subsequent installation of such BPS electric equipment or programmable components in certain sections of the BPS.
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) added the People’s Republic of China’s (PRC) Semiconductor Manufacturing International Corporation (SMIC) to its Entity List in a move intended to starve the company of key U.S. technology needed to manufacture high end semiconductors. Therefore, any U.S. entity wishing to do business with SMIC will need a license which the Trump Administration may not be likely to grant. The Department of Commerce explained in its press release:
    • The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring U.S. exporters to apply for a license to sell to the company.  Items uniquely required to produce semiconductors at advanced technology nodes—10 nanometers or below—will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military-civil fusion efforts.
    • BIS also added more than sixty other entities to the Entity List for actions deemed contrary to the national security or foreign policy interest of the United States.  These include entities in China that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets.
    • As explained in the Federal Register notice:
      • SMIC is added to the Entity List as a result of China’s military-civil fusion (MCF) doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex. The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring exporters, reexporters, and in-country transferors of such technology to apply for a license to sell to the company. Items uniquely required to produce semiconductors at advanced technology nodes 10 nanometers or below will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military modernization efforts. This rule adds SMIC and the following ten entities related to SMIC: Semiconductor Manufacturing International (Beijing) Corporation; Semiconductor Manufacturing International (Tianjin) Corporation; Semiconductor Manufacturing International (Shenzhen) Corporation; SMIC Semiconductor Manufacturing (Shanghai) Co., Ltd.; SMIC Holdings Limited; Semiconductor Manufacturing South China Corporation; SMIC Northern Integrated Circuit Manufacturing (Beijing) Co., Ltd.; SMIC Hong Kong International Company Limited; SJ Semiconductor; and Ningbo Semiconductor International Corporation (NSI).
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) amended its Export Administration Regulations “by adding a new ‘Military End User’ (MEU) List, as well as the first tranche of 103 entities, which includes 58 Chinese and 45 Russian companies” per its press release. The Department asserted:
    • The U.S. Government has determined that these companies are ‘military end users’ for purposes of the ‘military end user’ control in the EAR that applies to specified items for exports, reexports, or transfers (in-country) to the China, Russia, and Venezuela when such items are destined for a prohibited ‘military end user.’
  • The Australia Competition and Consumer Commission (ACCC) rolled out another piece of the Consumer Data Right (CDR) scheme under the Competition and Consumer Act 2010, specifically accreditation guidelines “to provide information and guidance to assist applicants with lodging a valid application to become an accredited person” to whom Australians may direct data holders share their data. The ACCC explained:
    • The CDR aims to give consumers more access to and control over their personal data.
    • Being able to easily and efficiently share data will improve consumers’ ability to compare and switch between products and services and encourage competition between service providers, leading to more innovative products and services for consumers and the potential for lower prices.
    • Banking is the first sector to be brought into the CDR.
    • Accredited persons may receive a CDR consumer’s data from a data holder at the request and consent of the consumer. Any person, in Australia or overseas, who wishes to receive CDR data to provide products or services to consumers under the CDR regime, must be accredited.
  • Australia’s government has released its “Data Availability and Transparency Bill 2020” that “establishes a new data sharing scheme for federal government data, underpinned by strong safeguards to mitigate risks and simplified processes to make it easier to manage data sharing requests” according to the summary provided in Parliament by the government’s point person. In the accompanying “Explanatory Memorandum,” the following summary was provided:
    • The Bill establishes a new data sharing scheme which will serve as a pathway and regulatory framework for sharing public sector data. ‘Sharing’ involves providing controlled access to data, as distinct from open release to the public.
    • To oversee the scheme and support best practice, the Bill creates a new independent regulator, the National Data Commissioner (the Commissioner). The Commissioner’s role is modelled on other regulators such as the Australian Information Commissioner, with whom the Commissioner will cooperate.
    • The data sharing scheme comprises the Bill and disallowable legislative instruments (regulations, Minister-made rules, and any data codes issued by the Commissioner). The Commissioner may also issue non-legislative guidelines that participating entities must have regard to, and may release other guidance as necessary.
    • Participants in the scheme are known as data scheme entities:
      • Data custodians are Commonwealth bodies that control public sector data, and have the right to deal with that data.
      • Accredited users are entities accredited by the Commissioner to access to public sector data. To become accredited, entities must satisfy the security, privacy, infrastructure and governance requirements set out in the accreditation framework.
      • Accredited data service providers (ADSPs) are entities accredited by the Commissioner to perform data services such as data integration. Government agencies and users will be able to draw upon ADSPs’ expertise to help them to share and use data safely.
    • The Bill does not compel sharing. Data custodians are responsible for assessing each sharing request, and deciding whether to share their data if satisfied the risks can be managed.
    • The data sharing scheme contains robust safeguards to ensure sharing occurs in a consistent and transparent manner, in accordance with community expectations. The Bill authorises data custodians to share public sector data with accredited users, directly or through an ADSP, where:
      • Sharing is for a permitted purpose – government service delivery, informing government policy and programs, or research and development;
      • The data sharing principles have been applied to manage the risks of sharing; and
      • The terms of the arrangement are recorded in a data sharing agreement.
    • Where the above requirements are met, the Bill provides limited statutory authority to share public sector data, despite other Commonwealth, State and Territory laws that prevent sharing. This override of non-disclosure laws is ‘limited’ because it occurs only when the Bill’s requirements are met, and only to the extent necessary to facilitate sharing.
  • The United Kingdom’s Competition and Markets Authority (CMA) is asking interested parties to provide input on the proposed acquisition of a British semiconductor company by a United States (U.S.) company before it launches a formal investigation later this year. However, the CMA is limited to competition considerations, and any national security aspects of the proposed deal would need to be investigated by Prime Minister Boris Johnson’s government. The CMA stated:
    • US-based chip designer and producer NVIDIA Corporation (NVIDIA) plans to purchase the Intellectual Property Group business of UK-based Arm Limited (Arm) in a deal worth $40 billion. Arm develops and licenses intellectual property (IP) and software tools for chip designs. The products and services supplied by the companies support a wide range of applications used by businesses and consumers across the UK, including desktop computers and mobile devices, game consoles and vehicle computer systems.
    • CMA added:
      • The CMA will look at the deal’s possible effect on competition in the UK. The CMA is likely to consider whether, following the takeover, Arm has an incentive to withdraw, raise prices or reduce the quality of its IP licensing services to NVIDIA’s rivals.
  • The Israeli firm NSO Group has been accused by an entity associated with a British university of using real-time cell phone data to sell its COVID-19 contact tracing app, Fleming, in ways that may have broken the laws of a handful of nations. Forensic Architecture, a research agency based at Goldsmiths, University of London, argued:
    • In March 2020, with the rise of COVID-19, Israeli cyber-weapons manufacturer NSO Group launched a contact-tracing technology named ‘Fleming’. Two months later, a database belonging to NSO’s Fleming program was found unprotected online. It contained more than five hundred thousand datapoints for more than thirty thousand distinct mobile phones. NSO Group denied there was a security breach. Forensic Architecture received and analysed a sample of the exposed database, which suggested that the data was based on ‘real’ personal data belonging to unsuspecting civilians, putting their private information in risk
    • Forensic Architecture added:
      • Leaving a database with genuine location data unprotected is a serious violation of the applicable data protection laws. That a surveillance company with access to personal data could have overseen this breach is all the more concerning.
      • This could constitute a violation of the General Data Protection Regulation (GDPR) based on where the database was discovered as well as the laws of the nations where NSO Group allegedly collected personal data
    • The NSO Group denied the claims and was quoted by Tech Crunch:
      • “We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
      • “As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting. The agency has placed the following on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Judith Scharnowski from Pixabay

EDPB Data Protection By Design and Default Guidance

The EU’s arbiter on the GDPR explains how controllers can implement data protection by design and by default in compliance with the regulation.

The European Data Protection Board (EDPB or Board) issued “Guidelines 4/2019 on Article 25 Data Protection by Design and by Default Version 2.0,” which is “general guidance on the obligation of Data Protection by Design and by Default (DPbDD) set forth in Article 25 in the [General Data Protection Regulation] GDPR.” The EDPB’s Guidance follows guidance issued by at least three European Union (EU) data protection authorities (DPAs) on data protection by design and by default. However, given the resource-constrained nature of most EU DPAs, it is not clear how the data processing systems of controllers will be policed to ensure DPbDD. Presumably, failings and violations would be turned up during investigations launched on other grounds.

Article 25 requires, in relevant part:

  • [T]he controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.
  • The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed.
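Article 25(1) singles out pseudonymisation as one example of an appropriate technical measure. As a purely illustrative sketch (the GDPR prescribes no particular technique, and the key, function name, and record fields below are invented for the example), a controller might replace direct identifiers with keyed hashes so that records remain linkable internally without exposing the underlying identity:

```python
import hashlib
import hmac

# Hypothetical illustration of pseudonymisation via keyed hashing.
# The key must be stored separately from the pseudonymised data set and
# access-controlled; otherwise re-identification is trivial and the
# measure is not "effective" in the sense the EDPB stresses.
SECRET_KEY = b"stored-separately-under-access-control"  # placeholder value

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier (e.g. an email address)."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

record = {
    "user": pseudonymise("alice@example.com"),  # pseudonym, not the raw email
    "purchase": "subscription",
}
# The same input always maps to the same pseudonym, so records stay linkable:
assert pseudonymise("alice@example.com") == record["user"]
```

Note that pseudonymised data remains personal data under the GDPR so long as the key exists; the technique reduces risk, it does not anonymise.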

The EDPB pointed to the data protection and privacy by design guidance released by three EU DPAs.

The EDPB stated:

Data protection by design and data protection by default are complementary concepts, which mutually reinforce each other. Data subjects will benefit more from data protection by default if data protection by design is concurrently implemented – and vice versa.

The Board sought to explain its view on how controllers can meet these obligations under Article 25. The EDPB asserted:

The core obligation is the implementation of appropriate measures and necessary safeguards that provide effective implementation of the data protection principles and, consequentially, data subjects’ rights and freedoms by design and by default. Article 25 prescribes both design and default elements that should be taken into account. (emphasis in the original)

Again and again throughout the Guidance, the EDPB stresses that “effective implementation” is the key, suggesting that processes and systems that appear compliant on the surface will not necessarily be found compliant should a controller be investigated.

Unlike the American approach to data protection, the size and resources of a controller have no bearing on its compliance obligations with respect to DPbDD. The EDPB stated:

DPbDD is a requirement for all controllers, including small businesses and multinational companies alike. That being the case, the complexity of implementing DPbDD may vary based on the individual processing operation. Regardless of the size however, in all cases, positive benefits for controller and data subject can be achieved by implementing DPbDD.

Moreover, the GDPR’s Article 25 requirements regarding DPbDD apply both to processing yet to be designed and to processing systems that pre-date the GDPR:

The requirement described in Article 25 is for controllers to have data protection designed into the processing of personal data and as a default setting and this applies throughout the processing lifecycle. DPbDD is also a requirement for processing systems pre-existing before the GDPR entered into force. Controllers must have the processing consistently updated in line with the GDPR.

What’s more, the EDPB asserted “[c]ontrollers shall implement DPbDD before processing, and also continually at the time of processing, by regularly reviewing the effectiveness of the chosen measures and safeguards…[and] DPbDD also applies to existing systems that are processing personal data.”

The Board contextualized DPbDD within the GDPR and the EU’s human rights framework:

  • In line with Article 25(1) the controller shall implement appropriate technical and organisational measures which are designed to implement the data protection principles and to integrate the necessary safeguards into the processing in order to meet the requirements and protect the rights and freedoms of data subjects. Both appropriate measures and necessary safeguards are meant to serve the same purpose of protecting the rights of data subjects and ensuring that the protection of their personal data is built into the processing.
  • The controller should choose and be accountable for implementing default processing settings and options in a way that only processing that is strictly necessary to achieve the set, lawful purpose is carried out by default. Here, controllers should rely on their assessment of the necessity of the processing with regards to the legal grounds of Article 6(1). This means that by default, the controller shall not collect more data than is necessary, they shall not process the data collected more than is necessary for their purposes, nor shall they store the data for longer than necessary. The basic requirement is that data protection is built into the processing by default.
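The default-settings obligation in Article 25(2) can be pictured in code. In this hedged sketch (the settings class, field names, and "sign-up" scenario are invented for illustration and are not drawn from the GDPR or the Guidance), a service collects only the data strictly necessary for its purpose unless the individual actively opts in to more:

```python
from dataclasses import dataclass

# Illustrative only: an invented settings object whose defaults implement
# "data protection by default": optional collection is OFF until opted into.
@dataclass
class SignupSettings:
    collect_email: bool = True                 # necessary to create an account
    collect_marketing_profile: bool = False    # not necessary: off by default
    collect_location: bool = False             # not necessary: off by default

def collect(submitted: dict, settings: SignupSettings) -> dict:
    """Retain only the fields the current settings permit."""
    allowed = {"email"} if settings.collect_email else set()
    if settings.collect_marketing_profile:
        allowed.add("interests")
    if settings.collect_location:
        allowed.add("location")
    return {k: v for k, v in submitted.items() if k in allowed}

submitted = {"email": "a@example.com", "interests": "golf", "location": "Paris"}
stored = collect(submitted, SignupSettings())  # defaults: minimal collection
assert stored == {"email": "a@example.com"}    # only the necessary field survives
```

The point of the sketch is the direction of the defaults: broader processing requires an affirmative change to the settings, mirroring the EDPB's position that only processing strictly necessary for the purpose may occur by default.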

The EDPB explained:

In all stages of design of the processing activities, including procurement, tenders, outsourcing, development, support, maintenance, testing, storage, deletion, etc., the controller should take into account and consider the various elements of DPbDD which will be illustrated by examples in this chapter in the context of implementation of the principles.

The EDPB asserted the Guidance may also be of use to others with responsibilities under the GDPR: “Other actors, such as processors and producers of products, services and applications (henceforth “producers”), who are not directly addressed in Article 25, may also find these Guidelines useful in creating GDPR compliant products and services that enable controllers to fulfil their data protection obligations.” Moreover, a controller will be held accountable for the DPbDD of processors and sub-processors.

Nonetheless, the Board made recommendations to processors:

  • Although not directly addressed in Article 25, processors and producers are also recognized as key enablers for DPbDD; they should be aware that controllers are required to only process personal data with systems and technologies that have built-in data protection.
  • When processing on behalf of controllers, or providing solutions to controllers, processors and producers should use their expertise to build trust and guide their customers, including SMEs, in designing /procuring solutions that embed data protection into the processing. This means in turn that the design of products and services should facilitate controllers’ needs.


“Privacy” by Afsal CMK is licensed under CC BY 4.0

Further Reading, Other Developments, and Coming Events (29 October)

Further Reading

  • “Cyberattacks hit Louisiana government offices as worries rise about election hacking” By Eric Geller — Politico. The Louisiana National Guard located and addressed a remote access trojan, a common precursor to ransomware attacks, in some of the state’s systems. This may or may not have been the beginning stages of an election day attack, and other states have made similar discoveries.
  • “Kicked off Weibo? Here’s what happens next.” By Shen Lu — Rest of World. Beijing is increasingly cracking down on dissent on Weibo, the People’s Republic of China’s (PRC) version of Twitter. People get banned for posting content critical of the PRC government or supportive of Hong Kong. Some are allowed back and are usually banned again. Some buy burner accounts that inevitably get banned as well.
  • “Inside the campaign to ‘pizzagate’ Hunter Biden” By Ben Collins and Brandy Zadrozny — NBC News. The sordid tale of how allies and advocates of the Trump Campaign have tried to propagate rumors of illegal acts committed by Hunter Biden in an attempt to smear former Vice President Joe Biden, as was done to former Secretary of State Hillary Clinton in 2016.
  • “Russians Who Pose Election Threat Have Hacked Nuclear Plants and Power Grid” By Nicole Perlroth — The New York Times. Some of Russia’s best hackers have been prowling around state and local governments’ systems for unknown ends. These are the same hackers, named Dragonfly or Energetic Bear by researchers, who have penetrated a number of electric utilities and the power grid in the United States, including a nuclear plant. It is not clear what these hackers want to do, which worries U.S. officials and cybersecurity experts and researchers.
  • “Activists Turn Facial Recognition Tools Against the Police” By Kashmir Hill — The New York Times. In an interesting twist, protestors and civil liberties groups are adopting facial recognition technology to try to identify police officers who refuse to identify themselves after attacking protestors or committing acts of violence.

Other Developments

  • The United Kingdom’s Information Commissioner’s Office (ICO) has completed its investigation into the data brokering practices of Equifax, TransUnion, and Experian and found widespread privacy and data protection violations. Equifax and TransUnion were amenable to working with the ICO to correct abuses and shutter illegal products and businesses, but Experian was not. In the words of the ICO, Experian “did not accept that they were required to make the changes set out by the ICO, and as such were not prepared to issue privacy information directly to individuals nor cease the use of credit reference data for direct marketing purposes.” Consequently, Experian must effect specified changes within nine months or face “a fine of up to £20m or 4% of the organisation’s total annual worldwide turnover.” The ICO investigated using its powers under the British Data Protection Act 2018 and the General Data Protection Regulation (GDPR).
    • The ICO found widespread problems in the data brokering businesses of the three credit reference agencies (CRAs):
      • The investigation found how the three CRAs were trading, enriching and enhancing people’s personal data without their knowledge. This processing resulted in products which were used by commercial organisations, political parties or charities to find new customers, identify the people most likely to be able to afford goods and services, and build profiles about people.
      • The ICO found that significant ‘invisible’ processing took place, likely affecting millions of adults in the UK. It is ‘invisible’ because the individual is not aware that the organisation is collecting and using their personal data. This is against data protection law.
      • Although the CRAs varied widely in size and practice, the ICO found significant data protection failures at each company. As well as the failure to be transparent, the regulator found that personal data provided to each CRA, in order for them to provide their statutory credit referencing function, was being used in limited ways for marketing purposes. Some of the CRAs were also using profiling to generate new or previously unknown information about people, which is often privacy invasive.
      • Other thematic failings identified were:
        • Although the CRAs did provide some privacy information on their websites about their data broking activities, their privacy information did not clearly explain what they were doing with people’s data;
        • Separately, they were using certain lawful bases incorrectly for processing people’s data.
      • The ICO issued its report “Investigation into data protection compliance in the direct marketing data broking sector,” with these key findings:
        • Key finding 1: The privacy information of the CRAs did not clearly explain their processing with respect to their marketing services. CRAs have to revise and improve their privacy information. Those engaging in data broking activities must ensure that their privacy information is compliant with the GDPR.
        • Key finding 2: In the circumstances we assessed the CRAs were incorrectly relying on an exception from the requirement to directly provide privacy information to individuals (excluding where the data processed has come solely from the open electoral register or would be in conflict with the purpose of processing – such as suppression lists like the TPS). To comply with the GDPR, CRAs have to ensure that they provide appropriate privacy information directly to all the individuals for whom they hold personal data in their capacity as data brokers for direct marketing purposes. Those engaging in data broking activities must ensure individuals have the information required by Article 14.
        • Key finding 3: The CRAs were using personal data collected for credit referencing purposes for direct marketing purposes. The CRAs must not use this data for direct marketing purposes unless this has been transparently explained to individuals and they have consented to this use. Where the CRAs are currently using personal data obtained for credit referencing purposes for direct marketing, they must stop using it.
        • Key finding 4: The consents relied on by Equifax were not valid under the GDPR. To comply with the GDPR, CRAs must ensure that the consent is valid, if they intend to rely on consent obtained by a third party. Those engaging in data broking activities must ensure that any consents they use meet the standard of the GDPR.
        • Key finding 5: Legitimate interest assessments (LIAs) conducted by the CRAs in respect of their marketing services were not properly weighted. The CRAs must revise their LIAs to reconsider the balance of their own interests against the rights and freedoms of individuals in the context of their marketing services. Where an objective LIA does not favour the interests of the organisation, the processing of that data must stop until that processing can be made lawful. Those engaging in data broking activities must ensure that LIAs are conducted objectively taking into account all factors.
        • Key finding 6: In some cases Experian was obtaining data on the basis of consent and then processing it on the basis of legitimate interests. Switching from consent to legitimate interests in this situation is not appropriate. Where personal data is collected by a third party and shared for direct marketing purposes on the basis of consent, then the appropriate lawful basis for subsequent processing for these purposes will also be consent. Experian must therefore delete any data supplied to it on the basis of consent that it is processing on the basis of legitimate interests.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the U.S. Cyber Command Cyber National Mission Force (CNMF) issued a joint advisory on “the tactics, techniques, and procedures (TTPs) used by North Korean advanced persistent threat (APT) group Kimsuky—against worldwide targets—to gain intelligence on various topics of interest to the North Korean government.” CISA, the FBI, and CNMF recommended that “individuals and organizations within this target profile increase their defenses and adopt a heightened state of awareness…[and] [p]articularly important mitigations include safeguards against spearphishing, use of multi-factor authentication, and user awareness training.” The agencies noted:
    • This advisory describes known Kimsuky TTPs, as found in open-source and intelligence reporting through July 2020. The target audience for this advisory is commercial sector businesses desiring to protect their networks from North Korean APT activity.
    • The agencies highlighted the key findings:
      • Kimsuky is most likely tasked by the North Korean regime with a global intelligence gathering mission.
      • Kimsuky employs common social engineering tactics, spearphishing, and watering hole attacks to exfiltrate desired information from victims.
      • Kimsuky is most likely to use spearphishing to gain initial access into victim hosts or networks.
      • Kimsuky conducts its intelligence collection activities against individuals and organizations in South Korea, Japan, and the United States.
      • Kimsuky focuses its intelligence collection activities on foreign policy and national security issues related to the Korean peninsula, nuclear policy, and sanctions.
      • Kimsuky specifically targets:
        • Individuals identified as experts in various fields,
        • Think tanks, and
        • South Korean government entities.
  • European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski made remarks at the European Union Agency for Cybersecurity’s (ENISA) Annual Privacy Forum and advocated for a European Union (EU) moratorium on the rollout of new technology like facial recognition and artificial intelligence (AI) until this “development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies.” He claimed the EU could maintain the rights of its people while taking the lead in cutting edge technologies. Wiewiórowski asserted:
    • Now we are entering a new phase of contactless tracking of individuals in public areas. Remote facial recognition technology has developed quickly; so much so that some authorities and private entities want to use it in many places. If this all becomes true, we could be tracked everywhere in the world.
    • I do not believe that such a development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies. The EDPS therefore, together with other authorities, supports a moratorium on the rollout of such technologies. The aim of this moratorium would be twofold. Firstly, an informed and democratic debate would take place. Secondly, the EU and Member States would put in place all the appropriate safeguards, including a comprehensive legal framework, to guarantee the proportionality of the respective technologies and systems in relation to their specific use.
    • As an example, any new regulatory framework for AI should, in my view:
      • apply both to EU Member States and to EU institutions, offices, bodies and agencies;
      • be designed to protect individuals, communities and society as a whole, from any negative impact;
      • propose a robust and nuanced risk classification scheme, ensuring that any significant potential harm posed by AI applications is matched with appropriate mitigating measures.
    • We must ensure that Europe’s leading role in AI, or any other technology in development, does not come at the cost of our fundamental rights. Europe must remain true to its values and provide the grounds for innovation. We will only get it right if we ensure that technology serves both individuals and society.
    • Faced with these developments, transparency is a starting point for proper debate and assessment. Transparency for citizens puts them in a position to understand what they are subject to, and to decide whether they want to accept the infringements of their rights.
  • The Office of the Privacy Commissioner of Canada (OPC) and “its international counterparts” laid out their thinking on “stronger privacy protections and greater accountability in the development and use of facial recognition technology and artificial intelligence (AI) systems” at the recent Global Privacy Assembly. The OPC summarized the two resolutions adopted at the assembly:
    • the resolution on facial recognition technology acknowledges that this technology can benefit security and public safety. However, it asserts that facial recognition can erode data protection, privacy and human rights because it is highly intrusive and enables widespread surveillance that can produce inaccurate results. The resolution also calls on data protection authorities to work together to develop principles and expectations that strengthen data protection and ensure privacy by design in the development of innovative uses of this technology.
    • a resolution on the development and use of AI systems that urges organizations developing or using them to ensure human accountability for AI systems and address adverse impacts on human rights. The resolution encourages governments to amend personal data protection laws to make clear legal obligations for accountability in the development and use of AI. It also calls on governments, public authorities and other stakeholders to work with data protection authorities to ensure legal compliance, accountability and ethics in the development and use of AI systems.
  • The Alliance for Securing Democracy (ASD) at the German Marshall Fund of the United States (GMFUS) issued a report, “A Future Internet for Democracies: Contesting China’s Push for Dominance in 5G, 6G, and the Internet of Everything” that “provides a roadmap for contesting China’s growing dominance in this critical information arena across infrastructure, application, and governance dimensions—one that doubles down on geostrategic interests and allied cooperation.” ASD stated “[a]n allied approach that is rooted firmly in shared values and resists an authoritarian divide-and-conquer strategy is vital for the success of democracies in commercial, military, and governance domains.” ASD asserted:
    • The United States and its democratic allies are engaged in a contest for the soul of the Future Internet. Conceived as a beacon of free expression with the power to tear down communication barriers across free and unfree societies alike, the Internet today faces significant challenges to its status as the world’s ultimate connector. In creating connectivity and space for democratic speech, it has also enabled new means of authoritarian control and the suppression of human rights through censorship and surveillance. As tensions between democracies and the People’s Republic of China (PRC) heat up over Internet technologies, the prospect of a dichotomous Internet comes more sharply into focus: a democratic Internet where information flows freely and an authoritarian Internet where it is tightly controlled—separated not by an Iron Curtain, but a Silicon one. The Future Internet is deeply enmeshed in the dawning information contest between autocracies and democracies. It is the base layer—the foundation—on which communication takes place and the entry point into narrative and societal influence. How the next generation of Internet technologies are created, defined, governed, and ultimately used will have an outsized impact on this information contest—and the larger geopolitical contest—between democracy and authoritarianism.
    • ASD found:
      • The Chinese Communist Party (CCP) has a history of creating infrastructure dependence and using it for geopolitical leverage. As such, China’s global market dominance in Future Internet infrastructure carries unacceptable risks for democracies.
      • The contest to shape 6G standards is already underway, with China leading the charge internationally. As the United States ponders how it ended up on the back foot on 5G, China is moving ahead with new proposals that would increase authoritarian control and undermine fundamental freedoms.
      • The battle over the Future Internet is playing out in the Global South. As more developed nations eschew Chinese network equipment, democracies’ response has largely ignored this global build-out of networks and applications in the proving ground of the developing world that threaten both technological competitiveness and universal rights.
      • China is exporting “technology to anticipate crime”—a dystopian future police state. “Minority Report”-style pre-criminal arrests decimate the practice of the rule of law centered in the presumption of innocence.
      • Personal Data Exfiltration: CCP entities see “Alternative Data” as “New Oil” for AI-driven applications in the Internet-of-Everything. These applications provide new and expanded avenues for mass data collection, as much as they depend on this data to succeed–giving China the means and the motivation to vacuum up the world’s data.
      • Data in, propaganda out: Future Internet technology presents opportunities to influence the information environment, including the development of information applications that simultaneously perform big data collection. Chinese companies are building information platforms into application technologies, reimagining both the public square and private locales as tools for propaganda.
      • Already victims of intellectual property theft by China, the United States and its democratic partners are ill-prepared to secure sensitive information as the Future Internet ecosystem explodes access points. This insecurity will continue to undermine technological competitiveness and national security and compound these effects in new ways.
      • China outnumbers the United States nearly two-to-one on participation in and leadership of critical international Future Internet standards-setting efforts. Technocratic standards bodies are becoming unlikely loci of great power technical competition, as Beijing uses leadership posts to shape the narrative and set the course for the next generation of Internet technologies to support China’s own technological leadership, governance norms, and market access.
      • The world’s oldest UN agency is being leveraged as a propaganda mouthpiece for the CCP’s AI and Future Internet agenda, whitewashing human rights abuses under a banner of “AI for Good.” The upshot is an effort to shape the UN Sustainable Development agenda to put economic development with authoritarian technology–not individual liberty—at their center.
      • A symbiotic relationship has developed between China’s Belt and Road Initiative and UN agencies involved in Future Internet and digital development. In this way, China leverages the United Nations enterprise to capture market dominance in next generation technologies.
  • A Dutch think tank has put together the “(best) practices of Asian countries and the United States in the field of digital connectivity” in the hopes of realizing European Commission President Ursula von der Leyen’s goal of making the next ten years “Europe’s Digital Decade.” The Clingendael Institute explained that the report “covers a wide range of topics related to digital regulation, the e-economy, and telecommunications infrastructure.” The Clingendael Institute asserted:
    • Central to the debate and any policy decision on digital connectivity are the trade-offs concerning privacy, business interests and national security. While all regulations are a combination of these three, the United States (US) has taken a path that prioritises the interests of businesses. This is manifested, for example, in the strong focus on free data flows, both personal and non-personal, to strengthen companies’ competitive advantage in collecting and using data to develop themselves. China’s approach, by contrast, strongly focuses on state security, wherein Chinese businesses are supported and leveraged to pre-empt threats to the country and, more specifically, to the Chinese Communist Party. This is evident from its strict data localisation requirements to prevent any data from being stored outside its borders and a mandatory security assessment for cross-border transfers. The European Union represents a third way, emphasising individuals’ privacy and a human-centred approach that puts people first, and includes a strong focus on ethics, including in data-protection regulations. This Clingendael Report aims to increase awareness and debate about the trade-offs of individual, state and business interests in all subsets of digital connectivity. This is needed to reach a more sustainable EU approach that will outlast the present decade. After all, economic competitiveness is required to secure Europe and to further its principled approach to digital connectivity in the long term. The analysis presented here covers a wide range of topics within digital connectivity’s three subsets: regulation; business; and telecommunications infrastructure. Aiming to contribute to improved European policy-making, this report discusses (best) practices of existing and rising digital powers in Asia and the United States. In every domain, potential avenues for cooperation with those countries are explored as ways forward for the EU.
    • Findings show that the EU and its member states are slowly but steadily moving from being mainly a regulatory power to also claiming their space as a player in the digitalised world. Cloud computing initiative GAIA-X is a key example, constituting a proactive alternative to American and Chinese Cloud providers that is strongly focused on uniting small European initiatives to create a strong and sustainable Cloud infrastructure. Such initiatives, including also the more recent Next Generation Internet (NGI), not only help defend and push European digital norms and standards, but also assist the global competitiveness of European companies and business models by facilitating the availability of large data-sets as well as scaling up. Next to such ‘EU only’ initiatives, working closely together with like-minded partners will benefit the EU and its member states as they seek to finetune and implement their digital strategies. The United States and Asian partners, particularly Japan, South Korea, India and Singapore, are the focus of attention here.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by David Peterson from Pixabay

Further Reading, Other Developments, and Coming Events (22 October)

Further Reading

  • “A deepfake porn Telegram bot is being used to abuse thousands of women” By Matt Burgess — WIRED UK. A bot set loose on Telegram can take pictures of women, and apparently teens as well, and “takes off” their clothing, rendering fake nude images of people who never posed for them. This seems to be the next iteration of deepfake porn, a problem that will surely worsen until governments legislate against it and technology companies have incentives to locate and take down such material.
  • “The Facebook-Twitter-Trump Wars Are Actually About Something Else” By Charlie Warzel — The New York Times. This piece makes the case that there are no easy fixes for American democracy or for misinformation on social media platforms.
  • “Facebook says it rejected 2.2m ads for breaking political campaigning rules” — Agence France-Presse. Facebook’s Vice President of Global Affairs and Communications Nick Clegg said the social media giant is employing artificial intelligence and humans to find and remove political advertisements that violate policy, in order to avoid a repeat of 2016, when untrue information and misinformation played roles in both Brexit and the election of Donald Trump as President of the United States.
  • “Huawei Fallout—Game-Changing New China Threat Strikes At Apple And Samsung” By Zak Doffman — Forbes. Smartphone manufacturers from the People’s Republic of China (PRC) appear ready to step into the projected void caused by the United States (U.S.) choking off Huawei’s access to chips. Xiaomi and Oppo have already seen sales surge worldwide and are poised to pick up where Huawei is being forced to leave off, perhaps demonstrating the limits of U.S. power to blunt the rise of PRC technology companies.
  • “As Local News Dies, a Pay-for-Play Network Rises in Its Place” By Davey Alba and Jack Nicas — The New York Times. With the decline and demise of many local media outlets in the United States, new groups are stepping into the void, some of them politically minded but not transparent about their biases. The organization uncovered in this article is nakedly Republican and is running and planting articles at both legitimate and artificial news sites for pay. Sometimes conservative donors pay, sometimes campaigns do. Democrats are engaged in the same activity but apparently to a lesser extent. These sorts of activities will only further erode faith in the U.S. media.
  • “Forget Antitrust Laws. To Limit Tech, Some Say a New Regulator Is Needed.” By Steve Lohr — The New York Times. This piece argues that antitrust enforcement actions are plodding, tending to take years to finish. Consequently, this body of law is inadequate to the task of addressing the market dominance of big technology companies. Instead, the argument goes, a new regulatory body is needed, along the lines of those overseeing the financial services industry, that would be more nimble than antitrust enforcement. Given the regulatory problems in that industry, however, this may not be the best model.
  • “‘Do Not Track’ Is Back, and This Time It Might Work” By Gilad Edelman — WIRED. Looking to leverage the “California Consumer Privacy Act” (CCPA) (AB 375) requirement that regulated entities respect and effectuate a one-time opt-out mechanism, a group of entities has come together to build and roll out the Global Privacy Control. In theory, users could install this technical specification on their phones and computers, use it once, and then all websites would be on notice regarding that person’s privacy preferences. Such a mechanism would address the problem identified in Consumer Reports’ recent study of how difficult it is to opt out of having one’s personal information sold.
  • “EU countries sound alarm about growing anti-5G movement” By Laurens Cerulus — Politico. 15 European Union (EU) nations wrote the European Commission (EC) warning that the nascent anti-5G movement, borne of conspiracy thinking and misinformation, threatens the EU’s position vis-à-vis the United States (U.S.) and the People’s Republic of China (PRC). There have been more than 200 documented arson attacks in the EU, with the most having occurred in the United Kingdom, France, and the Netherlands. These nations called for a more forceful debunking of the lies and misinformation being spread about 5G.
  • “Security firms call Microsoft’s effort to disrupt botnet to protect against election interference ineffective” By Jay Greene — The Washington Post. Microsoft seemingly acted alongside United States (U.S.) Cyber Command to take down and impair the operation of Trickbot, but now cybersecurity experts are questioning how effective Microsoft’s efforts really were. Researchers have shown the Russian-operated Trickbot has already stood its operations back up and dispersed across servers around the world, showing how difficult it is to address some cyber threats.
  • “Governments around the globe find ways to abuse Facebook” By Sara Fischer and Ashley Gold — Axios. This piece puts a different spin than a recent BuzzFeed News article on the challenges Facebook faces in countries around the world, especially those whose governments ruthlessly use the platform to spread lies and misinformation. The new article paints Facebook as a well-meaning company being taken advantage of, while the other portrays a company callous about content moderation except in nations where it causes political problems, such as the United States, the European Union, and other Western democracies.
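For readers curious about the mechanics of the Global Privacy Control item above: under the current GPC proposal, a participating browser or extension expresses the opt-out preference by sending the HTTP request header `Sec-GPC: 1` with every request. A server honoring the CCPA opt-out might check for that signal roughly as follows (an illustrative sketch; the function name and handling are this post’s own, not drawn from the specification):

```python
def request_signals_gpc_opt_out(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control signal.

    Under the GPC proposal, user agents express the do-not-sell
    preference by sending the header `Sec-GPC: 1` with each request.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"
```

A site subject to the CCPA would treat a True result as a standing request to opt out of the sale of that user’s personal information, with no per-site dialog required.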

Other Developments

  • The United States (U.S.) Department of Justice’s (DOJ) Cyber-Digital Task Force (Task Force) issued “Cryptocurrency: An Enforcement Framework,” which “provides a comprehensive overview of the emerging threats and enforcement challenges associated with the increasing prevalence and use of cryptocurrency; details the important relationships that the Department of Justice has built with regulatory and enforcement partners both within the United States government and around the world; and outlines the Department’s response strategies.” The Task Force noted “[t]his document does not contain any new binding legal requirements not otherwise already imposed by statute or regulation.” The Task Force summarized the report:
    • [I]n Part I, the Framework provides a detailed threat overview, cataloging the three categories into which most illicit uses of cryptocurrency typically fall: (1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; and (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself. 
    • Part II explores the various legal and regulatory tools at the government’s disposal to confront the threats posed by cryptocurrency’s illicit uses, and highlights the strong and growing partnership between the Department of Justice and the Securities and Exchange Commission, the Commodity Futures Trading Commission, and agencies within the Department of the Treasury, among others, to enforce federal law in the cryptocurrency space.
    • Finally, the Enforcement Framework concludes in Part III with a discussion of the ongoing challenges the government faces in cryptocurrency enforcement—particularly with respect to business models (employed by certain cryptocurrency exchanges, platforms, kiosks, and casinos), and to activity (like “mixing” and “tumbling,” “chain hopping,” and certain instances of jurisdictional arbitrage) that may facilitate criminal activity.    
  • The White House’s Office of Science and Technology Policy (OSTP) has launched a new website for the United States’ (U.S.) quantum initiative and released a report titled “Quantum Frontiers: Report On Community Input To The Nation’s Strategy For Quantum Information Science.” The Quantum Initiative flows from the “National Quantum Initiative Act” (P.L. 115-368) “to provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” The OSTP explained that the report “outlines eight frontiers that contain core problems with fundamental questions confronting quantum information science (QIS) today:
    • Expanding Opportunities for Quantum Technologies to Benefit Society
    • Building the Discipline of Quantum Engineering
    • Targeting Materials Science for Quantum Technologies
    • Exploring Quantum Mechanics through Quantum Simulations
    • Harnessing Quantum Information Technology for Precision Measurements
    • Generating and Distributing Quantum Entanglement for New Applications
    • Characterizing and Mitigating Quantum Errors
    • Understanding the Universe through Quantum Information
    • OSTP asserted “[t]hese frontier areas, identified by the QIS research community, are priorities for the government, private sector, and academia to explore in order to drive breakthrough R&D.”
  • The New York Department of Financial Services (NYDFS) published its report on the July 2020 Twitter hack during which a team of hackers took over a number of high-profile accounts (e.g. Barack Obama, Kim Kardashian West, Jeff Bezos, and Elon Musk) in order to perpetrate a cryptocurrency scam. The NYDFS has jurisdiction over cryptocurrencies and the companies dealing in them in New York. The NYDFS found that the hackers used the most basic means to acquire permission to take over accounts. The NYDFS explained:
    • Given that Twitter is a publicly traded, $37 billion technology company, it was surprising how easily the Hackers were able to penetrate Twitter’s network and gain access to internal tools allowing them to take over any Twitter user’s account. Indeed, the Hackers used basic techniques more akin to those of a traditional scam artist: phone calls where they pretended to be from Twitter’s Information Technology department. The extraordinary access the Hackers obtained with this simple technique underscores Twitter’s cybersecurity vulnerability and the potential for devastating consequences. Notably, the Twitter Hack did not involve any of the high-tech or sophisticated techniques often used in cyberattacks–no malware, no exploits, and no backdoors.
    • The implications of the Twitter Hack extend far beyond this garden-variety fraud. There are well-documented examples of social media being used to manipulate markets and interfere with elections, often with the simple use of a single compromised account or a group of fake accounts. In the hands of a dangerous adversary, the same access obtained by the Hackers–the ability to take control of any Twitter users’ account–could cause even greater harm.
    • The Twitter Hack demonstrates the need for strong cybersecurity to curb the potential weaponization of major social media companies. But our public institutions have not caught up to the new challenges posed by social media. While policymakers focus on antitrust and content moderation problems with large social media companies, their cybersecurity is also critical. In other industries that are deemed critical infrastructure, such as telecommunications, utilities, and finance, we have established regulators and regulations to ensure that the public interest is protected. With respect to cybersecurity, that is what is needed for large, systemically important social media companies.
    • The NYDFS recommended cybersecurity measures that cryptocurrency companies should implement to avoid similar hacks, pointing to its own cybersecurity regulations that bind the entities it regulates in New York. The NYDFS also called for a national regulator to address the lack of a dedicated regulator of Twitter and other massive social media platforms. The NYDFS asserted:
      • Social media companies currently have no dedicated regulator. They are subject to the same general oversight applicable to other companies. For instance, the SEC’s regulations for all public companies apply to public social media companies, and antitrust and related laws and regulations enforced by the Department of Justice and the FTC apply to social media companies as they do to all companies. Social media companies are also subject to generally applicable laws, such as the California Consumer Privacy Act and the New York SHIELD Act. The European Union’s General Data Protection Regulation, which regulates the storage and use of personal data, also applies to social media entities doing business in Europe.
      • But there are no regulators that have the authority to uniformly regulate social media platforms that operate over the internet, and to address the cybersecurity concerns identified in this Report. That regulatory vacuum must be filled.
      • A useful starting point is to create a “systemically important” designation for large social media companies, like the designation for critically important bank and non-bank financial institutions. In the wake of the 2007-08 financial crisis, Congress established a new regulatory framework for financial institutions that posed a systemic threat to the financial system of the United States. An institution could be designated as a Systemically Important Financial Institution (“SIFI”) “where the failure of or a disruption to the functioning of a financial market utility or the conduct of a payment, clearing, or settlement activity could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States.”
      • The risks posed by social media to our consumers, economy, and democracy are no less grave than the risks posed by large financial institutions. The scale and reach of these companies, combined with the ability of adversarial actors who can manipulate these systems, require a similarly bold and assertive regulatory approach.
      • The designation of an institution as a SIFI is made by the Financial Stability Oversight Council (“FSOC”), which Congress established to “identify risks to the financial stability of the United States” and to provide enhanced supervision of SIFIs.[67] The FSOC also “monitors regulatory gaps and overlaps to identify emerging sources of systemic risk.” In determining whether a financial institution is systemically important, the FSOC considers numerous factors including: the effect that a failure or disruption to an institution would have on financial markets and the broader financial system; the nature of the institution’s transactions and relationships; the nature, concentration, interconnectedness, and mix of the institution’s activities; and the degree to which the institution is regulated.
      • An analogue to the FSOC should be established to identify systemically important social media companies. This new Oversight Council should evaluate the reach and impact of social media companies, as well as the society-wide consequences of a social media platform’s misuse, to determine which companies they should designate as systemically important. Once designated, those companies should be subject to enhanced regulation, such as through the provision of “stress tests” to evaluate the social media companies’ susceptibility to key threats, including cyberattacks and election interference.
      • Finally, the success of such oversight will depend on the establishment of an expert agency to oversee designated social media companies. Systemically important financial companies designated by the FSOC are overseen by the Federal Reserve Board, which has a long-established and deep expertise in banking and financial market stability. A regulator for systemically important social media would likewise need deep expertise in areas such as technology, cybersecurity, and disinformation. This expert regulator could take various forms; it could be a completely new agency or could reside within an established agency or at an existing regulator.
  • The Government Accountability Office (GAO) evaluated how well the Trump Administration has been implementing the “Open, Public, Electronic and Necessary Government Data Act of 2018” (OPEN Government Data Act) (P.L. 115-435). As the GAO explained, this statute “requires federal agencies to publish their information as open data using standardized, nonproprietary formats, making data available to the public open by default, unless otherwise exempt…[and] codifies and expands on existing federal open data policy including the Office of Management and Budget’s (OMB) memorandum M-13-13 (M-13-13), Open Data Policy—Managing Information as an Asset.”
    • The GAO stated
      • To continue moving forward with open government data, the issuance of OMB implementation guidance should help agencies develop comprehensive inventories of their data assets, prioritize data assets for publication, and decide which data assets should or should not be made available to the public.
      • Implementation of this statutory requirement is critical to agencies’ full implementation and compliance with the act. In the absence of this guidance, agencies, particularly agencies that have not previously been subject to open data policies, could fall behind in meeting their statutory timeline for implementing comprehensive data inventories.
      • It is also important for OMB to meet its statutory responsibility to biennially report on agencies’ performance and compliance with the OPEN Government Data Act and to coordinate with General Services Administration (GSA) to improve the quality and availability of agency performance data that could inform this reporting. Access to this information could inform Congress and the public on agencies’ progress in opening their data and complying with statutory requirements. This information could also help agencies assess their progress and improve compliance with the act.
    • The GAO made three recommendations:
      • The Director of OMB should comply with its statutory requirement to issue implementation guidance to agencies to develop and maintain comprehensive data inventories. (Recommendation 1)
      • The Director of OMB should comply with the statutory requirement to electronically publish a report on agencies’ performance and compliance with the OPEN Government Data Act. (Recommendation 2)
      • The Director of OMB, in collaboration with the Administrator of GSA, should establish policy to ensure the routine identification and correction of errors in electronically published performance information. (Recommendation 3)
  • The United States’ (U.S.) National Security Agency (NSA) issued a cybersecurity advisory titled “Chinese State-Sponsored Actors Exploit Publicly Known Vulnerabilities,” that “provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.” The NSA recommended a number of mitigations generally for U.S. entities, including:
    • Keep systems and products updated and patched as soon as possible after patches are released.
    • Expect that data stolen or modified (including credentials, accounts, and software) before the device was patched will not be alleviated by patching, making password changes and reviews of accounts a good practice.
    • Disable external management capabilities and set up an out-of-band management network.
    • Block obsolete or unused protocols at the network edge and disable them in device configurations.
    • Isolate Internet-facing services in a network Demilitarized Zone (DMZ) to reduce the exposure of the internal network.
    • Enable robust logging of Internet-facing services and monitor the logs for signs of compromise.
    • The NSA then proceeded to recommend specific fixes.
    • The NSA provided this policy backdrop:
      • One of the greatest threats to U.S. National Security Systems (NSS), the U.S. Defense Industrial Base (DIB), and Department of Defense (DOD) information networks is Chinese state-sponsored malicious cyber activity. These networks often undergo a full array of tactics and techniques used by Chinese state-sponsored cyber actors to exploit computer networks of interest that hold sensitive intellectual property, economic, political, and military information. Since these techniques include exploitation of publicly known vulnerabilities, it is critical that network defenders prioritize patching and mitigation efforts.
      • The same process for planning the exploitation of a computer network by any sophisticated cyber actor is used by Chinese state-sponsored hackers. They often first identify a target, gather technical information on the target, identify any vulnerabilities associated with the target, develop or re-use an exploit for those vulnerabilities, and then launch their exploitation operation.
  • Belgium’s data protection authority (DPA) (Autorité de protection des données in French or Gegevensbeschermingsautoriteit in Dutch) (APD-GBA) has reportedly found that the Transparency & Consent Framework (TCF) developed by the Interactive Advertising Bureau (IAB) violates the General Data Protection Regulation (GDPR). The Real-Time Bidding (RTB) system used for online behavioral advertising allegedly transmits the personal information of European Union residents without their consent even before a popup appears on their screen asking for consent. The APD-GBA is the lead DPA in the EU investigating the RTB and will likely now circulate its findings and recommendations to other EU DPAs before any enforcement commences.
  • None Of Your Business (noyb) announced “[t]he Irish High Court has granted leave for a “Judicial Review” against the Irish Data Protection Commission (DPC) today…[and] [t]he legal action by noyb aims to swiftly implement the [Court of Justice for the European Union (CJEU)] Decision prohibiting Facebook’s” transfer of personal data from the European Union to the United States (U.S.). Last month, after the DPC directed Facebook to stop transferring the personal data of EU citizens to the U.S., the company filed suit in the Irish High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge.
    • noyb further asserted:
      • Instead of making a decision in the pending procedure, the DPC has started a second, new investigation into the same subject matter (“Parallel Procedure”), as widely reported (see original reporting by the WSJ). No logical reasons for the Parallel Procedure were given, but the DPC has maintained that Mr Schrems will not be heard in this second case, as he is not a party in this Parallel Procedure. This Parallel Procedure was criticised by Facebook publicly (link) and instantly blocked by a Judicial Review by Facebook (see report by Reuters).
      • Today’s Judicial Review by noyb is in many ways the counterpart to Facebook’s Judicial Review: While Facebook wants to block the second procedure by the DPC, noyb wants to move the original complaints procedure towards a decision.
      • Earlier this summer, the CJEU struck down the adequacy decision for the agreement between the EU and U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e. the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCC. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • Most likely with the aim of helping British chances for an adequacy decision from the European Union (EU), the United Kingdom’s Information Commissioner’s Office (ICO) published guidance that “discusses the right of access [under the General Data Protection Regulation (GDPR)] in detail.” The ICO explained the guidance “is aimed at data protection officers (DPOs) and those with specific data protection responsibilities in larger organisations…[but] does not specifically cover the right of access under Parts 3 and 4 of the Data Protection Act 2018.”
    • The ICO explained
      • The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy of their personal data from you, as well as other supplementary information.
  • The Government Accountability Office (GAO) released the report on the data security and data privacy practices of public schools that the House Education and Labor Committee’s Ranking Member had requested. Representative Virginia Foxx (R-NC) asked the GAO “to review the security of K-12 students’ data. This report examines (1) what is known about recently reported K-12 cybersecurity incidents that compromised student data, and (2) the characteristics of school districts that experienced these incidents.” Strangely, the report did not have the GAO’s customary conclusions or recommendations. Nonetheless, the GAO found:
    • Ninety-nine student data breaches reported from July 1, 2016 through May 5, 2020 compromised the data of students in 287 school districts across the country, according to our analysis of K-12 Cybersecurity Resource Center (CRC) data (see fig. 3). Some breaches involved a single school district, while others involved multiple districts. For example, an attack on a vendor system in the 2019-2020 school year affected 135 districts. While information about the number of students affected was not available for every reported breach, examples show that some breaches affected thousands of students, for instance, when a cybercriminal accessed 14,000 current and former students’ personally identifiable information (PII) in one district.
    • The 99 reported student data breaches likely understate the number of breaches that occurred, for different reasons. Reported incidents sometimes do not include sufficient information to discern whether data were breached. We identified 15 additional incidents in our analysis of CRC data in which student data might have been compromised, but the available information was not definitive. In addition, breaches can go undetected for some time. In one example, the personal information of hundreds of thousands of current and former students in one district was publicly posted for 2 years before the breach was discovered.
    • The CRC identified 28 incidents involving videoconferences from April 1, 2020 through May 5, 2020, some of which disrupted learning and exposed students to harm. In one incident, 50 elementary school students were exposed to pornography during a virtual class. In another incident in a different district, high school students were targeted with hate speech during a class, resulting in the cancellation that day of all classes using the videoconferencing software. These incidents also raise concerns about the potential for violating students’ privacy. For example, one district is reported to have instructed teachers to record their class sessions. Teachers said that students’ full names were visible to anyone viewing the recording.
    • The GAO found gaps in the protection and enforcement of student privacy by the United States government:
      • [The Department of] Education is responsible for enforcing Family Educational Rights and Privacy Act (FERPA), which addresses the privacy of PII in student education records and applies to all schools that receive funds under an applicable program administered by Education. If parents or eligible students believe that their rights under FERPA have been violated, they may file a formal complaint with Education. In response, Education is required to take appropriate actions to enforce and deal with violations of FERPA. However, because the department’s authority under FERPA is directly related to the privacy of education records, Education’s security role is limited to incidents involving potential violations under FERPA. Further, FERPA amendments have not directly addressed educational technology use.
      • The “Children’s Online Privacy Protection Act” (COPPA) requires the Federal Trade Commission (FTC) to issue and enforce regulations concerning children’s privacy. The COPPA Rule, which took effect in 2000 and was later amended in 2013, requires operators of covered websites or online services that collect personal information from children under age 13 to provide notice and obtain parental consent, among other things. COPPA generally applies to the vendors who provide educational technology, rather than to schools directly. However, according to FTC guidance, schools can consent on behalf of parents to the collection of students’ personal information if such information is used for a school-authorized educational purpose and for no other commercial purpose.
  • Upturn, an advocacy organization that “advances equity and justice in the design, governance, and use of technology,” has released a report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. There have long been means and vendors available in the U.S. and abroad for breaking into phones despite the claims of a number of nations like the Five Eyes (U.S., the United Kingdom, Australia, Canada, and New Zealand) that default end-to-end encryption was a growing problem that allowed those preying on children and engaged in terrorism to go undetected. In terms of possible bias, Upturn is “supported by the Ford Foundation, the Open Society Foundations, the John D. and Catherine T. MacArthur Foundation, Luminate, the Patrick J. McGovern Foundation, and Democracy Fund.”
    • Upturn stated:
      • Every day, law enforcement agencies across the country search thousands of cellphones, typically incident to arrest. To search phones, law enforcement agencies use mobile device forensic tools (MDFTs), a powerful technology that allows police to extract a full copy of data from a cellphone — all emails, texts, photos, location, app data, and more — which can then be programmatically searched. As one expert puts it, with the amount of sensitive information stored on smartphones today, the tools provide a “window into the soul.”
      • This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.
    • Upturn argued:
      • Law enforcement use these tools to investigate not only cases involving major harm, but also for graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. Given how routine these searches are today, together with racist policing policies and practices, it’s more than likely that these technologies disparately affect and are used against communities of color.
      • We believe that MDFTs are simply too powerful in the hands of law enforcement and should not be used. But recognizing that MDFTs are already in widespread use across the country, we offer a set of preliminary recommendations that we believe can, in the short-term, help reduce the use of MDFTs. These include:
        • banning the use of consent searches of mobile devices,
        • abolishing the plain view exception for digital searches,
        • requiring easy-to-understand audit logs,
        • enacting robust data deletion and sealing requirements, and
        • requiring clear public logging of law enforcement use.

Coming Events

  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including that of Nathan Simington to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Mehmet Turgut Kirkgoz from Pixabay

PRC Response To U.S. Clean Networks

The PRC responds to the U.S. Clean Networks program with a call for international, multilateral standards

In a speech given by the People’s Republic of China’s (PRC) Foreign Minister Wang Yi, the PRC proposed international, multilateral cooperation in addressing data security around the globe. In doing so, Wang took some obvious shots at recent policies announced by the United States (U.S.) and at longer-term actions such as surveillance by the National Security Agency (NSA). The PRC floated a “Global Initiative on Data Security” that would, on its face, seem to argue against actions being undertaken by Beijing against the U.S. and some of its allies. For example, this initiative would bar the stealing of “important data,” yet the PRC stands accused of hacking Australia’s Parliament. Nonetheless, the PRC is likely seeking to position itself as more internationalist than the U.S., which under President Donald Trump has become more isolationist and unilateralist in its policies. The PRC is also calling for the rule of law, especially around “security issues,” most likely a reference to the ongoing trade/national security dispute between the two nations playing out largely in their technology sectors.

Wang’s speech came roughly a month after the U.S. Department of State unveiled its Clean Networks program, an initiative aimed at countering the national security risks posed by PRC technology companies, hardware, software, and apps (see here for more analysis). Wang even went so far as to condemn unilateral actions by one nation in particular looking to institute a “clean” networks program, framing the program as an effort to blunt the PRC’s competitive advantage by playing on national security fears. The Trump Administration has sought, with some success, to persuade, cajole, and lean on other nations to forgo use of Huawei equipment and services in building their next generation 5G networks.

And yet, since the Clean Networks program lacks much in the way of apparent enforcement mechanisms, the Department of State’s announcement may have had more to do with optics, as the Trump Administration and many of its Republican allies in Congress have pinned the blame for COVID-19 on the PRC and cast the country as the primary threat to the U.S. This has played out as the Trump Administration has choked off access to advanced semiconductors and chips for PRC firms, banned TikTok and WeChat, and ordered ByteDance to sell musical.ly, the app and platform that served as the fulcrum by which TikTok was launched in the U.S.

Wang asserted the PRC “believes that to effectively address the risks and challenges to data security, the following principles must be observed:

  • First, uphold multilateralism. Pursuing extensive consultation and joint contribution for shared benefits is the right way forward for addressing the deficit in global digital governance. It is important to develop a set of international rules on data security that reflect the will and respect the interests of all countries through broad-based participation. Bent on unilateral acts, a certain country keeps making groundless accusations against others in the name of “clean” network and used security as a pretext to prey on enterprises of other countries who have a competitive edge. Such blatant acts of bullying must be opposed and rejected.
  • Second, balance security and development. Protecting data security is essential for the sound growth of digital economy. Countries have the right to protect data security according to law. That said, they are also duty-bound to provide an open, fair and non-discriminatory environment for all businesses. Protectionism in the digital domain runs counter to the laws of economic development and the trend of globalization. Protectionist practices undermine the right of global consumers to equally access digital services and will eventually hold back the country’s own development.
  • Third, ensure fairness and justice. Protection of digital security should be based on facts and the law. Politicization of security issues, double standards and slandering others violate the basic norms governing international relations, and seriously disrupt and hamper global digital cooperation and development.

Wang continued, “[i]n view of the new issues and challenges emerging in this field, China would like to propose a Global Initiative on Data Security, and looks forward to the active participation of all parties…[and] [l]et me briefly share with you the key points of our Initiative:

  • First, approach data security with an objective and rational attitude, and maintain an open, secure and stable global supply chain.
  • Second, oppose using ICT activities to impair other States’ critical infrastructure or steal important data.
  • Third, take actions to prevent and put an end to activities that infringe upon personal information, oppose abusing ICT to conduct mass surveillance against other States or engage in unauthorized collection of personal information of other States.
  • Fourth, ask companies to respect the laws of host countries, desist from coercing domestic companies into storing data generated and obtained overseas in one’s own territory.
  • Fifth, respect the sovereignty, jurisdiction and governance of data of other States, avoid asking companies or individuals to provide data located in other States without the latter’s permission.
  • Sixth, meet law enforcement needs for overseas data through judicial assistance or other appropriate channels.
  • Seventh, ICT products and services providers should not install backdoors in their products and services to illegally obtain user data.
  • Eighth, ICT companies should not seek illegitimate interests by taking advantage of users’ dependence on their products.

As mentioned in the opening paragraph of this article, the U.S. and many of its allies and partners would argue the PRC has transgressed a number of these proposed rules. However, the Foreign Ministry was very clever in how it drafted and translated these principles, for in the second key principle, the PRC is proposing that no country should use “ICT activities to impair other States’ critical infrastructure.” And yet, two international media outlets reported that the African Union’s (AU) computers were transmitting reams of sensitive data to Shanghai daily between 2012 and 2017. If this claim is true, and the PRC’s government was behind the exfiltration, is it fair to say the AU’s critical infrastructure was impaired? One could argue the infrastructure was not impaired even though there was apparently massive data exfiltration. Likewise, in the third key principle, the PRC appears to be condemning mass surveillance of other states, but just this week a PRC company was accused of compiling the personal information of more than 2.4 million people worldwide, many of them in influential positions like the Prime Ministers of the United Kingdom and Australia. And yet, if this is the extent of the surveillance, it is not of the same magnitude as U.S. surveillance over the better part of the last two decades. Moreover, the PRC is not opposing a country’s mass surveillance of its own people, which the PRC is regularly accused of conducting, especially against its Uighur minority.


Photo by Hanson Lu on Unsplash

Further Reading, Other Developments, and Coming Events (13 August)

Here are Further Reading, Other Developments, and Coming Events:

Coming Events

  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Other Developments

  • Senate Intelligence Committee Acting Chair Marco Rubio (R-FL) and Vice Chairman Mark Warner (D-VA) released a statement indicating the committee had voted to adopt the fifth and final volume of its investigation of the Russian Federation’s interference in the 2016 election. The committee had submitted the report to the Intelligence Community for vetting and has received it back with edits and redactions. The report could be released sometime over the next few weeks. Rubio and Warner stated “the Senate Intelligence Committee voted to adopt the classified version of the final volume of the Committee’s bipartisan Russia investigation. In the coming days, the Committee will work to incorporate any additional views, as well as work with the Intelligence Community to formalize a properly redacted, declassified, publicly releasable version of the Volume 5 report.” The Senate Intelligence Committee has released four previous volumes.
  • The National Institute of Standards and Technology (NIST) is accepting comments until 11 September on draft Special Publication 800-53B, “Control Baselines for Information Systems and Organizations,” a guidance document that will serve a key role in the United States government’s efforts to secure and protect the networks and systems it operates and those run by federal contractors. NIST explained:
    • This publication establishes security and privacy control baselines for federal information systems and organizations and provides tailoring guidance for those baselines. The use of the security control baselines is mandatory, in accordance with OMB Circular A-130 [OMB A-130] and the provisions of the Federal Information Security Modernization Act [FISMA], which requires the implementation of a set of minimum controls to protect federal information and information systems. Whereas use of the privacy control baseline is not mandated by law or [OMB A-130], SP 800-53B, along with other supporting NIST publications, is designed to help organizations identify the security and privacy controls needed to manage risk and satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974 [PRIVACT], selected OMB policies (e.g., [OMB A-130]), and designated Federal Information Processing Standards (FIPS), among others.
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released an “Election Vulnerability Reporting Guide” to provide “election administrators with a step-by-step guide, list of resources, and a template for establishing a successful vulnerability disclosure program to address possible vulnerabilities in their state and local election systems…[and] [t]he six steps include:
    • Step 1: Identify Systems Where You Would Accept Security Testing, and those Off-Limits
    • Step 2: Draft an Easy-to-Read Vulnerability Disclosure Policy (See Appendix III)
    • Step 3: Establish a Way to Receive Reports/Conduct Follow-On Communication
    • Step 4: Assign Someone to Thank and Communicate with Researchers
    • Step 5: Assign Someone to Vet and Fix the Vulnerabilities
    • Step 6: Consider Sharing Information with Other Affected Parties
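The guide’s Steps 2 and 3 are sometimes complemented in practice by a machine-readable pointer telling researchers where to send reports. One such proposed convention is a security.txt file; the sketch below is illustrative only, using hypothetical example.gov URLs and only the commonly used fields:

```text
# Conventionally served at https://example.gov/.well-known/security.txt (hypothetical domain)
Contact: mailto:security@example.gov
Policy: https://example.gov/vulnerability-disclosure-policy
Encryption: https://example.gov/pgp-key.txt
Preferred-Languages: en
```

Publishing the file at a predictable well-known path lets researchers find the reporting channel and the disclosure policy without guessing at contact addresses.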
  • The United Kingdom’s Information Commissioner’s Office (ICO) has issued “Guidance on AI and data protection” that “clarifies how you can assess the risks to rights and freedoms that AI can pose from a data protection perspective; and the appropriate measures you can implement to mitigate them.” The ICO explained “[w]hile data protection and ‘AI ethics’ overlap, this guidance does not provide generic ethical or design principles for your use of AI.” The ICO stated “[i]t corresponds to data protection principles, and is structured as follows:
    • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
    • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
    • part three addresses data minimisation and security; and
    • part four covers compliance with individual rights, including rights related to automated decision-making.
  • 20 state attorneys general wrote Facebook Chief Executive Officer Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg “to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination.” They also asked “that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.” The attorneys general said that “[b]ased on our collective experience, we believe that Facebook should take additional actions including the following steps—many of which are highlighted in Facebook’s recent Civil Rights Audit—to strengthen its commitment to civil rights and fighting disinformation and discrimination:
    • Aggressively enforce Facebook policies against hate speech and organized hate organizations: Although Facebook has developed policies against hate speech and organizations that peddle it, we remain concerned that Facebook’s policies on Dangerous Individuals and Organizations, including but not limited to its policies on white nationalist and white supremacist content, are not enforced quickly and comprehensively enough. Content that violates Facebook’s own policies too often escapes removal just because it comes as coded language, rather than specific magic words. And even where Facebook takes steps to address a particular violation, it often fails to proactively address the follow-on actions by replacement or splinter groups that quickly emerge.
    • Allow public, third-party audits of hate content and enforcement: To gauge the ongoing progress of Facebook’s enforcement efforts, independent experts should be permitted access to the data necessary to conduct regular, transparent third-party audits of hate and hate-related misinformation on the platform, including any information made available to the Global Oversight Board. As part of this effort, Facebook should capture data on the prevalence of different forms of hate content on the platform, whether or not covered by Facebook’s own community standards, thus allowing the public to determine whether enforcement of anti-hate policies differs based on the type of hate content at issue.
    • Commit to an ongoing, independent analysis of Facebook’s content population scheme and the prompt development of best practices guidance: By funneling users toward particular types of content, Facebook’s content population scheme, including its algorithms, can push users into extremist online communities that feature divisive and inflammatory messages, often directed at particular groups. Although Facebook has conducted research and considered programs to reduce this risk, there is still no mandatory guidance for coders and other teams involved in content population. Facebook should commit to an ongoing, independent analysis of its content population scheme, including its algorithms, and also continuously implement mandatory protocols as best practices are identified to curb bias and prevent recommendations of hate content and groups.
    • Expand policies limiting inflammatory advertisements that vilify minority groups: Although Facebook currently prohibits ads that claim that certain people, because of their membership in a protected group, pose a threat to the physical safety of communities or the nation, its policies still allow attacks that characterize such groups as threats to national culture or values. The current prohibition should be expanded to include such ads.
  • New Zealand’s Ministry of Statistics “launched the Algorithm Charter for Aotearoa New Zealand” that “signals that [the nation’s agencies] are committed to being consistent, transparent and accountable in their use of algorithms.”
    • The Ministry explained “[t]he Algorithm Charter is part of a wider ecosystem and works together with existing tools, networks and research, including:
      • Principles for the Safe and Effective Use of Data and Analytics (Privacy Commissioner and Government Chief Data Steward, 2018)
      • Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation and Otago University, 2019)
      • Trustworthy AI in Aotearoa – AI Principles (AI Forum New Zealand, 2020)
      • Open Government Partnership, an international agreement to increase transparency.
      • Data Protection and Use Policy (Social Wellbeing Agency, 2020)
      • Privacy, Human Rights and Ethics Framework (Ministry of Social Development).
  • The European Union (EU) imposed its first cyber sanctions under its Framework for a Joint EU Diplomatic Response to Malicious Cyber Activities (aka the cyber diplomacy toolbox) against six hackers and three entities from the Russian Federation, the People’s Republic of China (PRC), and the Democratic People’s Republic of Korea for the attack against the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands, the malware attacks known as WannaCry and NotPetya, and Operation Cloud Hopper. The EU’s cyber sanctions follow sanctions the United States has placed on a number of people and entities from the same nations, as well as indictments the U.S. Department of Justice has announced over the years. The sanctions are part of the effort to levy costs on nations and actors that conduct cyber attacks. The EU explained:
    • The attempted cyber-attack was aimed at hacking into the Wi-Fi network of the OPCW, which, if successful, would have compromised the security of the network and the OPCW’s ongoing investigatory work. The Netherlands Defence Intelligence and Security Service (DISS) (Militaire Inlichtingen- en Veiligheidsdienst – MIVD) disrupted the attempted cyber-attack, thereby preventing serious damage to the OPCW.
    • “WannaCry” disrupted information systems around the world by targeting information systems with ransomware and blocking access to data. It affected information systems of companies in the Union, including information systems relating to services necessary for the maintenance of essential services and economic activities within Member States.
    • “NotPetya” or “EternalPetya” rendered data inaccessible in a number of companies in the Union, wider Europe and worldwide, by targeting computers with ransomware and blocking access to data, resulting amongst others in significant economic loss. The cyber-attack on a Ukrainian power grid resulted in parts of it being switched off during winter.
    • “Operation Cloud Hopper” has targeted information systems of multinational companies in six continents, including companies located in the Union, and gained unauthorised access to commercially sensitive data, resulting in significant economic loss.
  • The United States’ Federal Communications Commission (FCC) is asking for comments on the Department of Commerce’s National Telecommunications and Information Administration’s (NTIA) petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship,” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic. Comments are due by 2 September.
  • The Australian Competition & Consumer Commission (ACCC) released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks between the platforms and news media businesses on a voluntary code broke down.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses,” according to the agency’s press release. In its initial filing, the ACCC claims that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services were unaware their location information was still being collected and used by Google, for it was not readily apparent that Web & App Activity also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released “core guidance documentation for the Trusted Internet Connections (TIC) program, developed to assist agencies in protecting modern information technology architectures and services.” CISA explained “In accordance with the Office of Management and Budget (OMB) Memorandum (M) 19-26: Update to the TIC Initiative, TIC 3.0 expands on the original initiative to drive security standards and leverage advances in technology to secure a wide spectrum of agency network architectures.” Specifically, CISA released three core guidance documents:
    • Program Guidebook (Volume 1) – Outlines the modernized TIC program and includes its historical context
    • Reference Architecture (Volume 2) – Defines the concepts of the program to guide and constrain the diverse implementations of the security capabilities
    • Security Capabilities Catalog (Volume 3) – Indexes security capabilities relevant to TIC
  • Senators Ron Wyden (D-OR), Bill Cassidy (R-LA) and ten other Members wrote the Federal Trade Commission (FTC) urging the agency “to investigate widespread privacy violations by companies in the advertising technology (adtech) industry that are selling private data about millions of Americans, collected without their knowledge or consent from their phones, computers, and smart TVs.” They asked the FTC “to use its authority to conduct broad industry probes under Section 6(b) of the FTC Act to determine whether adtech companies and their data broker partners have violated federal laws prohibiting unfair and deceptive business practices.” They argued “[t]he FTC should not proceed with its review of the Children’s Online Privacy Protection Act (COPPA) Rule before it has completed this investigation.”
  • “100 U.S. women lawmakers and current and former legislators from around the world,” including Speaker of the House Nancy Pelosi (D-CA), sent a letter to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg urging the company “to take decisive action to protect women from rampant and increasing online attacks on their platform that have caused many women to avoid or abandon careers in politics and public service.” They noted “[j]ust a few days ago, a manipulated and widely shared video that depicted Speaker Pelosi slurring her speech was once again circulating on major social media platforms, gaining countless views before TikTok, Twitter, and YouTube all removed the footage…[and] [t]he video remains on Facebook and is labeled “partly false,” continuing to gain millions of views.” The current and former legislators “called on Facebook to enforce existing rules, including:
    • Quick removal of posts that threaten candidates with physical violence, sexual violence or death, and that glorify, incite or praise violence against women; disable the relevant accounts, and refer offenders to law enforcement.
    • Eliminate malicious hate speech targeting women, including violent, objectifying or dehumanizing speech, statements of inferiority, and derogatory sexual terms;
    • Remove accounts that repeatedly violate terms of service by threatening, harassing or doxing or that use false identities to attack women leaders and candidates; and
    • Remove manipulated images or videos misrepresenting women public figures.
  • The United States’ Departments of Commerce and Homeland Security released an update “highlighting more than 50 activities led by industry and government that demonstrate progress in the drive to counter botnet threats.” In May 2018, the agencies submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats,” which identified a number of steps and prompted a follow-on document, “A Road Map Toward Resilience Against Botnets,” released in November 2018.
  • United States (U.S.) Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders released a joint statement explaining that “[t]he U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case.”
    • Maximillian Schrems filed a complaint against Facebook with Ireland’s Data Protection Commission (DPC) in 2013, alleging that the company’s transfer of his personal data violated his rights under European Union law because of the mass U.S. surveillance revealed by former National Security Agency (NSA) contractor Edward Snowden. Ultimately, this case resulted in a 2015 Court of Justice of the European Union (CJEU) ruling that invalidated the Safe Harbor agreement under which the personal data of EU residents was transferred to the US by commercial concerns. The EU and US executed a follow-on agreement, the EU-U.S. Privacy Shield, that was designed to address some of the problems the CJEU turned up, and the U.S. passed a law, the “Judicial Redress Act of 2015” (P.L. 114-126), to provide EU citizens a way to exercise their EU rights in US courts via the “Privacy Act of 1974.”
    • However, Schrems pressed on, challenging the legality of the European Commission’s 2016 adequacy decision approving the Privacy Shield agreement as well as the use of standard contractual clauses (SCC) by companies for the transfer of personal data to the US. The CJEU struck down the adequacy decision, throwing into doubt many entities’ transfers out of the EU into the U.S., but upheld SCCs in a way that suggested EU data protection authorities (DPA) may need to review all such agreements to ensure they comply with EU law.
  • The European Commission (EC) announced “an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The EC voiced its concern “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC detailed its “preliminary competition concerns:
    • Following its first phase investigation, the Commission has concerns about the impact of the transaction on the supply of online search and display advertising services (the sale of advertising space on, respectively, the result page of an internet search engine or other internet pages), as well as on the supply of ”ad tech” services (analytics and digital tools used to facilitate the programmatic sale and purchase of digital advertising). By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to Fitbit’s one.
    • The data collected via wrist-worn wearable devices appears, at this stage of the Commission’s review of the transaction, to be an important advantage in the online advertising markets. By increasing the data advantage of Google in the personalisation of the ads it serves via its search engine and displays on other internet pages, it would be more difficult for rivals to match Google’s online advertising services. Thus, the transaction would raise barriers to entry and expansion for Google’s competitors for these services, to the ultimate detriment of advertisers and publishers that would face higher prices and have less choice.
    • At this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The Commission will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed.
    • In addition, the Commission will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • In February, after the deal had been announced, the European Data Protection Board (EDPB) made clear its position that Google and Fitbit will need to scrupulously observe the General Data Protection Regulation’s privacy and data security requirements if the body is to sign off on the proposed $2.2 billion acquisition. Moreover, at present Google has not informed European Union (EU) regulators of the proposed deal. The deal comes at a time when both EU and U.S. regulators are already investigating Google for alleged antitrust and anticompetitive practices, and the EDPB’s opinion could carry weight in this process.
  • The United States’ (U.S.) Department of Homeland Security released a Privacy Impact Assessment for the U.S. Border Patrol (USBP) Digital Forensics Programs that details how the agency may conduct searches of electronic devices at the U.S. border and ports of entry. DHS explained:
    • As part of USBP’s law enforcement duties, USBP may search and extract information from electronic devices, including: laptop computers; thumb drives; compact disks; digital versatile disks (DVDs); mobile phones; subscriber identity module (SIM) cards; digital cameras; vehicles; and other devices capable of storing electronic information.
    • Last year, a U.S. District Court held that U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement’s (ICE) current practices for searches of smartphones and computers at the U.S. border are unconstitutional and the agencies must have reasonable suspicion before conducting such a search. However, the Court declined the plaintiffs’ request that the information taken off of their devices be expunged by the agencies. This ruling followed a Department of Homeland Security Office of the Inspector General (OIG) report that found CBP “did not always conduct searches of electronic devices at U.S. ports of entry according to its Standard Operating Procedures” and asserted that “[t]hese deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit [CBP’s] ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.”
    • In terms of a legal backdrop, the United States Supreme Court has found that searches and seizures of electronic devices at borders and airports are, under most circumstances, subject to lesser legal standards than those conducted elsewhere in the U.S. Generally, the government’s interest in securing the border against the flow of contraband and people not allowed to enter affords it considerable leeway from the warrant requirements that govern many other types of searches. However, in recent years two federal appeals courts (the Fourth and Ninth Circuits) have held that searches of electronic devices require suspicion on the part of government agents while another appeals court (the Eleventh Circuit) held differently. Consequently, there is not a uniform legal standard for these searches.
  • The Inter-American Development Bank (IDB) and the Organization of American States (OAS) released their second assessment of cybersecurity across Latin America and the Caribbean that used the Cybersecurity Capacity Maturity Model for Nations (CMM) developed at the University of Oxford’s Global Cyber Security Capacity Centre (GSCC). The IDB and OAS explained:
    • When the first edition of the report “Cybersecurity: Are We Ready in Latin America and the Caribbean?” was released in March 2016, the IDB and the OAS aimed to provide the countries of Latin America and the Caribbean (LAC) not only with a picture of the state of cybersecurity but also guidance about the next steps that should be pursued to strengthen national cybersecurity capacities. This was the first study of its kind, presenting the state of cybersecurity with a comprehensive vision and covering all LAC countries.
    • The great challenges of cybersecurity, like those of the internet itself, are of a global nature. Therefore, it is undeniable that the countries of LAC must continue to foster greater cooperation among themselves, while involving all relevant actors, as well as establishing a mechanism for monitoring, analysis, and impact assessment related to cybersecurity both nationally and regionally. More data in relation to cybersecurity would allow for the introduction of a culture of cyberrisk management that needs to be extended both in the public and private sectors. Countries must be prepared to adapt quickly to the dynamic environment around us and make decisions based on a constantly changing threat landscape. Our member states may manage these risks by understanding the impact on and the likelihood of cyberthreats to their citizens, organizations, and national critical infrastructure. Moving to the next level of maturity will require a comprehensive and sustainable cybersecurity policy, supported by the country’s political agenda, with allocation of financial resources and qualified human capital to carry it out.
    • The COVID-19 pandemic will pass, but events that will require intensive use of digital technologies so that the world can carry on will continue happening. The challenge of protecting our digital space will, therefore, continue to grow. It is the hope of the IDB and the OAS that this edition of the report will help LAC countries to have a better understanding of their current state of cybersecurity capacity and be useful in the design of the policy initiatives that will lead them to increase their level of cyberresilience.
  • The European Data Protection Supervisor (EDPS) issued an opinion on “the European Commission’s action plan for a comprehensive Union policy on preventing money laundering and terrorism financing (C(2020)2800 final), published on 7 May 2020.” The EDPS asserted:
    • While the EDPS acknowledges the importance of the fight against money laundering and terrorism financing as an objective of general interest, we call for the legislation to strike a balance between the interference with the fundamental rights of privacy and personal data protection and the measures that are necessary to effectively achieve the general interest goals on anti-money laundering and countering the financing of terrorism (AML/CFT) (the principle of proportionality).
    • The EDPS recommends that the Commission monitors the effective implementation of the existing AML/CFT framework while ensuring that the GDPR and the data protection framework are respected and complied with. This is particularly relevant for the works on the interconnection of central bank account mechanisms and beneficial ownership registers that should be largely inspired by the principles of data minimisation, accuracy and privacy-by-design and by default.

Further Reading

  • “China already has your data. Trump’s TikTok and WeChat bans can’t stop that.” By Aynne Kokas – The Washington Post. This article persuasively makes the case that even if a ban on TikTok and WeChat were to work (and there are substantive questions as to how a ban would work given how widely the former has been downloaded), the People’s Republic of China (PRC) is almost certainly acquiring massive reams of data on Americans through a variety of apps, platforms, and games. For example, Tencent, owner of WeChat, has a 40% stake in Epic Games, maker of Fortnite, a massively popular multiplayer game (if you have never heard of it, ask one of the children in your family). Moreover, a recent change to PRC law mandates that companies operating in the PRC share their databases for cybersecurity reviews, which may give the PRC an avenue to access data apart from hacking and exfiltrating it from United States entities. In summation, if the Trump Administration is serious about stopping the flow of data from the U.S. to the PRC, these executive orders will do very little.
  • “Big Tech Makes Inroads With the Biden Campaign” by David McCabe and Kenneth P. Vogel – The New York Times. Most likely long before former Vice President Joe Biden clinched the Democratic nomination, advisers volunteered to help plot out his policy positions, a process that intensified this year. Of course, this includes technology policy, and many of those volunteering for the campaign’s Innovation Policy Committee have worked or are working for large technology companies directly or as consultants or lobbyists. This piece details some of these people and their relationships and how the Biden campaign is managing possible conflicts of interest. Naturally, those on the left wing of the Democratic Party calling for tighter antitrust, competition, and privacy regulation are concerned that Biden might be pulled away from these positions despite his public statements arguing that the United States government needs to get tougher with some practices.
  • “A Bible Burning, a Russian News Agency and a Story Too Good to Check Out” By Matthew Rosenberg and Julian E. Barnes – The New York Times. The Russian Federation seems to be using a new tactic, with some success, for sowing discord in the United States that is the information equivalent of throwing fuel onto a fire. In this case, a Russian outlet created a fake story amplifying an actual event, and it went viral after being seized on by some prominent Republicans, in part because it fit their preferred view of protestors. We will likely see more of this, and it is not confined to fake stories intended to appeal to the right. The same is happening with content meant for the left wing in the United States.
  • “Facebook cracks down on political content disguised as local news” by Sara Fischer – Axios. As part of its continuing effort to crack down on violations of its policies, Facebook will no longer allow groups with a political viewpoint to masquerade as news. The company and outside experts have identified a range of instances where groups propagating a viewpoint, as opposed to reporting, have used a Facebook exemption by pretending to be local news outlets.
  • “QAnon groups have millions of members on Facebook, documents show” By Ari Sen and Brandy Zadrozny – NBC News. It appears as if some Facebook employees are leaking the results of an internal investigation that identified more than 1 million users who are part of QAnon groups. Most likely these employees want the company to take a stronger stance on the conspiracy movement QAnon as it has with COVID-19 lies and misinformation.
  • And, since Senator Kamala Harris (D-CA) was named former Vice President Joe Biden’s (D-DE) vice presidential pick, this article has become even more relevant than when I highlighted it in late July: “New Emails Reveal Warm Relationship Between Kamala Harris And Big Tech” – HuffPost. Obtained via a Freedom of Information request, new emails from Senator Kamala Harris’ (D-CA) tenure as her state’s attorney general suggest she was willing to overlook the role Facebook, Google, and others played and still play in one of her signature issues: revenge porn. This article makes the case that Harris came down hard on a scammer running a revenge porn site but did not press the tech giants with any vigor to take down such material from their platforms. Consequently, the case is made that her selection as Biden’s running mate signals a go-easy approach on large companies even though many Democrats have been calling to break up these companies and vigorously enforce antitrust laws. Harris has largely not engaged on tech issues during her tenure in the Senate. To be fair, many of these companies are headquartered in California and pump billions of dollars into the state’s economy annually, putting Harris in a tricky position politically. Of course, such pieces should be taken with a grain of salt since this one may have been suggested or planted by one of Harris’ rivals for the vice presidential nomination or someone looking to settle a score.
  • “Unwanted Truths: Inside Trump’s Battles With U.S. Intelligence Agencies” by Robert Draper – The New York Times. A deeply sourced article on the outright antipathy between President Donald Trump and Intelligence Community officials, particularly over the issue of how deeply Russia interfered in the election in 2016. A number of former officials have been fired or forced out because they refused to knuckle under to the White House’s desire to soften or massage conclusions of Russia’s past and current actions to undermine the 2020 election in order to favor Trump.
  • “Huawei says it’s running out of chips for its smartphones because of US sanctions” By Kim Lyons – The Verge and “Huawei: Smartphone chips running out under US sanctions” by Joe McDonald – The Associated Press. United States (U.S.) sanctions have started biting the Chinese technology company Huawei, which announced it will likely run out of processor chips for its smartphones. U.S. sanctions bar any company from selling high technology items like processors to Huawei, and this capability is not independently available in the People’s Republic of China (PRC) at present.
  • “Targeting WeChat, Trump Takes Aim at China’s Bridge to the World” By Paul Mozur and Raymond Zhong – The New York Times. This piece explains WeChat, the app the Trump Administration is trying to ban in the United States (U.S.) without any warning. It is like a combination of Facebook, WhatsApp, a news app, and a payment platform, and it is used by more than 1.2 billion people.
  • “This Tool Could Protect Your Photos From Facial Recognition” By Kashmir Hill – The New York Times. Researchers at the University of Chicago have found a method of subtly altering photos of people that appears to foil most facial recognition technologies. However, a number of experts interviewed said it is too late to stop companies like Clearview AI.
  • “I Tried to Live Without the Tech Giants. It Was Impossible.” By Kashmir Hill – The New York Times. This New York Times reporter tried living without the products of large technology companies, which involved some fairly obvious challenges and some that were not so obvious. Of course, it was hard for her to skip Facebook, Instagram, and the like, but cutting out Google and Amazon proved hardest and basically impossible because of the latter’s cloud presence and the former’s web presence. The fact that some of the companies cannot be avoided if one wants to be online likely lends weight to those making the case these companies are anti-competitive.
  • “To Head Off Regulators, Google Makes Certain Words Taboo” by Adrianne Jeffries – The Markup. Apparently, in what is a standard practice at large companies, employees at Google were coached to avoid using certain terms or phrases that antitrust regulators would take notice of, such as “market,” “barriers to entry,” and “network effects.” The Markup obtained a 16 August 2019 document titled “Five Rules of Thumb For Written Communications” that starts by asserting “[w]ords matter…[e]specially in antitrust laws” and goes on to advise Google’s employees:
    • We’re out to help users, not hurt competitors.
    • Our users should always be free to switch, and we don’t lock anyone in.
    • We’ve got lots of competitors, so don’t assume we control or dominate any market.
    • Don’t try and define a market or estimate our market share.
    • Assume every document you generate, including email, will be seen by regulators.
  • “Facebook Fired An Employee Who Collected Evidence Of Right-Wing Pages Getting Preferential Treatment” By Craig Silverman and Ryan Mac – BuzzFeed News. A Facebook engineer was fired after adducing proof in an internal communications system that the social media platform is more willing to remove false and negative ratings from claims made by conservative outlets and personalities than from any other viewpoint. If this is true, it would be the opposite of the narrative spun by the Trump Administration and many Republicans in Congress. Moreover, Facebook’s incentives would seem to align with giving conservatives preferential treatment: many of these websites advertise on Facebook, the company probably does not want to get crosswise with the Administration, sensational posts and content drive engagement, which increases user numbers and allows for higher ad rates, and the company wants to appear fair and impartial.
  • “How Pro-Trump Forces Work the Refs in Silicon Valley” By Ben Smith – The New York Times. This piece traces the nearly four-decade-old effort of Republicans to sway mainstream media, and now Silicon Valley, to their viewpoint.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo credit: Gerd Altmann on Pixabay

EDPB Issues FAQs On Privacy Shield Decision

While the EDPB does not provide absolute answers on how US entities looking to transfer EU personal data should proceed, the body provides its best thinking on what the path forward looks like.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

On 24 July, the European Data Protection Board (EDPB) addressed, in part, the implications of the recent decision that struck down the European Union-United States Privacy Shield, an agreement that had allowed US companies to transfer and process the personal data of EU citizens. The EDPB fully endorsed the view that the United States’ (US) surveillance regime, notably Section 702 of the “Foreign Intelligence Surveillance Act” (FISA) and Executive Order (EO) 12333, makes most transfers to the US illegal except perhaps if entities holding and using the data take extra steps to protect it. The EDPB references another means that allows for transfers to possibly continue but that generally requires informed and explicit consent from each and every EU person involved. Finally, the EDPB does not address whether the European Commission (EC) and the US are able to execute a third agreement that would be legal under EU law.

The EDPB, which is comprised of the European Union’s (EU) data protection authorities (DPAs), has formally adopted a document spelling out its view on whether data transfers under Privacy Shield to the US are still legal and how companies should proceed in using standard contractual clauses (SCCs) and Binding Corporate Rules (BCR), two alternative means of transferring data aside from Privacy Shield. The EDPB’s views suggest the DPAs and supervisory authorities (SA) in each EU nation are going to need to work on a case-by-case basis regarding the latter two means, for the EDPB stressed these are to be evaluated individually. Given recent criticism of how nations are funding and resourcing their DPAs, there may be capacity issues in managing this new work alongside existing enforcement and investigation matters. Moreover, the EDPB discusses use of the exceptions available in Article 49 of the General Data Protection Regulation (GDPR), stressing that most such transfers are to be occasional.

In last week’s decision, the Court of Justice of the European Union (CJEU) invalidated the European Commission’s adequacy decision on the EU-US Privacy Shield, thus throwing into question all transfers of personal data from the EU into the US that relied on this means. The CJEU was more circumspect in ruling on the use of standard contractual clauses (SCC), another way to legally transfer personal data out of the EU in compliance with the bloc’s law. The court seems to suggest there may be cases in which the use of SCCs is inadequate given a third country’s insufficient protection of the data of EU residents, especially with respect to national security and law enforcement surveillance. The EDPB issued a statement when the decision was made supporting the CJEU but has now adopted a more detailed explanation of its views on the implications of the decision for data controllers, data processors, other nations, EU DPAs and SAs.

In “Frequently Asked Questions (FAQ) on the judgment of the CJEU in Case C-311/18 -Data Protection Commissioner v Facebook Ireland Ltd and Maximillian Schrems,” the EDPB explains its current thinking on the decision, much of which is built on existing guidance and interpretation of the GDPR. The EDPB explained that the FAQ “aims at presenting answers to some frequently asked questions received by SAs and will be developed and complemented along with further analysis, as the EDPB continues to examine and assess the judgment of the CJEU.”

Here are notable excerpts:

  • Is there any grace period during which I can keep on transferring data to the U.S. without assessing my legal basis for the transfer? No, the Court has invalidated the Privacy Shield Decision without maintaining its effects, because the U.S. law assessed by the Court does not provide an essentially equivalent level of protection to the EU. This assessment has to be taken into account for any transfer to the U.S.
  • I was transferring data to a U.S. data importer adherent to the Privacy Shield, what should I do now? Transfers on the basis of this legal framework are illegal. Should you wish to keep on transferring data to the U.S., you would need to check whether you can do so under the conditions laid down below.
  • I am using SCCs with a data importer in the U.S., what should I do? The Court found that U.S. law (i.e., Section 702 FISA and EO 12333) does not ensure an essentially equivalent level of protection. Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place. The supplementary measures along with SCCs, following a case-by-case analysis of the circumstances surrounding the transfer, would have to ensure that U.S. law does not impinge on the adequate level of protection they guarantee. If you come to the conclusion that, taking into account the circumstances of the transfer and possible supplementary measures, appropriate safeguards would not be ensured, you are required to suspend or end the transfer of personal data. However, if you are intending to keep transferring data despite this conclusion, you must notify your competent SA.
  • I am using Binding Corporate Rules (“BCRs”) with an entity in the U.S., what should I do? Given the judgment of the Court, which invalidated the Privacy Shield because of the degree of interference created by the law of the U.S. with the fundamental rights of persons whose data are transferred to that third country, and the fact that the Privacy Shield was also designed to bring guarantees to data transferred with other tools such as BCRs, the Court’s assessment applies as well in the context of BCRs, since U.S. law will also have primacy over this tool.
  • Whether or not you can transfer personal data on the basis of BCRs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place. These supplementary measures along with BCRs, following a case-by-case analysis of the circumstances surrounding the transfer, would have to ensure that U.S. law does not impinge on the adequate level of protection they guarantee. If you come to the conclusion that, taking into account the circumstances of the transfer and possible supplementary measures, appropriate safeguards would not be ensured, you are required to suspend or end the transfer of personal data. However if you are intending to keep transferring data despite this conclusion, you must notify your competent SA.
  • Can I rely on one of the derogations of Article 49 GDPR to transfer data to the U.S.? It is still possible to transfer data from the EEA to the U.S. on the basis of derogations foreseen in Article 49 GDPR provided the conditions set forth in this Article apply. The EDPB refers to its guidelines on this provision. In particular, it should be recalled that when transfers are based on the consent of the data subject, it should be:
    • explicit,
    • specific for the particular data transfer or set of transfers (meaning that the data exporter must make sure to obtain specific consent before the transfer is put in place even if this occurs after the collection of the data has been made), and
    • informed, particularly as to the possible risks of the transfer (meaning the data subject should also be informed of the specific risks resulting from the fact that their data will be transferred to a country that does not provide adequate protection and that no adequate safeguards aimed at providing protection for the data are being implemented).
  • With regard to transfers necessary for the performance of a contract between the data subject and the controller, it should be borne in mind that personal data may only be transferred when the transfer is occasional. It would have to be established on a case-by-case basis whether data transfers would be determined as “occasional” or “non-occasional”. In any case, this derogation can only be relied upon when the transfer is objectively necessary for the performance of the contract.
  • In relation to transfers necessary for important reasons of public interest (which must be recognized in EU or Member States’ law), the EDPB recalls that the essential requirement for the applicability of this derogation is the finding of an important public interest and not the nature of the organisation, and that although this derogation is not limited to data transfers that are “occasional”, this does not mean that data transfers on the basis of the important public interest derogation can take place on a large scale and in a systematic manner. Rather, the general principle needs to be respected according to which the derogations as set out in Article 49 GDPR should not become “the rule” in practice, but need to be restricted to specific situations and each data exporter needs to ensure that the transfer meets the strict necessity test.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

CCPA Regulations Finalized

Final CCPA regulations submitted, but it is not clear if they will be approved by 1 July as required by the statute. However, if a follow-on ballot initiative becomes law, these regulations could be moot or greatly changed.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

The Office of California Attorney General Xavier Becerra has submitted its finalized regulations to implement the “California Consumer Privacy Act” (CCPA) (AB 375) to the Office of Administrative Law (OAL), typically the last step in the regulatory process in California. The Office of the Attorney General (OAG) is requesting expedited review so the regulations may become effective on 1 July as required by the CCPA. However, under California law the OAL has 30 days to review regulations for compliance with California’s Administrative Procedure Act (APA), and under Executive Order N-40-20, issued in response to the COVID-19 pandemic, the OAL has been given an additional 60 days beyond the 30 statutory days. It is therefore possible the CCPA regulations will not be effective on 1 July; in fact, it could be three months, until early September, before they take effect.

With respect to the substance, the final regulations are very similar to the third round of regulations circulated for comment in March, which were revised in part in response to legislation passed and signed into law last fall that modified the CCPA. The OAG released other documents along with the finalized regulations.

For further reading on the third round of proposed CCPA regulations, see this issue of the Technology Policy Update, for the second round, see here, and for the first round, see here. Additionally, to read more on the legislation signed into law last fall, modifying the CCPA, see this issue.

Moreover, Californians for Consumer Privacy have submitted the “California Privacy Rights Act” (CPRA) for the November 2020 ballot. This follow-on statute to the CCPA could again force the legislature into making a deal that would revamp privacy laws in California, as happened when the CCPA was added to the ballot in 2018. It is also possible the initiative remains on the ballot and is approved by voters. In either case, much of the CCPA and its regulations may be moot or in effect for only the few years it takes for the new privacy regulatory structure laid out in the CPRA to be established. See here for more detail.

OMB Submits Annual FISMA Report On Federal Cybersecurity, Noting 8% Fewer Incidents

The federal civilian government’s cybersecurity metrics keep trending in positive directions, a development the Administration claims can be attributed to its policies.  

This week, the Office of Management and Budget (OMB) submitted its annual report on the status of federal cybersecurity per the “Federal Information Security Modernization Act of 2014” (FISMA) (P.L. 113-283) and reported continuing progress on account of Trump Administration measures to shore up the federal government’s cybersecurity. The number of cybersecurity incidents is down and the number of agencies deemed to be managing risk has increased.

In terms of methodology, OMB collects the cybersecurity incidents reported to the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA). In the preface, OMB added that the

report also incorporates OMB’s analysis of agency application of the intrusion detection and prevention capabilities, as required by Section 226(c)(1)(B) of the Cybersecurity Act of 2015, P.L. No. 114-113. OMB obtained information from the Department of Homeland Security (DHS), Chief Information Officers (CIOs) and Inspectors General (IGs) from across the Executive Branch to compile this report. This report primarily includes Fiscal Year 2019 data reported by agencies to OMB and DHS on or before October 31, 2019.

OMB claimed “[a]gencies reported 28,581 cybersecurity incidents in FY 2019, an 8% decrease over the 31,107 incidents that agencies reported in FY 2018…[and] [t]he decline in incidents is correlated with the continued maturation of agencies’ information security programs.” In last year’s FISMA report, OMB stated “31,107 incidents [were] reported by Federal agencies, and validated with US-CERT, across nine attack vector categories…[and] [t]his represents a 12% decrease from FY 2017, when agencies reported 35,277 incidents.” OMB further asserted “[i]n FY 2019, a total of 72 agencies received an overall rating of “Managing Risk” in the annual cybersecurity Risk Management Assessment (RMA) process…up from 33 agencies in FY 2017 and 62 agencies in FY 2018.”
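The reported percentages can be checked against the raw incident counts OMB cites. A quick back-of-the-envelope calculation (my own illustration, not from the report) confirms the figures:

```python
# Sanity check of the year-over-year incident declines cited in the
# OMB FISMA report, using the incident counts quoted above.
def pct_decrease(prior: int, current: int) -> float:
    """Percentage decrease from the prior year's count to the current year's."""
    return (prior - current) / prior * 100

fy19_drop = pct_decrease(31_107, 28_581)  # FY 2018 -> FY 2019
fy18_drop = pct_decrease(35_277, 31_107)  # FY 2017 -> FY 2018

print(f"FY 2019 decline: {fy19_drop:.1f}%")  # ~8.1%, matching the reported 8%
print(f"FY 2018 decline: {fy18_drop:.1f}%")  # ~11.8%, matching the reported 12%
```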

Of the 28,581 incidents reported in FY 2019, three incidents were determined by agencies to meet the threshold for major incidents in accordance with the definition in OMB Memorandum M-20-04, Fiscal Year 2019-2020 Guidance on Federal Information Security and Privacy Management Requirements. A summary of these major incidents is provided below, as well as their ratings on the CISA Cyber Incident Scoring System:

  • On December 3, 2019, DHS declared a major incident after determining that the Federal Emergency Management Agency (FEMA) National Emergency Management Information System Information Assurance (NEMIS-IA) system continued to send sensitive PII of disaster victims to a contractor responsible for meeting temporary shelter needs long after it was no longer required. FEMA took immediate steps to mitigate the incident by discontinuing the unnecessary sharing of PII with the contractor. Furthermore, a DHS-FEMA joint assessment team conducted a security assessment to revise the architecture of the system to meet the requirements of the DHS Sensitive Systems Policy Directive. An estimated 2.5 million hurricane survivors were impacted. The impact of this breach is Low (Green).
  • On January 31, 2019, DHS declared a major incident after determining potential unauthorized sharing of disaster survivors’ PII by FEMA with a third-party volunteer organization. The organization had an approved Information Sharing Access Agreement (ISAA) with FEMA, but the agreement did not cover several data elements. FEMA amended the FEMA-State Agreement with the State of Texas on February 7, 2019, to further clarify that the third-party organization should have the same level of access to these data elements as the State. An estimated 895,000 individuals were impacted. The impact of this breach is Minimal (Blue).
  • On June 3, 2019, DHS declared a major incident following a ransomware attack at a contractor that manufactures license plate readers (LPR) utilized by U.S. Customs and Border Protection (CBP) at multiple US Border Patrol checkpoints across the United States. CBP learned the contractor had taken unauthorized copies of images collected by CBP to their company network. The copied files included license plate images and facial images of the profile and front of travelers inside of a vehicle. These images were subsequently exfiltrated during the cyberattack on the company. The impact of this breach is Negligible (White).

In terms of the policy backdrop, OMB attributed the positive trend lines in the metrics tracked annually in recent FISMA reports to the Administration’s policies, stating:

  • The President’s Management Agenda (PMA) sets a clear goal to modernize the Federal Government’s information systems. The path forward will continue to rely on the maturation of cybersecurity efforts across Federal agencies in order to reduce operational risk and provide secure services for the American public. In September 2018, the President released the National Cyber Strategy, which outlined objectives for defending the homeland and promoting American prosperity by protecting public and private systems and information and promoting a secure digital economy. The first fully articulated cybersecurity strategy in 15 years, the National Cyber Strategy builds and expands upon the work begun under Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure (Executive Order 13800), released in May 2017 to enhance cybersecurity risk management across the Federal Government. Executive Order 13800 recognizes the importance of mission delivery, service quality and securing citizens’ information even as malicious cyber actors seek to disrupt those services.
  • This report highlights that Fiscal Year (FY) 2019 has begun to show the cybersecurity improvements due to the decisive actions the Administration has taken to address high risk areas for the Federal Government. Updated policies around High Value Assets (HVAs), Trusted Internet Connections (TIC), and Identity Credential and Access Management (ICAM) have been coupled with Department of Homeland Security (DHS) programs and directives to empower agencies to mitigate risks across the Federal Government. We have efforts underway to further enhance cybersecurity in the areas of supply chain risk, Security Operations Center (SOC) maturation, and third party privacy risk. As progress continues, the executive and legislative branch must continue its collaboration to confirm there is sustained momentum for addressing these critical capability gaps.

Here are some key charts from the report:

This chart shows a significant lag in federal agencies’ adoption of EINSTEIN, a point sure to be explored at the next FISMA hearing held by the House Oversight and Reform Committee.
The report helpfully breaks out cybersecurity spending, figures not easily teased out from agency budget documents.
This chart, split between this and the following image, shows how well federal agencies have implemented the Senior Agency Official for Privacy (SAOP) measures required by Executive Order 13719, Establishment of the Federal Privacy Council.
