Further Reading, Other Developments, and Coming Events (4 September)

Here is today’s Further Reading, Other Developments, and Coming Events.

Coming Events

  • The United States-China Economic and Security Review Commission will hold a hearing on 9 September on “U.S.-China Relations in 2020: Enduring Problems and Emerging Challenges” to “evaluate key developments in China’s economy, military capabilities, and foreign relations, during 2020.”
  • On 10 September, the General Services Administration (GSA) will have a webinar to discuss implementation of Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232) that bars the federal government and its contractors from buying equipment and services from Huawei, ZTE, and other companies from the People’s Republic of China.
  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted:
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.”
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the Election Assistance Commission (EAC) “released the Election Risk Profile Tool, a user-friendly assessment tool to equip election officials and federal agencies in prioritizing and managing cybersecurity risks to the Election Infrastructure Subsector.” The agencies stated “[t]he new tool is designed to help state and local election officials understand the range of risks they face and how to prioritize mitigation efforts…[and] also addresses areas of greatest risk, ensures technical cybersecurity assessments and services are meeting critical needs, and provides a sound analytic foundation for managing election security risk with partners at the federal, state and local level.”
    • CISA and the EAC explained “[t]he Election Risk Profile Tool:
      • Is a user-friendly assessment tool for state and local election officials to develop a high-level risk profile across a jurisdiction’s specific infrastructure components;
      • Provides election officials a method to gain insights into their cybersecurity risk and prioritize mitigations;
      • Accepts inputs of a jurisdiction’s specific election infrastructure configuration; and
      • Outputs a tailored risk profile for jurisdictions, which identifies specific areas of highest risk and recommends associated mitigation measures that the jurisdiction could implement to address the risk areas.
  • The cybersecurity agencies of the Five Eyes nations have released a Joint Cybersecurity Advisory: Technical Approaches to Uncovering and Remediating Malicious Activity that “highlights technical approaches to uncovering malicious activity and includes mitigation steps according to best practices.” The agencies asserted “[t]he purpose of this report is to enhance incident response among partners and network administrators along with serving as a playbook for incident investigation.”
    • The Australian Cyber Security Centre, Canada’s Communications Security Establishment, the United States’ Cybersecurity and Infrastructure Security Agency, the United Kingdom’s National Cyber Security Centre, and New Zealand’s National Cyber Security Centre and Computer Emergency Response Team summarized the key takeaways from the Joint Advisory:
      • When addressing potential incidents and applying best practice incident response procedures:
      • First, collect and remove for further analysis:
        • Relevant artifacts,
        • Logs, and
        • Data.
      • Next, implement mitigation steps that avoid tipping off the adversary that their presence in the network has been discovered.
      • Finally, consider soliciting incident response support from a third-party IT security organization to:
        • Provide subject matter expertise and technical support to the incident response,
        • Ensure that the actor is eradicated from the network, and
        • Avoid residual issues that could result in follow-up compromises once the incident is closed.
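The “collect and remove for further analysis” step summarized above is worth pausing on: preserving artifacts and logs, with integrity hashes, before any remediation keeps evidence intact and avoids tipping off the intruder. Below is a minimal, hypothetical Python sketch of that preservation step; the file paths and evidence location are assumptions for illustration and are not drawn from the Joint Advisory itself.

```python
# Hypothetical illustration only: preserve copies of logs/artifacts with integrity
# hashes before any remediation, per the "collect first" guidance summarized above.
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

# These paths are assumptions for illustration; a real collection would be scoped
# to the systems and artifacts relevant to the suspected intrusion.
ARTIFACTS = [Path("/var/log/auth.log"), Path("/var/log/syslog")]
EVIDENCE_DIR = Path("/mnt/evidence") / datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def collect(artifacts: list[Path], evidence_dir: Path) -> None:
    """Copy each artifact into the evidence directory and record hashes in a manifest."""
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = evidence_dir / "manifest.txt"
    with manifest.open("w") as m:
        for src in artifacts:
            if not src.exists():
                continue
            dst = evidence_dir / src.name
            shutil.copy2(src, dst)  # preserve timestamps/metadata where possible
            # Hash both source and copy so later analysis can show the evidence
            # was not altered after collection.
            m.write(f"{src}\t{sha256(src)}\t{sha256(dst)}\n")

if __name__ == "__main__":
    collect(ARTIFACTS, EVIDENCE_DIR)
```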
  • The United States’ (U.S.) Department of Justice (DOJ) and Federal Trade Commission (FTC) signed an Antitrust Cooperation Framework with their counterpart agencies from Australia, Canada, New Zealand, and the United Kingdom. The Multilateral Mutual Assistance and Cooperation Framework for Competition Authorities (Framework) “aims to strengthen cooperation between the signatories, and provides the basis for a series of bilateral agreements among them focused on investigative assistance, including sharing confidential information and cross-border evidence gathering.” Given that a number of large technology companies are under investigation in the U.S., the European Union (EU), and elsewhere, signaling a shift in how technology multinationals are being viewed, this agreement may enable cross-border efforts to collectively address alleged abuses. However, the Framework “is not intended to be legally binding and does not give rise to legal rights or obligations under domestic or international law.” The Framework provides:
    • Recognising that the Participants can benefit by sharing their experience in developing, applying, and enforcing Competition Laws and competition policies, the Participants intend to cooperate and provide assistance, including by:
      • a) exchanging information on the development of competition issues, policies and laws;
      • b) exchanging experience on competition advocacy and outreach, including to consumers, industry, and government;
      • c) developing agency capacity and effectiveness by providing advice or training in areas of mutual interest, including through the exchange of officials and through experience-sharing events;
      • d) sharing best practices by exchanging information and experiences on matters of mutual interest, including enforcement methods and priorities; and
      • e) collaborating on projects of mutual interest, including via establishing working groups to consider specific issues.
  • Dynasplint Systems alerted the United States Department of Health and Human Services (HHS) that it suffered a breach affecting more than 100,000 people earlier this year. HHS’ Office for Civil Rights (OCR) is investigating possible violations of Health Insurance Portability and Accountability Act regulations regarding the safeguarding of patients’ health information. If Dynasplint failed to properly secure patient information or its systems, OCR could levy a multimillion-dollar fine for a breach of this size. For example, in late July, OCR fined a company over $1 million for the theft of an unencrypted laptop that exposed the personal information of a little more than 20,000 people.
    • Dynasplint, a Maryland manufacturer of range of motion splints, explained:
      • On June 4, 2020, the investigation determined that certain information was accessed without authorization during the incident.
      • The information may have included names, addresses, dates of birth, Social Security numbers, and medical information.
      • Dynasplint Systems reported this matter to the FBI and will provide whatever cooperation is necessary to hold perpetrators accountable.
  • The California Legislature has sent two bills to Governor Gavin Newsom (D) that would change how technology is regulated in the state, including one that would alter the “California Consumer Privacy Act” (AB 375) (CCPA) if the “California Privacy Rights Act” (CPRA) (Ballot Initiative 24) is not enacted by voters in the November election. The two bills are:
    • AB 1138 would amend the recently effective “Parent’s Accountability and Child Protection Act” to bar those under the age of 13 from opening a social media account unless the platform obtains explicit consent from their parents. Moreover, “[t]he bill would deem a business to have actual knowledge of a consumer’s age if it willfully disregards the consumer’s age.”
    •  AB 1281 would extend the carveout for employers to comply with the CCPA from 1 January 2021 to 1 January 2022. The CCPA “exempts from its provisions certain information collected by a business about a natural person in the course of the natural person acting as a job applicant, employee, owner, director, officer, medical staff member, or contractor, as specified…[and also] exempts from specified provisions personal information reflecting a written or verbal communication or a transaction between the business and the consumer, if the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from that company, partnership, sole proprietorship, nonprofit, or government agency.” AB 1281 “shall become operative only” if the CPRA is not approved by voters.
  • Senators Shelley Moore Capito (R-WV), Amy Klobuchar (D-MN), and Jerry Moran (R-KS) have written “a letter to Federal Trade Commission (FTC) Chairman Joseph Simons urging the FTC to take action to address the troubling data collection and sharing practices of the mobile application (app) Premom” and “to request information on the steps that the FTC plans to take to address this issue.” They asserted:
    • A recent investigation from the International Digital Accountability Council (IDAC) indicated that Premom may have engaged in deceptive consumer data collection and processing, and that there may be material differences between Premom’s stated privacy policies and its actual data-sharing practices. Most troubling, the investigation found that Premom shared its users’ data without their consent.
    • Moore Capito, Klobuchar, and Moran stated “[i]n light of these concerning reports, and given the critical role that the FTC plays in enforcing federal laws that protect consumer privacy and data under Section 5 of the Federal Trade Commission Act and other sector specific laws, we respectfully ask that you respond to the following questions:
      • 1. Does the FTC treat persistent identifiers, such as the non-resettable device hardware identifiers discussed in the IDAC report, as personally identifiable information in relation to its general consumer data security and privacy enforcement authorities under Section 5 of the FTC Act?  
      • 2. Is the FTC currently investigating or does it plan to investigate Premom’s consumer data collection, transmission, and processing conduct described in the IDAC report to determine if the company has engaged in deceptive practices?
      • 3. Does the FTC plan to take any steps to educate users of the Premom app that the app may still be sharing their personal data without their permission if they have not updated the app? If not, does the FTC plan to require Premom to conduct such outreach?
      • 4. Please describe any unique or practically uncommon uses of encryption by the involved third-party companies receiving information from Premom that could be functionally interpreted to obfuscate oversight of the involved data transmissions.
      • 5. How can the FTC use its Section 5 authority to ensure that mobile apps are not deceiving consumers about their data collection and sharing practices and to preempt future potentially deceptive practices like those Premom may have engaged in?

Further Reading

  • “Justice Dept. Plans to File Antitrust Charges Against Google in Coming Weeks” By Katie Benner and Cecilia Kang – The New York Times; “The Justice Department could file a lawsuit against Google this month, overriding skepticism from its own top lawyers” By Tony Romm – The Washington Post; “There’s a partisan schism over the timing of a Google antitrust lawsuit” By Timothy B. Lee – Ars Technica. The New York Times explains in its deeply sourced article that United States Department of Justice (DOJ) attorneys want more time to build a better case against Google, but that Attorney General William Barr is pressing for the filing of a suit as early as the end of this month in order for the Trump Administration to show voters it is taking on big tech. Additionally, a case against a tech company would help shore up the President’s right flank as he and other prominent conservatives continue to insist, in the absence of evidence, that technology companies are biased against the right. The team of DOJ attorneys has shrunk from 40 to about 20 as numerous lawyers asked off the case once it was clear what the Attorney General wanted. These articles also shed light on the split between Republican and Democratic state attorneys general in the case they have been working on, with the former accusing the latter of stalling in the hopes that a Biden DOJ will be harsher on the company, and the latter accusing the former of trying to file a narrow case while Donald Trump is still President that would impair efforts to address the full range of Google’s alleged antitrust abuses.
  • “Facebook Moves to Limit Election Chaos in November” By Mike Isaac – The New York Times. The social network giant unveiled measures to fight misinformation the week before the United States election and afterwards should people try to make factually inaccurate claims about the results. Notably, political advertisements will be banned a week before the 3 November election, but this seems like pretty weak tea considering it will be business as usual until late October. Even though the company frames these moves as “additional steps we’re taking to help secure the integrity of the U.S. elections by encouraging voting, connecting people to authoritative information, and reducing the risks of post-election confusion,” the effect of misinformation, disinformation, and lies that proliferate on Facebook will have likely already taken root by late October. It is possible the company still wants the advertising revenue it would forgo if it immediately banned political advertising. Another proposed change is to provide accurate information about voting generally and COVID-19 and voting. In fact, the platform corrected a post of President Donald Trump’s that expressed doubts about mail-in voting.
  • “Washington firm ran fake Facebook accounts in Venezuela, Bolivia and Mexico, report finds” By Craig Timberg and Elizabeth Dwoskin – The Washington Post. In tandem with taking down fake content posted by the Internet Research Agency, Facebook also removed accounts traced back to a Washington, D.C. public relations firm, CLS Strategies, that was running multiple accounts to support the government in Bolivia and the opposition party in Venezuela, both of which are right wing. Using information provided by Facebook, Stanford University’s Internet Observatory released a report stating that “Facebook removed a network of 55 Facebook accounts, 42 Pages and 36 Instagram accounts attributed to the US-based strategic communications firm CLS Strategies for engaging in coordinated inauthentic behavior (CIB).” Stanford asserted these key takeaways:
    • 11 Facebook pages related to Bolivia mainly supported Bolivia’s Interim President Jeanine Áñez and disparaged Bolivia’s former president Evo Morales. All had similar creation dates and manager location settings.
    • Venezuela-focused assets supported and promoted Venezuelan opposition leaders but changed in tone in 2020, reflecting factional divides in the opposition and a turn away from Juan Guaidó.
    • In addition to fake accounts, removed Facebook accounts include six profiles that match the names and photos of CLS Strategies employees listed publicly on their website and appear to be their real accounts.
    • CLS Strategies has a disclosed contract with the Bolivian government to provide strategic communications counsel for Bolivia’s 2020 elections and to strengthen democracy and human rights in Bolivia.
    • Coordinated inauthentic behavior reports from Facebook and Twitter have increasingly included assets linked to marketing and PR firms originating and acting around the world. The firms’ actions violate the platforms’ terms by operating internationally and failing to identify their origins and motivations to users.
    • In its release on the issue, Facebook explained:
      • In August, we removed three networks of accounts, Pages and Groups. Two of them — from Russia and the US — targeted people outside of their country, and another from Pakistan focused on both domestic audiences in Pakistan and also in India. We have shared information about our findings with law enforcement, policymakers and industry partners.
  • “Belarusian Officials Shut Down Internet With Technology Made by U.S. Firm” By Ryan Gallagher – Bloomberg. A United States firm, Sandvine, sold deep packet inspection technology to the government in Belarus through a Russian intermediary. The technology was ostensibly to be used by the government to fend off dangers to the nation’s networks but was instead deployed to shut down numerous social media and news sites on the internet the day of the election. However, Belarusian activists quickly determined how to use workarounds, launching the current unrest that threatens to topple the regime. The same company’s technology has been used elsewhere in the world to cut off access to the internet as detailed by the University of Toronto’s Citizen Lab in 2018.
  • “Canada has effectively moved to block China’s Huawei from 5G, but can’t say so” – Reuters. In a move reminiscent of how the People’s Republic of China (PRC) tanked Qualcomm’s proposed purchase of NXP Semiconductors in 2018, Canada has effectively barred Huawei from its 5G networks by not deciding, which eventually sent a signal to its telecommunications companies to use Ericsson and Nokia instead. This way, there is no public announcement or policy statement the PRC can object to, and the country toes the line with its other Five Eyes partners that have banned Huawei in varying degrees. Additionally, given that two Canadian nationals are being held because Huawei Chief Financial Officer Meng Wanzhou is being detained in Canada awaiting extradition to the United States to face criminal charges, Ottawa needs to manage its relations with the PRC gingerly.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Simon Steinberger from Pixabay

U.S. Vulnerabilities Disclosure Process Unveiled

Federal civilian agencies will need to have programs up and running within two years to accept and act on reports of vulnerabilities in their public-facing systems.

This week, the Trump Administration published final guidance and orders to civilian United States agencies on how they must accept and act on vulnerabilities that researchers discover and submit. Regularizing this process is supposed both to help agencies learn of and mitigate vulnerabilities and to encourage researchers to submit them. However, instead of establishing one program every agency will use, the Administration is opting to let each agency set up its own system within broad guidelines according to an enumerated timeline. Within two years, all federal “internet-accessible systems or services” at a civilian agency must be part of this vulnerability disclosure process. As with most federal cybersecurity efforts, the success of this initiative will depend on agency buy-in and follow-through from the White House.

The Office of Management and Budget (OMB) issued the memorandum, M-20-32, “Improving Vulnerability Identification, Management, and Remediation,” to provide “[f]ederal agencies with guidance for obtaining and managing their vulnerability research programs.” And, pursuant to this memorandum, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) issued mandatory direction to civilian agencies on establishing a Vulnerability Disclosure Policy (VDP).

OMB stated:

Federal agencies should continue to align their coordinated vulnerability disclosure (CVD) programs with internationally recognized standards (i.e. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 29147 and ISO/IEC 30111) to the extent possible, consistent with Federal law and policy. CVD can expand the diversity of thinking involved in vulnerability identification and substantively improve the cybersecurity posture of Federal information systems.

OMB stated:

Maintaining processes, procedures, and toolsets to identify, manage, and remediate vulnerabilities (i.e., managing the full vulnerability life cycle), no matter how they are discovered, is key to sustaining a risk-aware enterprise cybersecurity program. While many Federal agencies already maintain certain capabilities to discover vulnerabilities, such as penetration testing or receiving threat and vulnerability information from the Department of Homeland Security (DHS), agencies can benefit from closer partnerships with the reporters who choose to use their skills to find and report vulnerabilities on Federal information systems as a means to improving national cybersecurity.

OMB directed:

In order to improve vulnerability identification, management, and remediation, Federal agencies shall implement VDPs that address the following areas:

  • Clearly Worded VDP: Agency VDPs shall clearly articulate which systems are in scope and the set of security research activities that can be performed against them to protect those who would report vulnerabilities. Federal agencies shall provide clear assurances that good-faith security research is welcomed and authorized.
  • Clearly Identified Reporting Mechanism: Each Federal agency shall clearly and publicly identify where and how Federal information system vulnerabilities should be reported.
  • Timely Feedback: Federal agencies shall provide timely feedback to good-faith vulnerability reporters. Once a vulnerability is reported, those who report them deserve to know they are being taken seriously and that action is being taken. Agencies should establish clear expectations for regular follow-up communications with the vulnerability reporter, to include an agency-defined timeline for coordinated disclosure.
  • Unencumbered Remediation: To streamline communication and collaboration, Federal agencies shall ensure vulnerability reports are available to system owners within 48 hours of submission, and shall establish a channel for system owners to communicate with vulnerability reporters, as appropriate.
  • Good-Faith Security Research is Not an Incident or Breach: Good-faith security research does not itself constitute an incident or breach under the Federal Information Security Modernization Act of 2014 (FISMA) or OMB Memorandum M-17-12, Preparing for and Responding to a Breach of Personally Identifiable Information. However, in the process of assessing and responding to vulnerabilities reported according to agencies’ VDPs, agencies shall work with their senior agency officials for privacy (SAOPs) to evaluate affected Federal information systems for breaches that occurred outside the scope of the good-faith security research (e.g., a breach that occurred before the research was conducted) and follow the requirements outlined in M-17-12. Pursuant to M-17-12, agencies may impose stricter standards consistent with their missions, authorities, circumstances, and identified risks.
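To make the “Clearly Identified Reporting Mechanism” requirement above concrete, one common convention for advertising a reporting channel is a security.txt file served at /.well-known/security.txt (an IETF draft at the time of writing). The sketch below simply checks whether a given domain publishes one; the domain is a placeholder, Python is used only for illustration, and neither M-20-32 nor BOD 20-01 mandates this particular mechanism.

```python
# Hypothetical illustration: check whether a domain advertises a vulnerability
# reporting channel via the draft security.txt convention. The domain used here
# is a placeholder; this is not drawn from M-20-32 or BOD 20-01.
import urllib.request
import urllib.error

def fetch_security_txt(domain: str, timeout: float = 10.0) -> str | None:
    """Return the contents of /.well-known/security.txt if the domain serves one."""
    url = f"https://{domain}/.well-known/security.txt"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 200:
                return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, TimeoutError):
        pass
    return None

if __name__ == "__main__":
    policy = fetch_security_txt("example.gov")  # placeholder domain
    if policy:
        print(policy)
    else:
        print("No security.txt found; check the agency's published VDP instead.")
```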

As mentioned, CISA issued Binding Operational Directive (BOD) 20-01, “which requires individual federal civilian executive branch (FCEB) agencies to develop and publish a VDP for their internet-accessible systems and services, and maintain processes to support their VDP” according to the agency’s press release. The agency added that “[t]his BOD is part of CISA’s agency-wide priority to make 2020 the “year of vulnerability management,” with a particular focus on making vulnerability disclosure to the civilian executive branch easier for the public.”

CISA stated:

Cybersecurity is a public good that is strongest when the public is given the ability to contribute. A key component to receiving cybersecurity help from the public is to establish a formal policy that describes the activities that can be undertaken in order to find and report vulnerabilities in a legally authorized manner. Such policies enable federal agencies to remediate vulnerabilities before they can be exploited by an adversary – to immense public benefit.

CISA explained:

  • A vulnerability is a “[w]eakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.” Vulnerabilities are often found in individual software components, in systems comprised of multiple components, or in the interactions between components and systems. They are typically exploited to weaken the security of a system, its data, or its users, with impact to their confidentiality, integrity, or availability. The primary purpose of fixing vulnerabilities is to protect people, maintaining or enhancing their safety, security, and privacy.
  • Vulnerability disclosure is the “act of initially providing vulnerability information to a party that was not believed to be previously aware”. The individual or organization that performs this act is called the reporter.
  • Agencies should recognize that “a reporter or anyone in possession of vulnerability information can disclose or publish the information at any time,” including without prior notice to the agency. Such uncoordinated disclosure could result in exploitation of the vulnerability before the agency has had a chance to address it and could have legal consequences for the reporter as well. A key benefit of a vulnerability disclosure policy is to reduce risk to agency infrastructure and the public by incentivizing coordinated disclosure so there is time to fix the vulnerability before it is publicly known.
  • A VDP is similar to, but distinct from, a “bug bounty.” In bug bounty programs, organizations pay for valid and impactful findings of certain types of vulnerabilities in their systems or products. A financial reward can incentivize action and may attract people who might not otherwise look for vulnerabilities. This may also result in a higher number of reports or an increase in low-quality submissions. Organizations engaged in bug bounties will frequently use third-party platforms and service vendors to assist in managing and triaging bug reports. Bug bounties may be offered to the general public or may only be offered to select researchers or those who meet certain criteria. While bug bounties can enhance security, this directive does not require agencies to establish bug bounty programs.

Late last year, OMB and CISA released draft vulnerability disclosure documents for comment from stakeholders: A Request for Comments on Improving Vulnerability Identification, Management, and Remediation and a draft Binding Operational Directive (BOD).

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by methodshop from Pixabay

Further Reading, Other Developments, and Coming Events (2 September)

Here is today’s Further Reading, Other Developments, and Coming Events.

Coming Events

  • The United States-China Economic and Security Review Commission will hold a hearing on 9 September on “U.S.-China Relations in 2020: Enduring Problems and Emerging Challenges” to “evaluate key developments in China’s economy, military capabilities, and foreign relations, during 2020.”
  • On 10 September, the General Services Administration (GSA) will have a webinar to discuss implementation of Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232) that bars the federal government and its contractors from buying equipment and services from Huawei, ZTE, and other companies from the People’s Republic of China.
  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted:
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.”
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The Department of Commerce’s Bureau of Industry and Security (BIS) released for comment an advance notice of proposed rulemaking to implement a provision from a 2018 rewrite of United States (U.S.) export controls on certain technology, namely “foundational technology” in this case. The Export Control Reform Act (ECRA) (P.L. 115-232) required the Department of Commerce to establish “a regular, ongoing interagency process to identify emerging and foundational technologies,” and Commerce began the process with an advance notice of proposed rulemaking to identify only emerging technologies in November 2018. Yet the agency has not followed up with draft regulations on managing the export control process for emerging technologies. BIS explained:
    • Pursuant to the Export Control Reform Act of 2018, BIS and its interagency partners are engaged in a process to identify emerging and foundational technologies that are essential to the national security of the United States. Foundational technologies essential to the national security are those that may warrant stricter controls if a present or potential application or capability of that technology poses a national security threat to the United States. In order to determine if technologies are foundational, BIS will evaluate specific items, including items currently subject only to anti-terrorism (AT) controls on the CCL or those designated as EAR99.
    • Under ECRA, emerging and foundational technologies are those technologies that are essential to the national security of the United States and are not critical technologies described in Section 721(a)(6)(A)(i)-(v) of the Defense Production Act of 1950, as amended (DPA).
    • Section 1758 of ECRA requires that foundational technologies be identified, and that BIS establish appropriate controls for that technology under the EAR. At a minimum, such controls would apply to countries subject to an embargo, including an arms embargo, imposed by the United States.
    • ECRA also requires that the interagency process is to take into account:
      • The development of foundational technologies in foreign countries;
      • The effect export controls may have on the development of such technologies in the United States; and
      • The effectiveness of export controls imposed pursuant to ECRA on limiting the proliferation of foundational technologies to foreign countries.
  • The Privacy Commissioner of Canada Daniel Therrien responded to an inquiry from Members of Parliament “about the privacy implications of the federal government’s COVID-19 exposure notification application (COVID Alert) and the ArriveCAN application.” The Office of the Privacy Commissioner (OPC) explained:
    • Our review of the COVID Alert application highlighted serious weaknesses with our current federal privacy legislation. In this case, the government took the position that its privacy laws do not apply in light of its assertion that personal information is not collected by the application. Further, while the design of the application is good, and that the government has agreed to be subject to an independent review, the government was not bound to make these commitments. The government chose to respect the principles put forth in our guidance documents because public trust is vital to the application’s success. However, without robust laws, other programs and applications could be introduced in the future that are not so privacy-sensitive.
  • The Department of Commerce’s Bureau of Industry and Security (BIS) “added 24 Chinese companies to the Entity List for their role in helping the Chinese military construct and militarize the internationally condemned artificial islands in the South China Sea,” including a number of technology companies. BIS explained:
    • The Entity List is a tool utilized by BIS to restrict the export, re-export, and transfer (in-country) of items subject to the Export Administration Regulations (EAR) to persons (individuals, organizations, companies) reasonably believed to be involved, or to pose a significant risk of becoming involved, in activities contrary to the national security or foreign policy interests of the United States.
    • Additionally, in a related action, “the Department of State will begin imposing visa restrictions on People’s Republic of China (PRC) individuals responsible for, or complicit in, either the large-scale reclamation, construction, or militarization of disputed outposts in the South China Sea, or the PRC’s use of coercion against Southeast Asian claimants to inhibit their access to offshore resources.” The Department of State stated that “[t]hese individuals will now be inadmissible into the United States, and their immediate family members may be subject to these visa restrictions as well.”
  • The Trump Administration announced “more than $1 billion in awards for the establishment of 12 new AI and QIS research and development (R&D) institutes nationwide,” a substantial portion of which Congress would need to appropriate in future years. The White House claimed the National Science Foundation’s (NSF) Artificial Intelligence (AI) Research Institutes and the Department of Energy’s (DOE) quantum information science (QIS) Research Centers “will serve as national R&D hubs for these critical industries of the future, spurring innovation, supporting regional economic growth, and training our next generation workforce.”
  • The Trump Administration explained:
    • The National Science Foundation and additional Federal partners are awarding $140 million over five years to a total of seven NSF-led AI Research Institutes. These collaborative research and education institutes will focus on a range of AI R&D areas, such as machine-learning, synthetic manufacturing, precision agriculture, and forecasting prediction. Research will take place at universities around the country, including the University of Oklahoma at Norman, the University of Texas at Austin, the University of Colorado at Boulder, the University of Illinois at Urbana-Champaign, the University of California at Davis, and the Massachusetts Institute of Technology.
    • NSF anticipates making additional AI Research Institute awards in the coming years, with more than $300 million in total awards, including contributions from partner agencies, expected by next summer. Overall, NSF invests more than $500 million in artificial intelligence activities annually and is the largest Federal driver of nondefense AI R&D.
    • To establish the QIS Research Centers, DOE is announcing up to $625 million over five years to five centers that will be led by DOE National Laboratory teams at Argonne, Brookhaven, Fermi, Oak Ridge, and Lawrence Berkeley National Laboratories. Each QIS Center will incorporate a collaborative research team spanning multiple institutions as well as scientific and engineering disciplines. The private sector and academia will be providing another $300 million in contributions for the centers.

Further Reading

  • “Facebook takes down Russian operation that recruited U.S. journalists, amid rising concerns about election misinformation” By Elizabeth Dwoskin and Craig Timberg – The Washington Post; “Russians Again Targeting Americans With Disinformation, Facebook and Twitter Say” By Sheera Frenkel and Julian E. Barnes; “Russian internet trolls hired U.S. journalists to push their news website, Facebook says” By Kevin Collier and Ken Dilanian – NBC News. In what is more evidence that the Russian Federation’s tactics have changed even though its goals have not, Facebook and Twitter announced the takedown of content written by Americans for a fake news source created and run by the Internet Research Agency. The purported online publication, Peace Data, posted a number of articles aimed at turning far-left voters off to the Biden-Harris campaign. In a sign of evolution, however, the operation hired freelance American journalists to write content that was then amplified elsewhere on the internet. A very curious aspect of this incident is why the FBI merely tipped off Facebook and Twitter instead of taking a more vigorous approach to addressing efforts to again create distrust and chaos in a U.S. election. One of the articles claims the FBI does not respond to state-sponsored influence operations as they may not be against U.S. law.
  • “Big Tech Embraces New Cold War Nationalism” By JS Tan – Foreign Policy. This piece argues that Silicon Valley’s worldview and strategies have changed, in large part because of the rise of companies from the People’s Republic of China (PRC) like Huawei, TikTok, Tencent, and Alibaba. Now companies like Facebook and Google are discarding their internationalist, neoliberal approach and have aligned themselves with the United States (U.S.) government for a variety of reasons, including an inability to compete fairly inside the PRC. However, while Silicon Valley’s and Washington’s interests regarding the PRC may be aligned, in a number of other, very significant ways, especially with the current government, there are considerable differences.
  • “Amazon Is Spying on Its Workers in Closed Facebook Groups, Internal Reports Show” By Lauren Kaori Gurley and Joseph Cox – Vice. Another article about the online giant’s distaste for unions and labor organizing activity. In this piece, we learn that Amazon is monitoring public posts by Amazon Flex drivers and possibly even penetrating closed or private groups on platforms like Facebook, with the resulting reports then reportedly circulated extensively inside the company. The other day, Vice broke a story about Amazon posting two positions for intelligence analysts to help the company track labor organizing. The company took down the postings after the story was published.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by WikiImages from Pixabay

CPRA From Another View

Let’s see how the CPRA would work from the view of a Californian.

Of course, I analyzed California Ballot Proposition 24, the “California Privacy Rights Act,” at some length in a recent issue, but I think taking on the proposed rewrite of the “California Consumer Privacy Act” (AB 375) from a different angle may provide value in understanding what this law would and would not do. In this piece, I want to provide a sense of what the California resident would be presented with under the new privacy statute.

As noted in my article the other day, the CPRA, like the CCPA, would still not allow people to deny businesses the right to collect and process their personal information, unlike some of the bills pending in Congress. Californians could stop the sale or sharing of personal information, but not the collection and processing of personal data, short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed, subject to certain limitations and exceptions businesses are sure to read as broadly as possible.

So, businesses subject to the CPRA will have to inform people at the point of collection of “the categories of personal information to be collected and the purposes for which the categories of personal Information are collected or used and whether such Information is sold or shared.” Easy enough, as far as this goes. Say I live in Sacramento and log into Facebook; there should be notice about the categories of personal information collected (e.g. data such as IP address, physical address, name, geolocation data, browsing history, etc.). As a citizen of California afforded privacy rights by the CPRA, I would not be able to tell Facebook not to collect and process these sorts of data. I would be able to ask that it delete these data and stop selling or sharing them, subject to significant limitations on these rights. Therefore, a baseline assumption in the CPRA, as in the CCPA, is either that data collection and processing are a net good for California, its people, and its businesses, or a concession that it is too late to stop such practices, for a strong law stopping some of these practices would lead these companies, some of which are headquartered in the state, to stop offering their free services and/or leave the state.

In the same notice described in the preceding paragraph, I would also be told whether Facebook sells or shares my personal information. I would also be alerted as to whether “sensitive personal information” is being collected and if these are being sold or shared.

Of course, with both categories of information collected from people in California, the use of the data must be compatible with the disclosed purpose for collection. And, so, presumably, the notice provided to me would include the why of the data collection, but whatever the purpose, so long as it is disclosed to me, it would be legal, generally speaking, under the CPRA. The only limitation seems to be purposes incompatible with the context in which the personal information was collected.

My personal data could not be stored by a business indefinitely, for the law limits storage for each disclosed purpose to the time necessary to undertake and complete that purpose.

It must also be stressed that Californians will all but certainly be presented with notice in the physical world when they shop in larger grocery store chains, national or large regional retailers, airlines, car rental firms, etc. In the case of hotels, car rental firms, and airlines, just to name three categories of businesses likely to be covered by the CPRA and to be collecting data on people, the notice may be appended to the boilerplate contractual language no one I know reads. It may be written in the clearest language imaginable, but a person must be advised of what data are being collected, the purpose of the collection and use, and whether the data are being sold and shared. For the privacy purist, the only way not to have one’s information collected would be to not engage with these companies. Likewise, walking into a retail establishment large enough to qualify as a business under the CPRA may entail seeing notice posted somewhere in the store that personal information is being collected, possibly alongside information indicating customers are under camera surveillance.

I would be able to immediately ask the business to delete my personal information, but it would be allowed to keep this on hand during the period it is completing a transaction or providing goods or services. But there is language that may be interpreted broadly by a business to keep my personal information such as an exception to conduct a product recall or to anticipate future transactions as part of our ongoing business relationship. I would expect this to be very broadly read in favor of keeping personal data. Nonetheless, if it is a service or product used frequently, say, Amazon, then I would need to go back after every use and request my personal information be deleted. But if I placed four Amazon orders a month, the platform could reasonably deny my request because it is occurring in the course of an ongoing business transaction. There are other possible grounds on which a business might not delete a person’s personal or sensitive personal information such as ensuring the security and integrity of the service and product with the caveat that my personal information would have to somehow be “reasonably necessary and proportionate for those purposes.” Would the business make this determination? Subject to guidance or regulations?

However, the exception to the right to delete that is nearly opaque is “[t]o enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the information.” It is not clear to me the sort of “internal uses” this encapsulates. Data processing so the business can better target the person? This provision is drafted so broadly the new privacy agency must explicate it so businesses and Californians understand what this entails. Also, keep in mind, if I lived in California, I would have to repeat these deletion requests for each and every business collecting information on me.

I would be able to correct my personal information with a business but only with “commercially reasonable efforts,” suggesting cases in which correction is difficult would allow businesses to decline my request. For anyone who has ever tried to correct one’s personal information with a company, the frustration attendant on such endeavors can be significant. A major American automaker switched two letters in my wife’s last name, and no matter how many times we asked that her name be spelled correctly, this massive corporation could not or would not make the change. This may end up as a right that is largely without effect.

I would be able to ask for and receive my personal information, after a fashion. For example, I would be able to ask for and obtain the exact personal information the business has collected itself but only the categories of information obtained through means other than direct collection (i.e. from data brokers and other businesses). To make this section even more convoluted, I would also receive the categories of personal information the business has directly collected on me. Moreover, I could learn the commercial or business purposes for collection and processing and the third parties with whom my personal information is sold or shared. However, if a business includes all this and other information on its website as part of its privacy policy, it would only have to send me the specific pieces of personal information it has collected directly from me. Whatever the case, I would generally only be able to receive information from the previous 12 months.

Separately from the aforementioned rights, I could also learn to whom a business is selling, sharing, and disclosing my information. However, if we drew a Venn Diagram between this right and the previous one, the most significant right bestowed by this section of the CPRA would be that of learning “[t]he categories of personal information that the business disclosed about the consumer for a business purpose and the categories of persons to whom It was disclosed for a business purpose.”

The CPRA would provide me the right to opt out of a business selling or sharing my personal information, and businesses would need to alert people of this right. If I were between the age of 13 and 16, I would need to opt in to selling or sharing my personal information. Moreover, for my children under the age of 13, I, or my wife, would need to opt in for their personal information to be sold or shared.

I would also be able to limit the use and disclosure of my sensitive personal information to an uncertain extent. The CPRA makes clear this is not an absolute right, and businesses would be able to use a number of exceptions to continue using this class of information. For example, a business would be able to do so “to ensure security and Integrity to the extent the use of the consumer’s personal Information is reasonably necessary and proportionate for these purposes.” Likewise, a business could use sensitive personal information for “[s]hort-term, transient use, including but not limited to non-personalized advertising shown as part of a consumer’s current Interaction with the business.” There are other exceptions, and the new California state agency established by the CPRA would be able to promulgate regulations to further define those situations in which use and disclosure may continue against my wishes.

Otherwise, a business would be unable to use or disclose my sensitive personal information once I elect to stop this practice. However, this right pertains only to the use of this type of information to infer my characteristics subject to the drafting of regulations.  

I would not be discriminated against for exercising any of the rights the CPRA grants me, with a significant catch on which I’ll say more in a moment. This right would stop businesses from denying me goods or services, charging me a different price, or providing a different level of service or quality. And yet, a business would be able to charge me a different price or rate or give me a lesser level of service or product “if that difference is reasonably related to the value provided to the business by the consumer’s data.” This strikes me as a situation where the exception will eat the rule, for any business with any level of resources will claim that the value of my personal information is vital to providing me a service or product for free. If I deny it the use of this information, the value proposition has changed, and I must be charged to have the same level of service, or, absent payment, the business could only provide me with a lesser level of service or product. It is my guess that this right would be functionally null.

Moreover, this section is tied to loyalty and reward programs, which would also be exempt from this right so long as the case could be made that the value of my data justifies the difference in price or service. It is not hard to see the incentive structure here being such that businesses would likely establish new programs in order to pressure people in California not to exercise rights in the CPRA and to continue using their personal information in the current fashion. Of course, there is this provision: “[a] business shall not use financial incentive practices that are unjust, unreasonable, coercive, or usurious in nature,” but where exactly is the line between a business offering a rewards or loyalty program purportedly tied to the value of the data it collects and processes and these sorts of practices? It may be very hard to divine and will likely require a case-by-case process to delineate the legal from the illegal.

I would generally have two ways to exercise the rights I would be given under the CPRA, unless the business only operates online, in which case it would be by email. The business would have 45 days after verifying my request for my personal information or to correct or delete it to comply, and this would need to be free of charge. However, this 45-day period may be extended once so long as the business informs me. It would seem 90 days would become the de facto norm. A business may also be able to demand “authentication of the consumer that is reasonable in light of the nature of the personal information requested.” The intent is obviously for a business to be sure someone is not maliciously or mischievously trying to change someone else’s information in what may come to be an extension of doxing or other vexatious practices seen elsewhere in the online realm. However, this may also be read liberally by some businesses as a means of putting up another barrier in the way of my exercise of these rights.

I would be wise as a California resident to understand some of the global limitations of the rights bestowed by the CPRA. For example, all bets are off with respect to a business’ compliance “with federal, state, or local laws or…with a court order or subpoena to provide information.” A business would be within its legal rights to comply, my wishes be damned. Moreover, law enforcement agencies can direct businesses not to delete my personal information for up to 90 days while a proper court order is obtained. Additionally, likely as an incentive for businesses, deidentified personal information is not subject to the obligations placed on businesses, and the same is true of “aggregate consumer information.” Obviously, a business would ideally use the safe harbor of deidentification where possible in order to render stolen data less desirable and valuable to thieves. Of course, at least one study has shown that deidentified data can be used to identify and link to people fairly easily, and another stated “numerous supposedly anonymous datasets have recently been released and re-identified.” This may be a less safe harbor for my personal information than the drafters of the CPRA intend.
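To see why that research gives me pause, consider a minimal, purely hypothetical sketch of the linkage technique those studies describe: joining a “deidentified” dataset to a public record on quasi-identifiers such as ZIP code, birth date, and gender. Every record and column name below is invented for illustration.

```python
# Hypothetical illustration of re-identification by linking quasi-identifiers.
# The records are invented; real studies use voter rolls, census data, etc.
import pandas as pd

# "Deidentified" dataset: names removed, but quasi-identifiers retained.
deidentified = pd.DataFrame([
    {"zip": "94110", "birth_date": "1987-03-14", "gender": "F", "diagnosis": "asthma"},
    {"zip": "95814", "birth_date": "1990-07-02", "gender": "M", "diagnosis": "diabetes"},
])

# Public dataset that still carries names alongside the same quasi-identifiers.
public_records = pd.DataFrame([
    {"name": "Jane Roe", "zip": "94110", "birth_date": "1987-03-14", "gender": "F"},
    {"name": "John Doe", "zip": "95814", "birth_date": "1990-07-02", "gender": "M"},
])

# A simple join on the shared quasi-identifiers re-attaches identities.
reidentified = deidentified.merge(public_records, on=["zip", "birth_date", "gender"])
print(reidentified[["name", "diagnosis"]])
```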

It also bears mention that some publicly available information shall not be considered personal information under the CPRA. The catch here is that not all of my personal information in the public sphere meets the terms of this exception, for new language in the CPRA modifying the CCPA definition makes clear the information has to be “lawfully obtained,” “truthful,” and “a matter of public concern.” Additionally, businesses would be barred from using personal information made widely available that is probably not being disclosed lawfully (e.g., someone plastering my home address on social media). And yet, the California Department of Motor Vehicles (DMV) has been selling the personal information of people to private investigators, bail bondsmen, and others, a legally sanctioned activity; allowing this practice to funnel the personal information of Californians to businesses and data brokers would arguably not be a matter of public concern. Therefore, this exception may be written tightly enough to anticipate and forestall likely abuses.

Like the CCPA, the CPRA does not bear on the use of my personal information in areas already regulated elsewhere, often by the federal government, such as health information or credit information. Any rights I would have with respect to these realms would remain unaffected by the CPRA.

I would receive protection in the event of specified types of data breaches: if my personal information were neither encrypted nor redacted, the CPRA’s breach provisions come into play. Under the CCPA, if my personal information were not encrypted but was redacted and stolen, a breach would occur, and the same was true if it were not redacted but encrypted. So, this seems to be a weakening of the trigger that would allow me to sue if my personal information were subject to unauthorized exfiltration or access, theft, or disclosure. Additionally, if my “email address in combination with a password or security question and answer that would permit access to the account” were exposed or stolen, I could also sue. Moreover, any unauthorized stealing, accessing, disclosing, or exposure of my personal information must be due to a “business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information” before the breach provisions apply.
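Because the trigger turns on whether my personal information was encrypted or redacted, it is worth being concrete about what those two safeguards look like in practice. The following is a minimal sketch, not anything the CPRA prescribes, using the third-party Python cryptography package for encryption and simple masking for redaction; the value being protected is fabricated.

```python
# Minimal sketch of the two safeguards the breach provision turns on:
# encrypting a stored value and redacting it for display. Illustrative only.
from cryptography.fernet import Fernet

ssn = "123-45-6789"  # a fabricated Social Security number

# Encryption: stolen ciphertext is useless without the key, so its
# exfiltration alone would not trip the "nonencrypted" prong.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(ssn.encode())

# Redaction: mask all but the last four digits before storing or displaying.
redacted = "***-**-" + ssn[-4:]

print(ciphertext)   # opaque bytes
print(redacted)     # ***-**-6789
```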

Once a breach has occurred, however, I can sue for between $100 and $750 per incident plus actual damages, but only after giving a business 30 days to cure the breach if possible. If there are no tangible monetary damages, as is often the case in breaches, then I would be left to weigh suing to recover the statutory damages. But if it’s one breach or a handful of breaches, it may not be worth the time and effort it takes to litigate, meaning these are likely the circumstances in which class actions will thrive.
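A bit of back-of-the-envelope arithmetic with the statutory range shows why the economics point toward class actions; the class size below is invented for illustration, not a prediction of any award.

```python
# Back-of-the-envelope statutory damages under the $100-$750 per-consumer range.
STATUTORY_MIN, STATUTORY_MAX = 100, 750

def statutory_damages(consumers):
    """Return the low and high ends of aggregate statutory damages."""
    return consumers * STATUTORY_MIN, consumers * STATUTORY_MAX

# One person suing alone: $100-$750, rarely worth the cost of litigation.
print(statutory_damages(1))          # (100, 750)

# A hypothetical class of 500,000 affected Californians: $50M to $375M.
print(statutory_damages(500_000))    # (50000000, 375000000)
```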

Alternatively, the California Privacy Protection Agency would be empowered to bring actions against businesses that violate the CPRA, but the bill is silent on whether I would be made whole if I did not sue and the agency recovered money from the business.

Finally, there are provisions that contemplate technological means for people to make their preferences under the CPRA known to many businesses at the same time or with minimal repetitive effort. I suppose this envisions someone designing an app that would do the hard work for you. This seems like language designed to seed the ground in California for developers to create and offer CPRA-compliant products. Likewise, one could designate a person to undertake this work, which also suggests a market opportunity for an entity that can make the economics of such a business model work. In any event, I would likely be charged for using a service like either of these, leading one to the uncomfortable conclusion that these provisions may drive a greater bifurcation in the world of technology between the haves and have-nots.
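The CPRA does not spell out what such a technological signal would look like, but one can imagine a browser extension or app working the way the older Do Not Track header did: attaching a standardized opt-out preference to every request a person’s browser makes. The header name and URL in the sketch below are hypothetical placeholders, not anything the CPRA or its eventual regulations prescribe.

```python
# Hypothetical sketch of a universal opt-out preference signal attached to
# web requests by an extension or app acting on a person's behalf.
import requests

OPT_OUT_HEADERS = {
    "DNT": "1",                      # the long-standing Do Not Track header
    "X-CPRA-Opt-Out": "sale,share",  # invented placeholder preference header
}

# Build (without sending) a request that carries the opt-out signal,
# so the example runs without any network access.
prepared = requests.Request(
    "GET", "https://business.example/products", headers=OPT_OUT_HEADERS
).prepare()
print(prepared.headers)
```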

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by OpenClipart-Vectors from Pixabay

TikTok Sues Trump Administration

TikTok files a longshot lawsuit that may soon be moot if the company’s operations in the U.S. are sold.     

No one in the White House or Administration should be terribly surprised that TikTok decided to sue over the 6 August “Executive Order on Addressing the Threat Posed by TikTok.” The company is alleging the President and his Administration exceeded the bounds of authority granted by Congress and violated the company’s rights under the United States (U.S.) Constitution. The company wants a court to stop the Trump Administration from moving forward with implementing the executive order (EO) and to deem the EO unconstitutional and illegal. The court may rule in the short term on whether it will enjoin the Trump Administration, but it will likely take much more time to decide the substance of the case. In any event, this suit could soon be moot if ByteDance sells off its U.S. operations of TikTok to a U.S. company, for the EO would likely be rescinded in such a case.

The EO bars, starting 45 days after its issuance, all transactions by U.S. entities and people with TikTok’s parent company ByteDance and its subsidiaries. Specifically, “to the extent permitted under applicable law: any transaction [is prohibited] by any person, or with respect to any property, subject to the jurisdiction of the United States, with [ByteDance], or its subsidiaries, in which any such company has any interest…” The Trump Administration claimed:

TikTok, a video-sharing mobile application owned by the Chinese company ByteDance Ltd., has reportedly been downloaded over 175 million times in the United States and over one billion times globally.  TikTok automatically captures vast swaths of information from its users, including Internet and other network activity information such as location data and browsing and search histories.  This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

In the suit filed in United States federal court in Northern California, TikTok is asking for an injunction to stop enforcement of the EO and a declaration that it is illegal. The company specifically asserts:

The executive order and, necessarily, any implementing regulations are unlawful and unconstitutional for a number of independent reasons:

  • By banning TikTok with no notice or opportunity to be heard (whether before or after the fact), the executive order violates the due process protections of the Fifth Amendment.
  • The order is ultra vires because it is not based on a bona fide national emergency and authorizes the prohibition of activities that have not been found to pose “an unusual and extraordinary threat.”
  • The order is ultra vires because its prohibitions sweep broadly to prohibit any transactions with ByteDance, although the purported threat justifying the order is limited to TikTok, just one of ByteDance’s businesses.
  • The order is ultra vires because it restricts personal communications and the transmission of informational materials, in direct violation of International Emergency Economic Powers Act (IEEPA).
  • IEEPA lacks any intelligible principle to guide or constrain the President’s action and thereby violates the non-delegation doctrine, as the President’s overbroad and unjustified claim of authority in this matter confirms.
  • By demanding that Plaintiffs make a payment to the U.S. Treasury as a condition for the sale of TikTok, the President has taken Plaintiffs’ property without compensation in violation of the Fifth Amendment.
  • By preventing TikTok Inc. from operating in the United States the executive order violates TikTok Inc.’s First Amendment rights in its code, an expressive means of communication.

In a press release, TikTok contended

To be clear, we far prefer constructive dialogue over litigation. But with the [EO] threatening to bring a ban on our US operations – eliminating the creation of 10,000 American jobs and irreparably harming the millions of Americans who turn to this app for entertainment, connection, and legitimate livelihoods that are vital especially during the pandemic – we simply have no choice.

It bears noting that suits against a President’s use of IEEPA have rarely succeeded, including on many of the same grounds TikTok is using. Courts have rejected claims that a President’s use of these powers violates the Fifth and First Amendments and the non-delegation doctrine.

Additionally, a TikTok employee has filed suit against the Trump Administration, making some of the same arguments against the EO, but contending further

Given the severe civil and criminal penalties in place for violating the Executive Order, and the overbroad nature of its language, it is obvious that TikTok and its employees, as well as other companies involved in the process of distributing wages and salaries to U.S. employees, such as ADP, banks, and credit companies, would not dare to engage in any activity that might be construed as a violation. The broad language of the order necessarily will create a chilling effect for any person or entity that has contracted with or that does business with TikTok.

Of course, there is litigation pending against TikTok for alleged privacy violations, including one case before the same court in Northern California. A college student filed suit, arguing:

Unknown to its users, however, is that TikTok also includes Chinese surveillance software. TikTok clandestinely has vacuumed up and transferred to servers in China vast quantities of private and personally-identifiable user data that can be employed to identify, profile and track the location and activities of users in the United States now and in the future. TikTok also has surreptitiously taken user content, such as draft videos never intended for publication, without user knowledge or consent. In short, TikTok’s lighthearted fun comes at a heavy cost. Meanwhile, TikTok unjustly profits from its secret harvesting of private and personally-identifiable user data by, among other things, using such data to derive vast targeted-advertising revenues and profits. Its conduct violates statutory, Constitutional, and common law privacy, data, and consumer protections.

The plaintiff asserted TikTok violated the following U.S. and California laws and common law legal doctrines:

  • Computer Fraud and Abuse Act, 18 U.S.C. § 1030
  • California Comprehensive Data Access and Fraud Act, Cal. Pen. C. § 502
  • Right to Privacy – California Constitution
  • Intrusion upon Seclusion
  • California Unfair Competition Law, Bus. & Prof. C. §§ 17200 et seq.
  • California False Advertising Law, Bus. & Prof. C. §§ 17500 et seq.
  • Negligence
  • Restitution / Unjust Enrichment

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (28 August)

Here is today’s Further Reading, Other Developments, and Coming Events.

Coming Events

  • On 10 September, the General Services Administration (GSA) will have a webinar to discuss implementation of Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232) that bars the federal government and its contractors from buying the equipment and services from Huawei, ZTE, and other companies from the People’s Republic of China.
  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.”
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • Members of the British Parliament have written the United Kingdom’s (UK) Information Commissioner’s Office (ICO) “about the Government’s approach to data protection and privacy during the COVID-19 pandemic, and also the ICO’s approach to ensuring the Government is held to account.” The MPs argued in the letter addressed to UK ICO Commissioner Elizabeth Denham
    • During the crisis, the Government has paid scant regard to both privacy concerns and data protection duties. It has engaged private contractors with problematic reputations to process personal data, as highlighted by Open Democracy and Foxglove. It has built a data store of unproven benefit. It chose to build a contact tracing proximity App that centralised and stored more data than was necessary, without sufficient safeguards, as highlighted by the Human Rights Committee. On releasing the App for trial, it failed to notify yourselves in advance of its Data Protection Impact Assessment – a fact you highlighted to the Human Rights Committee.
    • Most recently, the Government has admitted breaching their data protection obligations by failing to conduct an impact assessment prior to the launch of their Test and Trace programme. They have only acknowledged this failing in the face of a threat of legal action by Open Rights Group. The Government have highlighted your role at every turn, citing you as an advisor looking at the detail of their work, and using you to justify their actions.
    • The MPs added:
      • In this context, Parliamentarians and the public need to be able to rely on the Regulator. However, the Government not only appears unwilling to understand its legal duties, it also seems to lack any sense that it needs your advice, except as a shield against criticism.
      • Regarding Test and Trace, it is imperative that you take action to establish public confidence – a trusted system is critical to protecting public health. The ICO has powers to compel documents to understand data processing, contractual relations and the like (Information Notices). The ICO has powers to assess what needs to change (Assessment Notices). The ICO can demand particular changes are made (Enforcement notices). Ultimately the ICO has powers to fine Government, if it fails to adhere to the standards which the ICO is responsible for upholding.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has released a 5G strategy that flows from a Trump Administration strategy released earlier this year. CISA is not asserting it has much authority over how the private sector will build, roll out, source, and secure 5G and is instead looking to capitalize on its role as the United States government’s cybersecurity agency for the civilian part of the government. As such, CISA is proposing to advise private sector stakeholders and provide its expertise so that the next generation of wireless communications in the U.S. is safe, stable, and secure. CISA is putting forth five initiatives that seek to position it as a key stakeholder in assisting both the larger U.S. effort and individual companies and entities.
    • In the “National Strategy To Secure 5G,” the Trump Administration tied its overarching effort to foster 5G development and to cement the U.S.’s role as the preeminent technological power in the world to its 2018 United States National Cyber Strategy.
    • The Administration asserted
      • This National Strategy to Secure 5G expands on how the United States Government will secure 5G infrastructure domestically and abroad. 5G infrastructure will be an attractive target for criminals and foreign adversaries due to the large volume of data it transmits and processes as well as the support that 5G will provide to critical infrastructure. Criminals and foreign adversaries will seek to steal information transiting the networks for monetary gain and exploit these systems and devices for intelligence collection and surveillance. Adversaries may also disrupt or maliciously modify the public and private services that rely on communications infrastructure. Given these threats, 5G infrastructure must be secure and reliable to maintain information security and address risks to critical infrastructure, public health and safety, and economic and national security.
    • CISA noted the four lines of efforts from the “National Strategy To Secure 5G” are:
      • Facilitating domestic 5G rollout;
      • Assessing the risks and identifying core security principles for 5G infrastructure;
      • Managing the risks to our economic and national security from the use of 5G infrastructure; and
      • Promoting responsible global development and deployment of 5G infrastructure.
    • CISA stated
      • [it] leads 5G risk management efforts so the United States can fully benefit from all the advantages 5G connectivity promises to bring. In support of CISA’s operational priority to secure 5G, as outlined in the CISA Strategic Intent, the CISA 5G Strategy establishes five strategic initiatives that stem from the four lines of effort defined in the National Strategy to Secure 5G. Guided by three core competencies: Risk Management, Stakeholder Engagement, and Technical Assistance, these initiatives include associated objectives to ensure there are policy, legal, security, and safety frameworks in place to fully leverage 5G technology while managing its significant risks. With the support of CISA and its partners, the CISA 5G Strategy seeks to advance the development and deployment of a secure and resilient 5G infrastructure, one that enables enhanced national security, technological innovation, and economic opportunity for the United States and its allied partners.
    • CISA laid out the five initiatives:
      • Strategic Initiative 1: Support 5G policy and standards development by emphasizing security and resilience
        • The development of 5G policies and standards serve as the foundation for securing 5G’s future communications infrastructure. Those entities that shape the future of these policies and standards position themselves as global leaders and help facilitate secure deployment and commercialization of 5G technologies. To prevent attempts by threat actors to influence the design and architecture of 5G networks, it is critical that these foundational elements be designed and implemented with security and resilience from the start.
        • DESIRED OUTCOME: Threat actors are unable to maliciously influence the design and architecture of 5G networks.
      • Strategic Initiative 2: Expand situational awareness of 5G supply chain risks and promote security measures
        • Between untrusted components, vendors, equipment, and networks, 5G supply chain security is under constant threat. For example, while certain 5G equipment may be from a trusted vendor, supporting components manufactured or handled by untrusted partners or malicious actors could negate any security measures in place. These compromised components have the potential to affect the connectivity and security of transmitted data and information.
        • DESIRED OUTCOME: Malicious or inadvertent vulnerabilities within the 5G supply chain are successfully prevented or mitigated.
      • Strategic Initiative 3: Partner with stakeholders to strengthen and secure existing infrastructure to support future 5G deployments
        • Before moving to a standalone infrastructure, the first iterations of 5G deployment will work alongside existing 4G LTE infrastructure and core networks. While 5G architecture is designed to be more secure, 5G’s specifications and protocols stem from previous networks, which contain legacy vulnerabilities. For example, the overlay of 4G and 5G networks has the potential for a malicious actor to carry out a downgrade attack, where they could force a user on a 5G network to use 4G in order to exploit known vulnerabilities against them. These inherent vulnerabilities, along with new and unidentified risks, will require the collaboration of industry and government to develop and communicate security enhancements to support secure 5G deployments.
        • DESIRED OUTCOME: Secure 5G deployment, void of legacy vulnerabilities and untrusted components.
      • Strategic Initiative 4: Encourage innovation in the 5G marketplace to foster trusted 5G vendors
        • As 5G is deployed, there is an emphasis on ensuring that state-influenced entities do not dominate the 5G marketplace. To address this concern, CISA will work with its partners to support R&D initiatives and prize programs that result in secure and resilient 5G technologies and capabilities. By supporting these types of efforts, CISA will help drive innovation and establish a trusted vendor community for the future of 5G.
        • DESIRED OUTCOME: Increased number of trusted vendors in the 5G marketplace to address risks posed by limited competition and proprietary solutions.
      • Strategic Initiative 5: Analyze potential 5G use cases and share information on identified risk management strategies
        • The enhanced capabilities of 5G technologies will support an array of new functions and devices, introducing a plethora of potential use cases. With the potential for the connection of billions of devices on a network, also known as massive Machine-Type Communication (mMTC), applications like smart cities will require increased security to safeguard connected devices from potential threats and vulnerabilities. To ensure the security and integrity of these devices, CISA will communicate known vulnerabilities and risk management strategies for use cases associated with securing the Nation’s critical functions.
        • DESIRED OUTCOME: New vulnerabilities introduced by deployments of 5G technology are clearly understood and managed.
  • The Office of Management and Budget (OMB) released new guidance on grants and agreements federal agencies must generally follow that further implements a ban on using United States (U.S.) government funds to buy services or equipment from Huawei, ZTE, and other companies from the People’s Republic of China (PRC). Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232) bars federal agencies, federal contractors, and recipients of federal funds from buying or using these services. Two regulations have been issued previously pertaining to agencies and contractors, and this notice governs the recipients of federal funding. However, the explanatory portion of the notice that discusses Section 889 differs from the actual regulatory text, giving rise to possible confusion over the scope and extent of the ban on recipients of federal funding buying or paying for banned services and equipment.
    • In the body of the notice, OMB stated:
      • OMB revised 2 CFR to align with section 889 of the NDAA for FY 2019 (NDAA 2019). The NDAA 2019 prohibits the head of an executive agency from obligating or expending loan or grant funds to procure or obtain, extend or renew a contract to procure or obtain, or enter into a contract (or extend or renew a contract) to procure or obtain the equipment, services, or systems prohibited systems as identified in NDAA 2019. To implement this requirement, OMB is adding a new section, 2 CFR 200.216 Prohibition on certain telecommunication and video surveillance services or equipment, which prohibit Federal award recipients from using government funds to enter into contracts (or extend or renew contracts) with entities that use covered telecommunications equipment or services. This prohibition applies even if the contract is not intended to procure or obtain, any equipment, system, or service that uses covered telecommunications equipment or services. As described in section 889 of the NDAA 2019, covered telecommunications equipment or services includes:
        • Telecommunications equipment produced by Huawei Technologies Company or ZTE Corporation (or any subsidiary or affiliate of such entities).
      • For the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes, video surveillance and telecommunications equipment produced by Hytera Communications Corporation, Hangzhou Hikvision Digital Technology Company, or Dahua Technology Company (or any subsidiary or affiliate of such entities).
      • Telecommunications or video surveillance services provided by such entities or using such equipment.
      • Telecommunications or video surveillance equipment or services produced or provided by an entity that the Secretary of Defense, in consultation with the Director of the National Intelligence or the Director of the Federal Bureau of Investigation, reasonably believes to be an entity owned or controlled by, or otherwise connected to, the government of a covered foreign country.
    • The rule itself provides that the ban extends to recipients and subrecipients themselves and not to contractors using the banned services or equipment:
      • (a) Recipients and subrecipients are prohibited from obligating or expending loan or grant funds to:
        • (1) Procure or obtain;
        • (2) Extend or renew a contract to procure or obtain; or
        • (3) Enter into a contract (or extend or renew a contract) to procure or obtain equipment, services, or systems that uses covered telecommunications equipment or services as a substantial or essential component of any system, or as critical technology as part of any system.
  • The United States (U.S.) Department of Justice (DOJ) announced a major reorganization of its Antitrust Division through the creation of “the Office of Decree Enforcement and Compliance and a Civil Conduct Task Force” and a shuffling of subject area matters “among its six civil sections in order to build expertise based on current trends in the economy.”
    • The DOJ explained
      • The Office of Decree Enforcement and Compliance will have primary responsibility for enforcing judgments and consent decrees in civil matters.  It will also advise the Antitrust Division’s criminal sections when parties seek credit at the charging stage for their corporate compliance programs.  The office will work closely with division attorneys, monitors, and compliance officers to ensure the effective implementation of and compliance with antitrust judgments.  Additionally, the office will be the Antitrust Division’s primary contact for complainants who have information regarding potential violations of those final judgments.
      • The second change to the Antitrust Division’s civil enforcement program is the creation of the Civil Conduct Task Force.  This dedicated group of Division attorneys will work across the civil sections and field offices to identify conduct investigations that require additional focus and resources.  As an independent group, the task force will have the dedicated resources and a consistent mandate to investigate and, ultimately, prosecute civil conduct violations of the antitrust laws.
      • The third change announced today is the realignment of certain responsibilities within the Antitrust Division’s six civil sections. The allocation of commodities among sections has evolved over the years, and today’s announcement is a recognition that technology has reshaped the competitive dynamics in several industries that the Antitrust Division analyzes on a regular basis.
      • Specifically, the currently named Media, Entertainment, and Professional Services Section will shift attention to financial services, fintech, and banking.  Those commodities were previously divided across three other civil sections.  The currently named Telecommunications and Broadband Section will expand its portfolio to concentrate on media, entertainment, and telecommunications industries. Lastly, the currently named Technology and Financial Services section will focus full time on technology markets and the competitive characteristics of platform business models.
  • A class action was filed in a British court against Marriott for data breaches between 2014 and 2018 that exposed the personal data of people worldwide. This action follows the United Kingdom’s (UK) Information Commissioner’s Office’s (ICO) stated intention in 2019 to fine Marriott “£99,200,396 for infringements of the General Data Protection Regulation (GDPR),” an enforcement action the ICO has since extended through mid-2020. It is unclear when, or even if, the ICO will conclude its investigation and action against Marriott given the UK’s pending exit from the European Union and the GDPR. Theoretically, the ICO may be able to use the UK’s data protection law, and it is telling that the class action is filed under both the GDPR and the UK data protection law in effect during most of the period in which the breaches occurred.
    • The law firm handling the class action asserted
      • It is believed the data breach began when the systems of the Starwood Hotels group were compromised following a hack on its reservation network, which is believed to have first occurred in 2014. Marriott International acquired the Starwood Hotels group in 2016 but the exposure of customer information was not discovered until 2018. The guests’ personal data affected by the breach included information such as guests’ names, email and postal addresses, telephone numbers, gender and credit card information.
  • The Federal Highway Administration (FHWA), a component agency of the United States (U.S.) Department of Transportation (DOT), asked for input on a draft rule “to ensure that States meet specific registration, notification, and coordination requirements to facilitate broadband infrastructure deployment in the right-of-way (ROW) of applicable Federal-aid highway projects.” The agency was directed to undertake this rulemaking by language in the “MOBILE NOW Act” that was enacted as part of “The Consolidated Appropriations Act, 2018” (P.L. 115-141). The FHWA explained “[o]nce the regulations take effect, the Section 607 requirements will apply to each State that receives funds under [the section of the United States Code that governs highway funding and projects], including the District of Columbia and the Commonwealth of Puerto Rico.” The agency added:
    • FHWA recognizes that it is in the public interest for utility facilities to use jointly the ROW of public roads and streets when such use and occupancy do not adversely affect highway or traffic safety, or otherwise impair the highway or its aesthetic quality, and does not conflict with Federal, State, or local laws and regulations. The opportunity for such joint use avoids the additional cost of acquiring separate ROW for the exclusive accommodation of utilities. As a result, the ROW of highways is often used to provide public services to abutting residents as well as to serve conventional highway needs.
    • Utility facilities, unlike most other fixed objects that may be present within the highway environment, are not owned nor are their operations directly controlled by State or local public agencies. Federal laws and FHWA regulations contained in 23 U.S.C. 109, 111, 116, and 123 and 23 CFR parts 1, 635, 645, and 710 regulate the accommodation, relocation, and reimbursement of utilities located within the highway ROW. State departments of transportation (State DOT) are required to develop Utility Accommodation policies that meet these regulations. 23 CFR 645.211.

Further Reading

  • “New Zealand stock exchange hit by cyber attack for second day” By Martin Farrer – The Guardian. A powerful offshore Distributed Denial of Service (DDoS) attack took down the nation’s stock exchange for the second day in a row. Given the apparent sophistication and resources necessary to execute this attack, according to experts, one wonders whether either of the Pacific Rim’s most active, capable nation-state hackers may be responsible: the People’s Republic of China or the Democratic People’s Republic of Korea.
  • “Israeli phone hacking company faces court fight over sales to Hong Kong” by Patrick Howell O’Neill – MIT Technology Review. Human rights attorneys have filed suit in Tel Aviv to force the Ministry of Defence to end exports of Cellebrite’s phone hacking technology to repressive regimes like Hong Kong and Belarus. It is not clear Israel ever granted Cellebrite an export license, and the Ministry is being tight-lipped on the issue. Previous filings assert Cellebrite’s technology has been used over 4,000 times in Hong Kong to hack into the phones of dissidents and activists even though many were using device encryption. Given that Cellebrite sells its technology widely throughout the world, perhaps the claims of some Five Eyes nations, including the United States, United Kingdom, and Australia, are overblown?
  • “Armed militias mobilize on social media hours before deadly Kenosha shooting” – The Atlantic Council’s Digital Forensic Research Lab. As it turns out, Facebook and Reddit posts and pages were encouraging armed individuals and militias to go to Kenosha, Wisconsin, ostensibly to ensure protests over the police shooting of an African American man in the back did not result in violence or looting. An alarming number of these posts called for violence against the protestors, and at least one person heeded this call by shooting and killing two protestors.
  • “Facebook chose not to act on militia complaints before Kenosha shooting” By Russell Brandom – The Verge. Even with people submitting complaints that various users and groups were inciting violence in Kenosha, Wisconsin, Facebook moderators declined to take down most of the material…until the day after a person shot and killed two protestors.
  • “Tech’s deepening split over ads and privacy” By Kyle Daly – Axios. This piece summarizes some of the internecine fighting in Silicon Valley over privacy, which, as the author points out, is driven by, or perhaps more kindly, happens to coincide with each company’s interests. For example, Apple faces antitrust scrutiny in the United States and European Union and does not earn much revenue from advertising, so it is easy for it to propose changes to its iOS that would give users much more control over the data companies could collect. This would hurt some of Apple’s rivals like Facebook. What is not mentioned here is that should Microsoft win the TikTok sweepstakes, it is all but certain its position on stricter privacy controls will change, for the video sharing app is built on harvesting data from users.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Free-Photos from Pixabay

CPRA Analyzed

The CCPA follow-on bill on the ballot in California will significantly change how the state regulates privacy, which will set the de facto standard for the U.S. in the absence of federal legislation.

With the “California Privacy Rights Act” (CPRA) having been successfully added to the ballot on which Californians will vote in November, it is worth taking a deeper look at the bill. This bill would replace the “California Consumer Privacy Act” (CCPA) (AB 375), which just came into full effect with the publishing of final regulations on 14 August. Nonetheless, as the Office of the Attorney General was drafting regulations, the organization that pushed for passage of the CCPA, Californians for Consumer Privacy (CCP), completed the drafting of a follow-on bill. CCP Chair and Founder Alastair Mactaggart explained his reasoning for a second ballot initiative: “[f]irst, some of the world’s largest companies have actively and explicitly prioritized weakening the CCPA…[and] [s]econd, technological tools have evolved in ways that exploit a consumer’s data with potentially dangerous consequences.” Moreover, if polling released earlier this month by CCP is close to being accurate, then an overwhelming majority of Californians support enactment of the CPRA, meaning a significantly new privacy scheme will come into effect in the next few years in California.

Of course, it may be fair to assert this bill looks to solve a number of problems created by the rush in June 2018 to draft a bill all parties could accept in order to get the CCPA removed from the ballot. Consequently, the CCPA package that was enacted was sloppily drafted in some places with inconsistent provisions that necessitated two rounds of legislation to fix or clarify the CCPA.

As under the CCPA, and unlike some of the bills pending in Congress, the CPRA would still not allow people to deny businesses the right to collect and process their personal information. Californians could stop the sale or sharing of personal information, but not the collection and processing of personal data short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed subject to certain limitations and exceptions businesses are sure to read as broadly as possible. Additionally, a new agency would be created to police and enforce privacy rights, but legitimate questions may be posed about its level of resources. Nonetheless, the new statute would come into effect on 1 January 2023, leaving the CCPA as the law of California in the short term, and then requiring businesses and people to adjust to the new regime.

In the findings section, CCP explicitly cites the bills introduced to weaken or roll back the CCPA as part of the reason the CPRA should be enacted. Changes to the California Code made by ballot initiative are much harder to change or modify than statutes enacted through the legislative route. Notably, the CPRA would limit future amendments to only those in furtherance of the act, which would rule out any attempts to weaken or dilute the new regime. Moreover, the bill looks at privacy rights through the prism of an imbalance in information and is founded on the notion that if people in California have more information and real choice in how and when their personal data are shared, processed, and collected, then the most egregious data practices would stop. Of course, this conceptual framework differs from the one used by others that views data collection and processing as being more like pollution or air quality, situations any one individual is going to have limited impact over, thus necessitating collective government action to address deleterious effects. In the view of the CCP, Californians will be on better footing to negotiate their privacy with companies like Facebook and Google. Notably, the CCP asserted:

  • In the same way that ingredient labels on foods help consumers shop more effectively, disclosure around data management practices will help consumers become more informed counterparties in the data economy, and promote competition. Additionally, if a consumer can tell a business not to sell his or her data, then that consumer will not have to scour a privacy policy to see whether the business is, in fact, selling that data, and the resulting savings in time is worth, in the aggregate, a tremendous amount of money.
  • Consumers should have the information and tools necessary to limit the use of their information to non-invasive, pro-privacy advertising, where their personal information is not sold to or shared with hundreds of businesses they’ve never heard of, if they choose to do so. Absent these tools, it will be virtually impossible for consumers to fully understand these contracts they are essentially entering into when they interact with various businesses.

The CPRA would change the notification requirements for businesses interested in collecting, processing, and sharing personal data in Section 1798.100 of the Civil Code (i.e., language added by the CCPA and some of the follow-on bills the legislature passed). This requirement would be binding on the companies that control collection and not just the entities doing the actual collecting, which suggests concern that the ultimate user of personal data could otherwise shield its identity from people. Worse still, the CCPA language may create an incentive to use front companies or third parties to collect personal data. Moreover, the CPRA makes clear that if a company is using another company to collect personal data it will ultimately control, it may meet its notice requirements by posting all the enumerated information prominently on its website. This may be a loophole large companies use to avoid informing people about who controls data collection.

The new language tightens the information people must be provided as part of this notice, namely the purposes for which personal data are collected or used and whether the entity is proposing to sell or share this information. Moreover, the CPRA would mandate that the notice also include any additional purposes for which personal data are collected and used “that are incompatible with the disclosed purpose for which the personal information was collected.”

The changes to Section 1798.100 and the underlying CCPA language that would remain would apply to a new category of information created by the CPRA: “sensitive personal information.” This term is defined to mean:

  • personal information that reveals
    • a consumer’s social security, driver’s license, state identification card, or passport number;
    • a consumer’s account log-in, financial account, debit card, or credit card number in combination with any required security or access code, password, or credentials allowing access to an account;
    • a consumer’s precise geolocation;
    • a consumer’s racial or ethnic origin, religious or philosophical beliefs, or union membership;
    • the contents of a consumer’s mail, email and text messages, unless the business is the intended recipient of the communication;
    • a consumer’s genetic data; and
  • the processing of biometric information for the purpose of uniquely identifying a consumer;
  • personal information collected and analyzed concerning a consumer’s health; or
  • personal information collected and analyzed concerning a consumer’s sex life or sexual orientation.

However, should any of these data be “publicly available” as defined by the CPRA, then it is no longer subject to the heightened requirements normally due this new class of information. For example, the new notice people must be given will list the categories of sensitive personal information collected and the purposes for which such information is collected or used. Additionally, people must be told whether this subset of personal data will be shared or sold.

The CPRA would limit collection, use, processing, and sharing of personal data to purposes “reasonably necessary and proportionate” to achieve the purpose of the information collection. Quite clearly, much will hang on what turns out to be “reasonable,” and this may be construed by the new data protection agency in regulation and ultimately by courts in litigation. However, this provision also allows the “collection, use, retention, and sharing of a consumer’s personal information…for another disclosed purpose that is compatible with the context in which the personal information was collected.” This will also need fleshing out either by regulation or litigation, or both. It seems to allow a company to specify another purpose for its data activities so long as it is compatible with the context of collection. And yet, it is not clear what would determine compatibility. If a person is agreeing to a grocery store chain’s data activities, might the company legally try to collect information regarding the person’s health?

This section also requires businesses to enter into contracts with third parties, service providers, and contractors to ensure they follow the CPRA and to specify that information sold or shared by the business is for limited and specific purposes.

Businesses are obligated to use “reasonable security procedures and practices appropriate to the nature of the personal information to protect the personal information from unauthorized or illegal access, destruction, use, modification, or disclosure.” This is a familiar construct that contemplates a sliding scale of security measures, with lesser steps being acceptable for less valuable information, say deidentified data, and higher standards being needed for more sensitive personal data. The challenge in such a regime is that reasonable minds may disagree about what measures are reasonable, but it may be the case law construing the CPRA that points the way to how businesses should secure information.

Section 1798.105 spells out a person’s right to delete personal information and expands the obligation of businesses to direct their service providers and contractors to delete information upon receipt of a valid request. Third parties would be notified of deletion requests and expected to also delete unless doing so would be impossible or “involves disproportionate effort,” a term likely to be given as expansive a reading as possible by many businesses. There still are numerous exceptions for deletion requests, many of which will also likely be read expansively by businesses reluctant to honor deletion requests, including:

  • Complete the transaction for which the personal information was collected, fulfill the terms of a written warranty or product recall conducted in accordance with federal law, provide a good or service requested by the consumer, or reasonably anticipated by the consumer within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract between the business and the consumer.
  • Help to ensure security and integrity to the extent the use of the consumer’s personal information is reasonably necessary and proportionate for those purposes.
  • Debug to identify and repair errors that impair existing intended functionality.
  • Exercise free speech, ensure the right of another consumer to exercise that consumer’s right of free speech, or exercise another right provided for by law.
  • To enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the information.
  • Comply with a legal obligation.

However, the CPRA would eliminate the exception in the CCPA that could be used to deny deletion requests, which allowed a business to “use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.”

The CPRA creates a new section, 1798.106, titled “Consumers’ Right to Correct Inaccurate Personal Information,” that requires businesses to correct inaccurate personal information in light of the type of information and why it is being processed. Businesses must disclose that people have this right if a person submits a verifiable request to correct inaccurate personal information. However, companies are only required to make reasonable commercial efforts in correcting inaccurate personal information. It appears that a rulemaking will be necessary to flesh out what constitutes reasonable commercial efforts.

Section 1798.110 is amended by the CPRA but more or less keeps the CCPA’s right to know about and access the personal information being collected about them, with some significant changes. For example, there is an expansion of one of the categories of information businesses must provide to people who utilize this right: the commercial or business purpose for which personal information is collected and sold, which must already be given to requesters under the CCPA. Under the CPRA, businesses would also need to inform people of the other entities with whom they share personal information, thus closing a significant loophole, for companies like Facebook share people’s information but do not sell it. Under the CCPA, a Facebook would not need to divulge to a person the companies with which it is sharing that person’s information.

Also, the CPRA would deem in compliance those companies that post on their websites the categories of personal information, the sources of this information, its business or commercial purposes, and the categories of third parties to whom personal information is disclosed. It seems likely many companies will go this route, meaning the only personal information they would need to furnish upon a request would be the specific pieces of information on the person making the request. And yet, the CPRA strikes the CCPA requirement that businesses keep personal information for one-time transactions or to reidentify or link to these data.

Section 1798.115 of the CCPA would also be changed by expanding the universe of data a person may request and receive regarding how their personal information is shared and sold. The CPRA keeps the basic structure of the CCPA in this regard and merely expands it to include shared as well as sold for the following:

  • The categories of personal information sold or shared and the categories of third parties to whom such information was sold or shared
  • The categories of personal information disclosed about a person for business purposes and the categories of persons to whom such information was disclosed

Third parties would be barred from selling or sharing personal information that has been sold to or shared with them unless they provide explicit notice and people have the opportunity to opt out.

The CPRA similarly changes Section 1798.120 (aka the right to opt out of the sharing or selling of one’s personal information). However, it keeps the CCPA’s right to opt out of sales or sharing at any time. Likewise, teenagers between 13 and 16 would need to affirmatively agree to selling or sharing, and for any child under 13, his or her parents must affirmatively agree.

A new Section 1798.121, a “Consumers’ Right to Limit Use and Disclosure of Sensitive Personal Information,” would allow people to stop businesses from collecting and using sensitive personal information in some cases. As a general matter, if a person limits collection or use of this class of information, then the business would be limited to “that use which is necessary to perform the services or provide the goods reasonably expected by an average consumer who requests such goods or services,” subject to some of the exceptions embodied in the definition of “business purpose.” Businesses may, however, provide notice of additional uses of sensitive personal information that a person must further limit if these new uses are objectionable or unwanted.

The CPRA changes the provision barring retaliation against people who opt out of certain practices or use their CCPA rights. The general prohibition on punishing people who use their rights under the bill with different prices, services, or products would be maintained. The CPRA would expand this protection to employees, contractors, and applicants for employment. However, the CPRA keeps the CCPA exemption for so-called loyalty programs to offer different prices or services, but only if the difference is reasonably related to the value the person’s data provides to the business. The CCPA contains language requiring the linkage to be directly related, so this change may be seen as a subtle weakening of the connection between the value of a person’s data and the rewards or prices offered through membership in a loyalty program. This will almost certainly result in businesses in California using current programs or establishing new programs to press people to share personal information in exchange for better prices or services. After all, all they would need to do is show the value of the person’s data is reasonably related to the advantages of membership. Like other similar provisions in the bill, regulation and litigation will define the parameters of what is reasonably related. Like the CCPA, the new bill would require people to opt into such programs, and should a person refuse, the business would need to wait 12 months before making the offer again.

Many of the previously discussed changes to the CCPA necessitate alterations to a key section of the statute, Section 1798.130, that details notice, disclosure, correction, and deletion requests. Businesses with physical locations must still offer two means for people to make such requests, but the CPRA would allow online businesses to merely make available an email address. Anyone who has ever tried to resolve disputes and problems via email knows this process can often be frustrating, yet the new statute would nonetheless permit companies like Facebook or Google to offer nothing more than an email address.

The new 1798.130 also makes clear the 45-day window for businesses to deliver required information to people after receiving a verified request also includes making requested corrections and deletions. A potential hurdle is established for requests, however. In light of the type of information in question, a business may seek to authenticate a person’s identity before granting the request but may not force a person to create an account with the business if they do not have one. To be fair, this provision may be aimed at the mischief that could be created if a person decides to impersonate someone else and ask businesses to delete their personal information. There are likely other situations in which a malicious person could wreak havoc.

In any event, the disclosure of information would need to cover the previous 12 months under the CPRA, and after new regulations are put in place, people would be able to ask for and receive information stretching back before the preceding 12 months. But such a request could be denied on the grounds of impossibility or disproportionate effort. Presumably the new regulations would spell out when these exceptions apply. Another limitation on this right is that businesses would not need to provide personal information collected before 1 January 2022.

If a person submits a request to a business’ contractor or service provider to learn what personal information has been collected, sold, or shared, that entity has no obligation to respond. And yet, these entities must assist a business that receives such requests.

The CPRA stipulates that businesses are required to provide the following types of information if a person asks for the data the entity has:

the categories of sources from which the consumer’s personal information was collected; the business or commercial purpose for collecting, or selling or sharing the consumer’s personal information; and the categories of third parties to whom the business discloses the consumer’s personal information.

A business is also obligated to provide the “specific pieces of personal information obtained from the consumer in a format that is easily understandable to the average consumer, and to the extent technically feasible, in a structured, commonly used, machine-readable format, which also may be transmitted to another entity at the consumer’s request without hindrance.”
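
To make the portability requirement concrete, here is a minimal sketch, in Python, of what a structured, machine-readable export of the “specific pieces” of personal information might look like. The CPRA does not prescribe a schema or format, so the field names and the use of JSON below are illustrative assumptions only.

```python
import json

# Hypothetical example of a structured, machine-readable response to a
# verified consumer request; the CPRA does not mandate this schema or JSON.
consumer_export = {
    "request_id": "REQ-2020-0001",  # illustrative identifier
    "categories_collected": ["identifiers", "geolocation data",
                             "internet activity"],
    "sources": ["consumer directly", "advertising networks"],
    "business_or_commercial_purposes": ["order fulfillment", "analytics"],
    "third_parties_disclosed_to": ["payment processors"],
    "specific_pieces": {
        "email_address": "consumer@example.com",
        "postal_address": "123 Main St, Sacramento, CA",
    },
}

# Serializing to JSON yields a commonly used, machine-readable format that
# could be transmitted to another entity at the consumer's request.
print(json.dumps(consumer_export, indent=2))
```

JSON is used here only because it is a commonly used, structured format; CSV, XML, or another format could presumably satisfy the same requirement.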

When a person asks to know what, if any, personal information was sold or shared about them, a business must furnish two lists:

  • A list of the categories of personal information it has sold or shared about consumers in the preceding 12 months by reference to the enumerated category or categories in [the revised definition of personal information and new definition of sensitive personal information] that most closely describe the personal information sold or shared, or if the business has not sold or shared consumers’ personal information in the preceding 12 months, the business shall prominently disclose that fact in its privacy policy.
  • A list of the categories of personal information it has disclosed about consumers for a business purpose in the preceding 12 months by reference to the enumerated category in subdivision (c) that most closely describes the personal information disclosed, or if the business has not disclosed consumers’ personal information for a business purpose in the preceding 12 months, the business shall disclose that fact.

The categories of personal information a business must provide are “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following if it identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household:

(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.

(B) Any personal information described in subdivision (e) of Section 1798.80.

(C) Characteristics of protected classifications under California or federal law.

(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.

(E) Biometric information.

(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet website, application, or advertisement.

(G) Geolocation data.

(H) Audio, electronic, visual, thermal, olfactory, or similar information.

(I) Professional or employment-related information.

(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).

(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.

The CPRA modifies the CCPA standards on links on a business’ website allowing people to opt out of the sale or sharing of personal information. It also adds a requirement that such a link be placed on a website to allow people to opt out of the use or disclosure of sensitive personal information. A business would now be allowed to have one link for both if it wants, and it would also be allowed to remind people of the advantages of being a member of the business’ loyalty program and any charges or fees associated with not joining. This provision would seem to allow some businesses, at least those who can make the case of a reasonable relation between the discounts provided and the value of personal information, to pose a possibly uncomfortable dilemma to people: your privacy or your money. Put another way, the CPRA may well result in a price being put on one’s privacy, with those of means or those intensely dedicated to privacy being able or willing to limit these practices while everyone else acquiesces in the face of higher prices or worse services or products. Additionally, companies would not need to have links on their websites if they allow for opting out through their platform, technology, or app.

If a person opts out, companies would have to wait 12 months before asking again whether they will allow the business to sell or share their personal information or use or disclose their sensitive personal information. But, one should bear in mind that even if a person opts out of the sale or sharing of personal information, a business may still collect or process it subject to other requirements in the CPRA. This right is limited to the dissemination of personal information through sales or a sharing arrangement.

The CPRA revises some key definitions and introduces new definitions, the most significant of which was discussed earlier: sensitive personal information. Another key change is to the criteria for businesses subject to the CPRA. Each of the three thresholds for becoming a regulated business is changed; a rough sketch of how the prongs operate follows the list:

  • First, language is changed to make clear a company must have earned $25 million in gross revenues in the preceding year to qualify on the basis of income.
  • Second, the threshold for the number of people is changed. It is raised from 50,000 to 100,000, and instead of counting people and devices, the latter is stricken and households may now be counted. Obviously, a household will likely include multiple devices, so counting by household generally allows for a higher threshold. Also, the counting is limited to the activities of businesses buying, selling, or sharing personal information, so mere collection and processing does not count, meaning a business that does not partake in any of the three enumerated activities would not qualify under this prong even if it collects and processes the personal information of, say, 1 million Californians.
  • Third, the threshold for businesses deriving 50% or more of their income from selling consumers’ personal information is broadened to include sharing, meaning more entities might qualify on the basis of this prong.
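
For illustration only, here is a rough sketch in Python of how the three revised thresholds operate as alternative prongs. The function and variable names are hypothetical, and the logic is a simplification of the statutory criteria, not legal guidance.

```python
# Hypothetical sketch of the CPRA's three alternative thresholds for
# qualifying as a regulated "business"; satisfying any one prong suffices.
def is_cpra_business(gross_revenue_prior_year: float,
                     consumers_or_households_bought_sold_shared: int,
                     share_of_revenue_from_selling_or_sharing: float) -> bool:
    meets_revenue_prong = gross_revenue_prior_year > 25_000_000
    meets_volume_prong = consumers_or_households_bought_sold_shared >= 100_000
    meets_share_prong = share_of_revenue_from_selling_or_sharing >= 0.50
    return meets_revenue_prong or meets_volume_prong or meets_share_prong


# A firm that merely collects and processes data on 1 million Californians,
# but neither buys, sells, nor shares it, would not qualify under the volume
# prong (the second argument counts only buying/selling/sharing activity).
print(is_cpra_business(10_000_000, 0, 0.0))   # False
print(is_cpra_business(30_000_000, 0, 0.0))   # True (revenue prong)
```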

Also of note, the definition of business purpose was altered, and new definitions are provided for consent, contractor, cross-context behavioral advertising, dark pattern, non-personalized advertising, and others.

The section on exemptions to the bars in the CCPA is rewritten and expanded by the CPRA. Businesses may disregard the obligations placed on them by this privacy statute under a number of circumstances. For example, added circumstances include complying with a subpoena or court order or responding to direction by law enforcement agencies. Moreover, government agencies would be able to make emergency requests for personal information to businesses if the agency acts in good faith, asserts a legal right to the information, and follows up with a court order within three days. There is also language that adds contractors to the CCPA’s provisions on the liability of a business for violations by its service providers, which requires actual knowledge of such violations.

The CPRA keeps the CCPA’s grant of authority allowing people to sue for violations but quietly tightens the circumstances under which this may happen to those in which one’s personal information is neither encrypted nor redacted. Under the revised statute, a suit would be available only if a person’s personal information is neither encrypted nor redacted; consequently, if a business uses either method of securing information, it cannot be sued.

As noted, the bill would establish a California Privacy Protection Agency that would take over enforcement of the revised CCPA from the Office of the Attorney General. It would consist of a five-member board including a chair. At the earlier date of either 1 July 2021 or six months after the new agency informs the Attorney General it is ready to begin drafting rules, it shall have rulemaking authority. However, before this date, the Attorney General may have the authority or opportunity to begin some of the CPRA rulemakings during an interregnum, which may serve to complicate implementation. Nonetheless, among other powers, the new agency would be able to investigate and punish violations with fines of up to $2,500 per violation except for intentional violations and those involving the personal information of minor children, which could be fined at a rate of $7,500 per violation. Like the Federal Trade Commission, the California Privacy Protection Agency would be able to bring administrative actions inside the agency or go to court to sue. However, this new entity would only be provided $5 million during its first year and $10 million a year thereafter, which raises the question of whether the new agency will be able to police privacy in California in a muscular way.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by David Mark from Pixabay

Further Reading, Other Developments, and Coming Events (26 August)

Here are today’s Further Reading, Other Developments, and Coming Events.

Coming Events

  • On 10 September, the General Services Administration (GSA) will have a webinar to discuss implementation of Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232) that bars the federal government and its contractors from buying the equipment and services from Huawei, ZTE, and other companies from the People’s Republic of China.
  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?.” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” By 21 August, the FTC “is seeking comment on a range of issues including:
    • How are companies currently implementing data portability? What are the different contexts in which data portability has been implemented?
    • What have been the benefits and costs of data portability? What are the benefits and costs of achieving data portability through regulation?
    • To what extent has data portability increased or decreased competition?
    • Are there research studies, surveys, or other information on the impact of data portability on consumer autonomy and trust?
    • Does data portability work better in some contexts than others (e.g., banking, health, social media)? Does it work better for particular types of information over others (e.g., information the consumer provides to the business vs. all information the business has about the consumer, information about the consumer alone vs. information that implicates others such as photos of multiple people, comment threads)?
    • Who should be responsible for the security of personal data in transit between businesses? Should there be data security standards for transmitting personal data between businesses? Who should develop these standards?
    • How do companies verify the identity of the requesting consumer before transmitting their information to another company?
    • How can interoperability among services best be achieved? What are the costs of interoperability? Who should be responsible for achieving interoperability?
    • What lessons and best practices can be learned from the implementation of the data portability requirements in the GDPR and CCPA? Has the implementation of these requirements affected competition and, if so, in what ways?”
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Assistant Director for Infrastructure Security Brian Harrell has resigned and left CISA. Harrell is returning to the private sector and will be replaced by CISA Deputy Assistant Director Steve Harris in an acting capacity.
  • The Federal Communications Commission (FCC) announced “the successful conclusion of bidding in its auction of Priority Access Licenses in the 3550-3650 MHz band…which was designated as Auction 105, made available the greatest number of spectrum licenses ever in a single FCC auction.” The FCC stated “[t]his 70 megahertz of licensed spectrum will further the deployment of 5G, the next generation of wireless connectivity, as well as the Internet of Things and other advanced spectrum-based services.” The FCC added:
    • Bidding in the auction of 70 megahertz of Priority Access Licenses (PALs) in the 3550-3650 MHz band (Auction 105) concluded today following round 76. Gross proceeds reached $4,585,663,345, and bidders won 20,625 of 22,631, or more than 91.1%, of available licenses. The FCC will release a public notice in a few days providing detailed auction results, including the names of Auction 105 winning bidders, and announcing deadlines for payments and the filing of long-form applications, as well as other post-auction procedures needed for the prompt issuance of licenses. That information, as well as other information about Auction 105, will be available at: https://www.fcc.gov/auction/105.  
  • The United States (U.S.) Federal Bureau of Investigation (FBI) and Cybersecurity and Infrastructure Security Agency (CISA) issued a Joint Cybersecurity Advisory “in response to a voice phishing (vishing) campaign.” The agencies said “[v]ishing is a form of criminal phone fraud, using social engineering over the telephone system to gain access to private personal and financial information for the purpose of financial reward.” Vishing was reportedly a key component in the recent Twitter hack and a breach of Israeli defense firms.
    • The FBI and CISA stated:
      • The COVID-19 pandemic has resulted in a mass shift to working from home, resulting in increased use of corporate virtual private networks (VPNs) and elimination of in-person verification. In mid-July 2020, cybercriminals started a vishing campaign—gaining access to employee tools at multiple companies with indiscriminate targeting—with the end goal of monetizing the access. Using vished credentials, cybercriminals mined the victim company databases for their customers’ personal information to leverage in other attacks. The monetizing method varied depending on the company but was highly aggressive with a tight timeline between the initial breach and the disruptive cash-out scheme.
  • At a press conference at the Department of Defense (DOD), Undersecretary of Defense for Acquisition and Sustainment Ellen Lord provided more detail on the waiver the Trump Administration granted for some purchases of services and equipment from the People’s Republic of China. Regarding the Section 889 waiver, Lord stated
    • The waiver was granted temporarily by ODNI. It’s only in effect until September 30th in order to provide time to review the full details of the rule implementation using additional information from DOD. 
    • The waiver covers items that are considered low-risk to national security such as food, clothing, maintenance services, construction materials that are not electronic, and numerous other items that ODNI has identified as commodities, low-risk commodities. 
    • The waiver received is not for our major weapons systems or any support activity related to them. The short-term waiver is important so that end-of-fiscal-year activity will not be impacted. We are balancing warfighter readiness and completing end-of-year purchases to avoid issues with expiring funds with rule implementation for the next 45 days. DOD is not seeking a broader waiver request at this time. 
    • As we eliminate Chinese telecommunications equipment from our supply chain, we know that there are challenges for our industry partners, but we are pleased to see the defense industrial base stepping up smartly. This is the right thing for our national security. 
    • We’re pleased to see the efforts of our major primes in being proactive to eliminate the prohibited equipment, and we continue to remain in constant dialogue. We will keep you updated as we move forward. 
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has updated its “Essential Critical Infrastructure Workers Guidance” by issuing Version 4.0. CISA stated “[w]hile earlier versions were primarily intended to help officials and organizations identify essential work functions in order to allow them access to their workplaces during times of community restrictions, Version 4.0 identifies those essential workers that require specialized risk management strategies to ensure that they can work safely. It can also be used to begin planning and preparing for the allocation of scarce resources used to protect essential workers against COVID-19.”
    • In the guidance, CISA explained
      • This list is intended to help State, local, tribal, territorial officials and organizations endeavor to protect their workers and communities as they continue to reopen in a phased approach, coupled with the need to ensure continuity of functions critical to public health and safety, as well as economic and national security. Decisions informed by this list should also take into consideration worker safety, workplace settings, as well as additional public health considerations based on the specific COVID-19-related concerns of particular jurisdictions. This list is advisory in nature.
    • CISA stressed:
      • It is not, nor should it be considered, a federal directive or standard. Additionally, this advisory list is not intended to be the exclusive list of critical infrastructure sectors, workers, and functions that should continue to work safely during the COVID-19 response across all jurisdictions. (emphasis in the original)
    • CISA asserted
      • The advisory list identifies workers who conduct a range of operations and services that are typically essential to continued critical infrastructure viability, including staffing operations centers, maintaining and repairing critical infrastructure, operating call centers, working construction, and performing operational functions, among others. It also includes workers who support crucial supply chains and enable functions for critical infrastructure. The industries they support represent, but are not limited to, medical and healthcare, telecommunications, information technology systems, defense, food and agriculture, transportation and logistics, energy, water and wastewater, and law enforcement
  • The United States (U.S.) Department of Energy’s (DOE) Artificial Intelligence and Technology Office (AITO) “announced the creation of the First Five Consortium (First Five).” The DOE has adapted Pentagon-developed artificial intelligence/machine learning to help U.S. first responders make better, faster decisions in the event of a disaster. However, this effort was co-led by Microsoft and involved a range of other stakeholders.
    • DOE explained
      • Co-Chaired with Microsoft Corporation, First Five was formed in response to the January 2020 White House Executive Forum focused on Humanitarian Assistance and Disaster Response. This cross-cut of industry, government, non-profit, and academia has pledged their in-kind support to develop solutions that will improve the impact mitigation of natural disasters in the United States.
      • DOE’s Pacific Northwest National Laboratory is currently scaling a prototype initially developed by the Department of Defense (DOD) Joint Artificial Intelligence Center (JAIC) that uses deep learning algorithms to provide near real-time data to improve the decision making of our nation’s First Responders. Since 2019, the JAIC has led the development of AI capability through its National Mission Initiatives.
      • To support this work, Microsoft recently established a critical infrastructure team to help advance the nation’s key systems, services, and functions essential to the operation of American society and its economy. Comprehensive data collection together with modeling hold huge promise for forecasting and detecting early signs of coming disasters. The development of life-saving AI algorithms can help responders better focus their aid and make for a faster and safer response. The team will explore avenues to use AI, confidential computing, modernized communications, distributed systems, and cybersecurity to improve disaster resilience, collaborating with DOE, DOD, and others.
  • The Federal Aviation Administration (FAA), Department of Justice (DOJ), Federal Communications Commission (FCC), and Department of Homeland Security (DHS) published “an advisory guidance document to assist non-federal public and private entities interested in using technical tools, systems, and capabilities to detect and mitigate Unmanned Aircraft Systems (UAS).” This guidance document is not binding on entities seeking to detect or mitigate UAS but instead surveys some of the federal laws that limit the use of such capabilities, especially with respect to privacy and surveillance.
    • The agencies stated
      • The advisory is intended to provide an overview of potentially applicable federal laws and regulations, as well as some factors relevant to whether those laws may apply to particular actions or systems. Specifically, this advisory addresses two categories of federal laws that may apply to UAS detection and mitigation capabilities: (1) various provisions of the U.S. criminal code enforced by DOJ; and (2) federal laws and regulations administered by the FAA, DHS, and the FCC. The advisory does not address state and local laws, which UAS detection and mitigation capabilities may also implicate. Neither does it cover potential civil liability flowing from the use of UAS detection and mitigation technologies
      • This advisory is provided for informational purposes only. It is strongly recommended that, prior to the testing, acquisition, installation, or use of UAS detection and/or mitigation systems, entities seek the advice of counsel experienced with both federal and state criminal, surveillance, and communications laws. Entities should conduct their own legal and technical analysis of each UAS detection and/or mitigation system and should not rely solely on vendors’ representations of the systems’ legality or functionality. As part of that analysis, entities should closely evaluate and consider whether the use of UAS detection and mitigation capabilities might impact the public’s privacy, civil rights, and civil liberties. This is particularly important because potential legal prohibitions, as discussed below, are not based on broad classifications of systems (e.g., active versus passive, detection versus mitigation), but instead are based on the functionality of each system and the specific ways in which a system operates and is used. A thorough understanding of both applicable law and the systems’ functionality will ensure important technologies designed to protect public safety, by detecting and/or mitigating UAS threats, are used effectively, responsibly, and legally.
  • A United States Department of Homeland Security (DHS) advisory body has reported to President Donald Trump on software-defined networking in response to a request from the Executive Office of the President that it examine “the implications of software-defined networking (SDN) on the Nation’s national security and emergency preparedness (NS/EP) communications and information and communications technology (ICT) infrastructure.”
    • The National Security Telecommunications Advisory Committee (NSTAC) explained
      • In networking, SDN and network functions virtualization (NFV) represent an ongoing shift away from legacy technologies based upon hardware to software based networks that leverage standard, commercial off-the-shelf, or commodity-based hardware.
      • This shift is structurally transforming the ICT ecosystem and allowing networks to become more flexible and adaptive. SDN’s more flexible architecture has proven to be beneficial during the ongoing response to the coronavirus (COVID-19) pandemic.
      • The NSTAC examined best practices for SDN and related technologies; identified the associated challenges and opportunities; and assessed current utilization and corresponding risk mitigations. Building off the recommendations outlined in the 2017 NSTAC Report to the President on Emerging Technologies Strategic Vision, this examination sought to make specific recommendations to the EOP regarding SDN policy.
    • NSTAC made these and other recommendations:
      • The Administration should encourage and support the continued deployment of SDN technology in the U.S. and allied nation ICT environments. Policymakers should consider how to promote the use of open architectures with particular focus on 5G and beyond.
      • The Defense Community and the Intelligence Community (IC) should expand efforts to define their specific requirements and use cases for SDN and related technology specific to their unique needs, which can be shared with private sector SDN providers and relevant standards bodies. In collaboration with the private sector, the Defense Community and IC should also determine how the capabilities might be leveraged for adoption in the national security environment.
      • The Government should establish policies to help educate U.S. departments, agencies, and critical infrastructure operators on the full range of SDN and related technology capabilities to enhance their mission performance, improve security, and lower costs.
      • Working with Congress, the Administration should: (1) establish policies and incentives to encourage U.S.-based investment and innovation in research and development of SDN and related technology capabilities and standards; (2) encourage best practices for secure implementation; and (3) promote deployment of these capabilities within the U.S. Government and allied nation ICT environments. Policymakers should also consider updating acquisition strategies and mechanisms around SDN and related technology-based services.
  • The Australian Strategic Policy Institute released a report titled “Hunting The Phoenix” that “focuses on overseas talent-recruitment operations—how the Chinese Communist Party (CCP) goes abroad to hunt or lure” technology talent as a means of leveling the playing field with the United States (U.S.) and other nations.
    • ASPI asserted
      • The CCP’s use of talent-recruitment activity as a conduit for non-transparent technology transfer presents a substantial challenge to governments and research institutions. Many of those activities fly under the radar of traditional counterintelligence work, yet they can develop into espionage, interference and illegal or unethical behaviour.
      • While this phenomenon may still be poorly understood by many governments and universities, it can often be addressed by better enforcement of existing regulations. Much of the misconduct associated with talent-recruitment programs breaches existing laws, contracts and institutional policies. The fact that it nonetheless occurs at high levels points to a failure of compliance and enforcement mechanisms across research institutions and relevant government agencies. Governments and research institutions should therefore emphasise the need to build an understanding of CCP talent-recruitment work. They must also ensure that they enforce existing policies, while updating them as necessary. This report recommends the introduction of new policies to promote transparency and accountability and help manage conflicts of interest.
    • The United States (U.S.) Department of State provided ASPI with $145,600, which may have biased the final product, so caveat lector.

Further Reading

  • “California DMV Is Selling Drivers’ Data to Private Investigators” By Joseph Cox – Vice. In following up on previous articles about various state Departments of Motor Vehicles (DMV) around the United States (U.S.) selling people’s personal information, this reporter got his hands on a list of the entities the California DMV is sharing such information with, and it includes private investigators, bail bondsmen, and employers whose employees drive as part of their duties. Previously, it has been disclosed that the CA DMV made $50 million a year doing this even though the agency claims this amount merely recovers its costs. No word in this article on whether recipients of this information are barred from sharing or selling it. Earlier this month, eight House Democrats and two Members of the California Assembly wrote the DMV with their concern about these practices and the practice of sharing driver’s license photos with law enforcement agencies for facial recognition technology.
  • “Facebook Braces Itself for Trump to Cast Doubt on Election Results” By Mike Isaac and Sheera Frenkel – The New York Times. In an article that seems sourced right out of Facebook headquarters, the reader is treated to the dilemmas facing the social media giant and competitors if President Donald Trump or others use their platforms to try and delegitimize an adverse or uncertain election result. There are plenty of options being discussed, but few decisions being made.
  • “America’s Terrible Internet Is Making Quarantine Worse” By Olga Khazan – The Atlantic. The digital divide telecommunications advocates have been decrying for years has been exacerbated during the pandemic. Because the United States (U.S.) opted to treat broadband internet like a consumer product instead of a public utility (as many nations in Western Europe did), there are wide disparities in availability, quality, and speed that are further feeding inequities in the educational system. Affluent students have no trouble with online learning, while less wealthy students may not be able to afford service or their service may not allow for Zoom classes. The U.S. may need to use the same methods deployed during the New Deal to rectify differences in electricity availability to close the digital divide.
  • “Trump pressures head of consumer agency to bend on social media crackdown” By Leah Nylen, John Hendel and Betsy Woodruff Swan – Politico. It comes as no surprise that President Donald Trump is leaning on Federal Trade Commission Chair Joe Simons to act according to the former’s executive order purportedly regarding online censorship. The two have met twice and the issue has arisen, but the unnamed sources in the article did not relate the result of the conversation. Before a Senate committee earlier this month, Simons poured cold water on the notion the agency will wade into the fight over implementation of the executive order that could strip away more protection for technology companies under 47 U.S.C. 230.
  • “With Hacks and Cameras, Beijing’s Electronic Dragnet Closes on Hong Kong” By Paul Mozur – The New York Times. After passage of the new security law that changed civil liberties in Hong Kong, the police and security services are threatening and arresting pro-democracy activists and politicians. They are also using technological means to press these advocates such as hacking into Facebook accounts and forcing people to provide access to their phones. Many technology companies are refusing to honor requests for information or access from officials and are now treating them the same way they would for requests from Beijing.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Sasin Tipchai from Pixabay

EDPB Steps Into Twitter Investigation

The body that consists of and oversees the EU’s DPAs will use its power under the GDPR to resolve a dispute between agencies over the punishment for Twitter’s data breaches.

The European Data Protection Board (EDPB) will soon have the opportunity to use a key power for the first time since its inception in order to resolve a dispute among data protection authorities (DPA) in the European Union (EU). Unnamed DPAs have objected to a proposed ruling by Ireland’s Data Protection Commission (DPC), the lead DPA investigating Twitter’s 2018 and 2019 data breaches. Consequently, per the General Data Protection Regulation (GDPR), the disagreement has been handed off to the EDPB, and depending on how this matter is resolved, the body could decide Twitter’s punishment, including a possible fine of up to 4% of its worldwide revenue. What’s more, the DPC is the lead agency investigating Facebook’s WhatsApp and Instagram, among other large technology companies, and may have to relinquish those decisions as well if other DPAs disagree with the DPC’s proposed punishment for any wrongdoing.

The DPC submitted its draft decision on the Twitter breaches to other DPAs in May in accordance with Article 60 of the GDPR. The DPC stated “[t]he draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR” (i.e., the provisions pertaining to data breach notification). The DPC further explained

  • This draft decision is one of a number of significant developments in DPC inquiries into “big tech” companies this week. Deputy Commissioner Graham Doyle has confirmed that: “In addition to submitting this draft decision to other EU supervisory authorities, we have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their final submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also for Article 60 purposes.  The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.“
  • The DPC has also completed the investigation phase of a complaint-based inquiry which focuses on Facebook Ireland’s obligations to establish a lawful basis for personal data processing. This inquiry is now in the decision-making phase at the DPC.

Article 65 of the GDPR provides that the EDPB will make a binding decision on an investigation where “a supervisory authority concerned has raised a relevant and reasoned objection to a draft decision of the lead authority or the lead authority has rejected such an objection as being not relevant or reasoned.” In this case, at least one DPA has raised an objection to the DPC’s draft decision, thus triggering Article 65. The EDPB then has a month to get two-thirds of its members to agree to a binding decision it may draft. If this is not achieved, the Board has another two weeks to reach a simple majority, and if the members are split, the vote of EDPB Chair Andrea Jelinek would decide. Consequently, it is possible the EDPB redrafts the DPC decision and tries to get buy-in from the DPAs that make up the Board to support a stronger punishment of Twitter.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Roland Mey from Pixabay

National Privacy Legislation Stalled in U.S.

The chances for U.S. privacy legislation are worse now than they were before the pandemic.  However, there may be some decision points approaching.     

A few weeks into the traditional August recess, Congress is no closer to enacting federal privacy legislation than before the pandemic. In fact, such legislation may be further from being sent to the White House now that more pressing, more immediate matters have eclipsed privacy, such as further COVID-19 relief legislation and appropriations for the next fiscal year set to start on 1 October. There is always the chance stakeholders will dispense with their entrenched positions during a post-election session and reach agreement on a bill, but this will depend on the election results, for if Democrats take the White House and Senate, they may well conclude they will get privacy legislation more to their liking next year.

The present impasse emanates from a few different issues: a private right of action for people and state preemption. Generally speaking, Democrats favor the former and oppose the latter, with Republicans’ position being the opposite. However, it is possible the two parties can agree on a limited right for people to sue companies for violating their privacy rights and some form of preemption of contrary state laws, perhaps along the lines of the preemption structure in the “Financial Services Modernization Act of 1999” (P.L. 106–102) (aka the Gramm–Leach–Bliley Act) that sets a uniform floor for privacy and data security that states may regulate above. However, industry stakeholders are likely resisting any such provisions, for they would still face litigation, likely in the form of class actions, and varied, differing privacy standards across the U.S.

Otherwise, there is broad agreement that people in the U.S. would be notified of the privacy practices of entities before they can start collecting, processing, and sharing personal data and would need to explicitly agree to allow this to happen. And so, it would likely be an opt-in regime for most data collection, processing, and sharing. However, people would likely get a more limited set of rights to opt out of certain practices such as data transfers to third parties, and there is a great deal of variance among the leading bills on what people can choose to avoid. Likewise, people in the U.S. would generally be able to access (i.e., request and receive), correct, and delete personal data in specified situations. Most, but not all, of the bills name the Federal Trade Commission (FTC) as the regulator of a new privacy regulatory structure with varying degrees of rulemaking power. A handful of other bills seek to create out of whole cloth a new privacy regulator along the lines of Europe’s data protection authorities.

However, if the voters of California vote for the ballot initiative to enact the “California Privacy Rights Act” (CPRA), a tightening of the “California Consumer Privacy Act” (CCPA) (AB 375) that would prevent future amendments to weaken or dilute privacy protection in California, things may change in Washington. Deprived of a means of rolling back California’s new privacy regulatory structure, as many industry stakeholders tried to do in the last legislative session with the CCPA, these interests may set their sights on a national privacy bill that would ameliorate this situation. Consequently, they may pressure Republicans and Democrats in Congress to resolve the outstanding issues on federal privacy legislation.

Moreover, stakeholders in Washington are responding to what appears to be the more urgent fire: the deathblow dealt to Privacy Shield by the European Union’s highest court. Without an agreement in place to allow multinationals to transfer personal data to the U.S. and process it there, these entities will need to cease doing so or implement alternate means under the General Data Protection Regulation (GDPR) such as standard contractual clauses (SCC) or binding corporate rules (BCR), but even these means of transfer are not without risk. European Union (EU) data protection authorities (DPAs) may soon be reviewing these agreements to ensure they comport with the Court of Justice of the European Union’s (CJEU) ruling that the U.S. lacks controls and remedies to ensure the privacy rights of EU citizens.

It bears note that another suit has been filed in the EU to test the legality of using SCCs generally to transfer data to the U.S. Austrian privacy activist Maximilian Schrems and the organization he is working with, noyb–European Center for Digital Rights, have filed 101 complaints across all 30 European Union (EU) and European Economic Area (EEA) member states, arguing that Google and Facebook are operating in violation of the CJEU’s ruling. Specifically, the organization is claiming:

A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) – despite both companies clearly falling under US surveillance laws, such as [Section 702 of the Foreign Intelligence Surveillance Act (FISA)]. Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the “Privacy Shield” a month after it was invalidated, while Facebook continues to use the “SCCs”, despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.
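
For context, the kind of check noyb describes could be approximated with a short script that fetches a page and looks for strings associated with Google Analytics or Facebook Connect. The sketch below, in Python, uses an illustrative signature list and a placeholder URL; a real audit would also need to account for tag managers and dynamically loaded scripts.

```python
import urllib.request
from typing import List

# Illustrative signatures only; an actual analysis would be more thorough.
TRACKER_SIGNATURES = {
    "Google Analytics": ["google-analytics.com", "googletagmanager.com"],
    "Facebook Connect": ["connect.facebook.net"],
}

def detect_trackers(url: str) -> List[str]:
    """Fetch a page's HTML source and report which signatures appear in it."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return [name for name, needles in TRACKER_SIGNATURES.items()
            if any(needle in html for needle in needles)]

if __name__ == "__main__":
    # Hypothetical target; replace with the page to be checked.
    print(detect_trackers("https://example.com"))
```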

Consequently, even if SCCs are used more widely as a means of transferring personal data, the CJEU could find that such agreements for transfers to the U.S. do not comport with the GDPR, eliminating another mechanism relied on by U.S. multinationals. This could lead to more companies like Facebook and Google segregating EU data and processing it in the EU or another jurisdiction for which the European Commission has issued an adequacy decision. Or, this could create pressure in Washington to reform U.S. surveillance laws and practices so that a future general data transfer agreement passes muster with the CJEU.

Still, it may serve some purpose to list the salient privacy bills and link to analysis. As mentioned, a trio of COVID-19 privacy bills were introduced a few months ago to address mainly the use of smartphones for exposure and contact tracing:

Otherwise, the major privacy bills introduced this Congress include:

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by S. Hermann & F. Richter from Pixabay