Further Reading, Other Developments, and Coming Events (23 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Here are Further Reading, Other Developments, and Coming Events.

Other Developments

  • New Zealand’s Privacy Commissioner has begun the process of implementing the new Privacy Act 2020 and has started asking for input on the codes of practice that will effectuate the rewrite of the nation’s privacy laws. The Commissioner laid out the following schedule:
    • Telecommunications Information Privacy Code and Civil Defence National Emergencies (Information Sharing) Code
      • Open: 29 July 2020 / Close: 26 August 2020
    • The Commissioner noted “[t]he new Privacy Act 2020 is set to come into force on 1 December…[and] makes several key reforms to New Zealand’s privacy law, including amendments to the information privacy principles.” The Commissioner added “[a]s a result, the six codes of practice made under the Privacy Act 1993 require replacement.”
  • Australia’s 2020 Cyber Security Strategy Industry Advisory Panel issued its report and recommendations “to provide strategic advice to support the development of Australia’s 2020 Cyber Security Strategy.” The body was convened by the Minister for Home Affairs. The panel’s “recommendations are structured around a framework of five key pillars:”
    • Deterrence: The Government should establish clear consequences for those targeting businesses and Australians. A key priority is increasing transparency on Government investigative activity, more frequent attribution and consequences applied where appropriate, and strengthening the Australian Cyber Security Centre’s (ACSC’s) ability to disrupt cyber criminals by targeting the proceeds of cybercrime.
    • Prevention: Prevention is vital and should include initiatives to help businesses and Australians remain safer online. Industry should increase its cyber security capabilities and be increasingly responsible for ensuring their digital products and services are cyber safe and secure, protecting their customers from foreseeable cyber security harm. While Australians have access to trusted goods and services, they also need to be supported with advice on how to practice safe behaviours at home and work. A clear definition is required for what constitutes critical infrastructure and systems of national significance across the public and private sectors. This should be developed with consistent, principles-based regulatory requirements to implement reasonable protection against cyber threats for both the public and private sectors.
    • Detection: There is clear need for the development of a mechanism between industry and Government for real-time sharing of threat information, beginning with critical infrastructure operators. The Government should also empower industry to automatically detect and block a greater proportion of known cyber security threats in real-time including initiatives such as ‘cleaner pipes’.
    • Resilience: We know malicious cyber activity is hitting Australians hard. The tactics and techniques used by malicious cyber actors are evolving so quickly that individuals, businesses and critical infrastructure operators in Australia are not fully able to protect themselves and their assets against every cyber security threat. As a result, it is recommended that the Government should strengthen the incident response and victim support options already in place. This should include conducting cyber security exercises in partnership with the private sector. Speed is key when it comes to recovering from cyber incidents, it is therefore proposed that critical infrastructure operators should collaborate more closely to increase preparedness for major cyber incidents.
    • Investment: The Joint Cyber Security Centre (JCSC) program is a highly valuable asset that should form a key delivery mechanism for the initiatives under the 2020 Cyber Security Strategy and should be strengthened. This should include increased resources and the establishment of a national board in partnership with industry, states and territories with an integrated governance structure underpinned by a charter outlining scope and deliverables.
  • Six of the world’s data protection authorities issued an open letter to the teleconferencing companies “to set out our concerns, and to clarify our expectations and the steps you should be taking as Video Teleconferencing (VTC) companies to mitigate the identified risks and ultimately ensure that our citizens’ personal information is safeguarded in line with public expectations and protected from any harm.” The DPAs stated that “[t]he principles in this open letter set out some of the key areas to focus on to ensure that your VTC offering is not only compliant with data protection and privacy law around the world, but also helps build the trust and confidence of your userbase.” They added that “[w]e welcome responses to this open letter from VTC companies, by 30 September 2020, to demonstrate how they are taking these principles into account in the design and delivery of their services. Responses will be shared amongst the joint signatories to this letter.” The letter was drafted and signed by:
    • The Privacy Commissioner of Canada
    • The United Kingdom Information Commissioner’s Office
    • The Office of the Australian Information Commissioner
    • The Gibraltar Regulatory Authority
    • The Office of the Privacy Commissioner for Personal Data, Hong Kong, China
    • The Federal Data Protection and Information Commissioner of Switzerland
  • The United States Office of the Comptroller of the Currency (OCC) “is reviewing its regulations on bank digital activities to ensure that its regulations continue to evolve with developments in the industry” and released an “advance notice of proposed rulemaking (ANPR) [that] solicits public input as part of this review” by 8 August 2020. The OCC explained:
    • Over the past two decades, technological advances have transformed the financial industry, including the channels through which products and services are delivered and the nature of the products and services themselves. Fewer than fifteen years ago, smart phones with slide-out keyboards and limited touchscreen capability were newsworthy.[1] Today, 49 percent of Americans bank on their phones,[2] and 85 percent of American millennials use mobile banking.[3]
    • The first person-to-person (P2P) platform for money transfer services was established in 1998.[4] Today, there are countless P2P payment options, and many Americans regularly use P2P to transfer funds.[5] In 2003, Congress authorized digital copies of checks to be made and electronically processed.[6] Today, remote deposit capture is the norm for many consumers.[7] The first cryptocurrency was created in 2009; there are now over 1,000 rival cryptocurrencies,[8] and approximately eight percent of Americans own cryptocurrency.[9] Today, artificial intelligence (AI) and machine learning, biometrics, cloud computing, big data and data analytics, and distributed ledger and blockchain technology are used commonly or are emerging in the banking sector. Even the language used to describe these innovations is evolving, with the term “digital” now commonly used to encompass electronic, mobile, and other online activities.
    • These technological developments have led to a wide range of new banking products and services delivered through innovative and more efficient channels in response to evolving customer preferences. Back-office banking operations have experienced significant changes as well. AI and machine learning play an increasing role, for example, in fraud identification, transaction monitoring, and loan underwriting and monitoring. And technology is fueling advances in payments. In addition, technological innovations are helping banks comply with the complex regulatory framework and enhance cybersecurity to more effectively protect bank and customer data and privacy. More and more banks, of all sizes and types, are entering into relationships with technology companies that enable banks and the technology companies to establish new delivery channels and business practices and develop new products to meet the needs of consumers, businesses, and communities. These relationships facilitate banks’ ability to reach new customers, better serve existing customers, and take advantage of cost efficiencies, which help them to remain competitive in a changing industry.
    • Along with the opportunities presented by these technological changes, there are new challenges and risks. Banks should adjust their business models and practices to a new financial marketplace and changing customer demands. Banks are in an environment where they compete with non-bank entities that offer products and services that historically have only been offered by banks, while ensuring that their activities are consistent with the authority provided by a banking charter and safe and sound banking practices. Banks also must comply with applicable laws and regulations, including those focused on consumer protection and Bank Secrecy Act/anti-money laundering (BSA/AML) compliance. And, importantly, advanced persistent threats require banks to pay constant and close attention to increasing cybersecurity risks.
    • Notwithstanding these challenges, the Federal banking system is well acquainted with and well positioned for change, which has been a hallmark of this system since its inception. The OCC’s support of responsible innovation throughout its history has helped facilitate the successful evolution of the industry. The OCC has long understood that the banking business is not frozen in time and agrees with the statement made over forty years ago by the U.S. Court of Appeals for the Ninth Circuit: “the powers of national banks must be construed so as to permit the use of new ways of conducting the very old business of banking.” [10] Accordingly, the OCC has sought to regulate banking in ways that allow for the responsible creation or adoption of technological advances and to establish a regulatory and supervisory framework that allows banking to evolve, while ensuring that safety and soundness and the fair treatment of customers is preserved.
  • A trio of House of Representatives Members have introduced “legislation to put American consumers in the driver’s seat by giving them clearer knowledge about the technology they are purchasing.” The “Informing Consumers about Smart Devices Act” (H.R.7583) was drafted and released by Representatives John Curtis (R-UT), Seth Moulton (D-MA), and Gus Bilirakis (R-FL). According to their press release:
    • The legislation is in response to reports about household devices listening to individuals’ conversations without their knowledge. While some manufacturers have taken steps to more clearly label their products with listening devices, this legislation would make this information more obvious to consumers without overly burdensome requirements on producers of these devices. 
    • Specifically, the bill requires the Federal Trade Commission (FTC) to work alongside industry leaders to establish guidelines for properly disclosing the potential for their products to contain audio or visual recording capabilities. To ensure this does not become an overly burdensome labeling requirement, the legislation provides manufacturers the option of requesting customized guidance from the FTC that fits within their existing marketing or branding practices in addition to permitting these disclosures pre or post-sale of their products.
  • House Oversight and Reform Committee Ranking Member James Comer (R-KY) sent Twitter CEO Jack Dorsey a letter regarding last week’s hack, asking for answers to his questions about the security practices of the platform. Government Operations Subcommittee Ranking Member Jody Hice (R-GA) and 18 other Republicans also wrote Dorsey demanding an explanation of “Twitter’s intent and use of tools labeled ‘SEARCH BLACKLIST’ and ‘TRENDS BLACKLIST’ shown in the leaked screenshots.”
  • The United States Court of Appeals for the District of Columbia Circuit has ruled against United States Agency for Global Media (USAGM) head Michael Pack and enjoined his efforts to fire the board of the Open Technology Fund (OTF). The court stated “it appears likely that the district court correctly concluded that 22 U.S.C. § 6209(d) does not grant the Chief Executive Officer of the United States Agency for Global Media, Michael Pack, with the authority to remove and replace members of OTF’s board.” Four removed members of the OTF Board had filed suit against Pack. Yesterday, District of Columbia Attorney General Karl Racine (D) filed suit against USAGM, arguing that Pack violated District of Columbia law by dissolving the OTF Board and creating a new one.
  • Three advocacy organizations have lodged their opposition to the “California Privacy Rights Act” (aka Proposition 24) that will be on the ballot this fall in California. The American Civil Liberties Union, the California Alliance for Retired Americans, and Color of Change are speaking out against the bill because “it stacks the deck in favor of big tech corporations and reduces your privacy rights.” Industry groups have also started advertising and advocating against the statute that would rewrite the “California Consumer Privacy Act” (CCPA) (AB 375).

Further Reading

  • “Facebook adds info label to Trump post about elections” – The Hill. Facebook has followed Twitter in appending information to posts of President Donald Trump that implicitly rebut his false claims about fraud and mail-in voting. Interestingly, they also appended information to posts of former Vice President Joe Biden that merely asked people to vote Trump out in November. If Facebook continues this policy, it is likely to stoke the ire of Republicans, many of whom claim that the platform and others are biased against conservative voices and viewpoints.
  • “Ajit Pai urges states to cap prison phone rates after he helped kill FCC caps” – Ars Technica. The chair of the Federal Communications Commission (FCC) is imploring states to regulate the egregious rates incarcerated people are charged for phone calls from prison. The rub here is that Pai fought against Obama-era FCC efforts to regulate these practices, claiming the agency lacked the jurisdiction to police intrastate calls. Pai pulled the plug on the agency’s efforts to defend these powers in court when he became chair.
  • “Twitter bans 7,000 QAnon accounts, limits 150,000 others as part of broad crackdown” – NBC News. Today, Twitter announced it was suspending thousands of accounts of conspiracy theorists who believe a great number of untrue things, chiefly that a “deep state” inside the United States government is working to thwart the presidency of Donald Trump. Twitter announced in a tweet: “[w]e will permanently suspend accounts Tweeting about these topics that we know are engaged in violations of our multi-account policy, coordinating abuse around individual victims, or are attempting to evade a previous suspension — something we’ve seen more of in recent weeks.” This practice, alternately called brigading or swarming, has been employed against a number of celebrities whom the group falsely accuses of pedophilia. QAnon has even been quoted or supported by Members of the Republican Party, some of whom may see Twitter’s actions as ideological.
  • “Russia and China’s vaccine hacks don’t violate rules of road for cyberspace, experts say” – The Washington Post. Contrary to the claims of the British, Canadian, and American governments, attempts by other nations to hack into COVID-19 research are not counter to the cyber norms these and other nations have been pushing as the rules of the road. The experts interviewed for the article are far more concerned about the long-term effects of President Donald Trump allowing the Central Intelligence Agency to launch cyber attacks when and how it wishes.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading and Other Developments (17 July)

The Technology Policy Update is being published daily during the week, and here are the Other Developments and Further Reading from this week.

Other Developments

  • Acting Senate Intelligence Committee Chair Marco Rubio (R-FL), Senate Foreign Relations Committee Chair Jim Risch (R-ID), and Senators Chris Coons (D-DE) and John Cornyn (R-TX) wrote Secretary of Commerce Wilbur Ross and Secretary of Defense Mike Esper “to ask that the Administration take immediate measures to bring the most advanced digital semiconductor manufacturing capabilities to the United States…[which] are critical to our American economic and national security and while our nation leads in the design of semiconductors, we rely on international manufacturing for advanced semiconductor fabrication.” This letter follows the Trump Administration’s May announcement that the Taiwan Semiconductor Manufacturing Corporation (TSMC) agreed to build a $12 billion plant in Arizona. It also bears noting that one of the amendments pending to the “National Defense Authorization Act for Fiscal Year 2021” (S.4049) would establish a grants program to stimulate semiconductor manufacturing in the US.
  • Senators Mark R. Warner (D-VA), Mazie K. Hirono (D-HI) and Bob Menendez (D-NJ) sent a letter to Facebook “regarding its failure to prevent the propagation of white supremacist groups online and its role in providing such groups with the organizational infrastructure and reach needed to expand.” They also “criticized Facebook for being unable or unwilling to enforce its own Community Standards and purge white supremacist and other violent extremist content from the site” and posed “a series of questions regarding Facebook’s policies and procedures against hate speech, violence, white supremacy and the amplification of extremist content.”
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) published the Pipeline Cyber Risk Mitigation Infographic that was “[d]eveloped in coordination with the Transportation Security Administration (TSA)…[that] outlines activities that pipeline owners/operators can undertake to improve their ability to prepare for, respond to, and mitigate against malicious cyber threats.”
  • Representative Kendra Horn (D-OK) and 10 other Democrats introduced legislation “requiring the U.S. government to identify, analyze, and combat efforts by the Chinese government to exploit the COVID-19 pandemic” that was endorsed by “[t]he broader Blue Dog Coalition” according to their press release. The “Preventing China from Exploiting COVID-19 Act” (H.R.7484) “requires the Director of National Intelligence—in coordination with the Secretaries of Defense, State, and Homeland Security—to prepare an assessment of the different ways in which the Chinese government has exploited or could exploit the pandemic, which originated in China, in order to advance China’s interests and to undermine the interests of the United States, its allies, and the rules-based international order.” Horn and her cosponsors stated “[t]he assessment must be provided to Congress within 90 days and posted in unclassified form on the DNI’s website.”
  • The Supreme Court of Canada upheld the “Genetic Non-Discrimination Act” and denied a challenge to the legality of the statute brought by the government of Quebec, the Attorney General of Canada, and others. The court found:
    • The pith and substance of the challenged provisions is to protect individuals’ control over their detailed personal information disclosed by genetic tests, in the broad areas of contracting and the provision of goods and services, in order to address Canadians’ fears that their genetic test results will be used against them and to prevent discrimination based on that information. This matter is properly classified within Parliament’s power over criminal law. The provisions are supported by a criminal law purpose because they respond to a threat of harm to several overlapping public interests traditionally protected by the criminal law — autonomy, privacy, equality and public health.
  • The U.S.-China Economic and Security Review Commission published a report “analyzing the evolution of U.S. multinational enterprises (MNE) operations in China from 2000 to 2017.” The Commission found MNE’s operations in the People’s Republic of China “may indirectly erode the United States’ domestic industrial competitiveness and technological leadership relative to China” and “as U.S. MNE activity in China increasingly focuses on the production of high-end technologies, the risk that U.S. firms are unwittingly enabling China to achieve its industrial policy and military development objectives rises.”
  • The Federal Communications Commission (FCC) and Huawei filed their final briefs in their lawsuit before the United States Court of Appeals for the Fifth Circuit arising from the FCC’s designation of Huawei as a “covered company” for purposes of a rule barring the use of Universal Service Funds (USF) “to purchase or obtain any equipment or services produced or provided by a covered company posing a national security threat to the integrity of communications networks or the communications supply chain.” Huawei claimed in its brief that “[t]he rulemaking and “initial designation” rest on the FCC’s national security judgments…[b]ut such judgments fall far afield of the FCC’s statutory authority and competence.” Huawei also argued “[t]he USF rule, moreover, contravenes the Administrative Procedure Act (APA) and the Due Process Clause.” The FCC responded in its filing that “Huawei challenges the FCC’s decision to exclude carriers whose networks are vulnerable to foreign interference, contending that the FCC has neither statutory nor constitutional authority to make policy judgments involving “national security”…[but] [t]hese arguments are premature, as Huawei has not yet been injured by the Order.” The FCC added “Huawei’s claim that the Communications Act textually commits all policy determinations with national security implications to the President is demonstrably false.”
  • European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski released his Strategy for 2020-2024, “which will focus on Digital Solidarity.” Wiewiórowski explained that “three core pillars of the EDPS strategy outline the guiding actions and objectives for the organisation to the end of 2024:
    • Foresight: The EDPS will continue to monitor legal, social and technological advances around the world and engage with experts, specialists and data protection authorities to inform its work.
    • Action: To strengthen the EDPS’ supervision, enforcement and advisory roles the EDPS will promote coherence in the activities of enforcement bodies in the EU and develop tools to assist the EU institutions, bodies and agencies to maintain the highest standards in data protection.
    • Solidarity: While promoting digital justice and privacy for all, the EDPS will also enforce responsible and sustainable data processing, to positively impact individuals and maximise societal benefits in a just and fair way.
  • Facebook released a Civil Rights Audit; the “investigation into Facebook’s policies and practices began in 2018 at the behest and encouragement of the civil rights community and some members of Congress.” Those charged with conducting the audit explained that they “vigorously advocated for more and would have liked to see the company go further to address civil rights concerns in a host of areas that are described in detail in the report,” including but not limited to:
    • A stronger interpretation of its voter suppression policies — an interpretation that makes those policies effective against voter suppression and prohibits content like the Trump voting posts — and more robust and more consistent enforcement of those policies leading up to the US 2020 election.
    • More visible and consistent prioritization of civil rights in company decision-making overall.
    • More resources invested to study and address organized hate against Muslims, Jews and other targeted groups on the platform.
    • A commitment to go beyond banning explicit references to white separatism and white nationalism to also prohibit express praise, support and representation of white separatism and white nationalism even where the terms themselves are not used.
    • More concrete action and specific commitments to take steps to address concerns about algorithmic bias or discrimination.
    • They added that “[t]his report outlines a number of positive and consequential steps that the company has taken, but at this point in history, the Auditors are concerned that those gains could be obscured by the vexing and heartbreaking decisions Facebook has made that represent significant setbacks for civil rights.”
  • The National Security Commission on Artificial Intelligence (NSCAI) released a white paper titled “The Role of AI Technology in Pandemic Response and Preparedness” that “outlines a series of investments and initiatives that the United States must undertake to realize the full potential of AI to secure our nation against pandemics.” NSCAI also noted its two previous white papers.
  • Secretary of Defense Mark Esper announced that Chief Technology Officer Michael J.K. Kratsios has “been designated to serve as Acting Under Secretary of Defense for Research and Engineering” even though he does not have a degree in science; the last Under Secretary held a PhD. Kratsios previously worked for venture capitalist Peter Thiel, who backed President Donald Trump when he ran for office in 2016.
  • The United States Department of Transportation’s Federal Railroad Administration (FRA) issued research “to develop a cyber security risk analysis methodology for communications-based connected railroad technologies…[and] [t]he use-case-specific implementation of the methodology can identify potential cyber attack threats, system vulnerabilities, and consequences of the attack,” with risk assessment and identification of promising risk mitigation strategies.
  • In a blog post, a National Institute of Standards and Technology (NIST) economist asserted cybercrime may be having a much larger impact on the United States’ economy than previously thought:
    • In a recent NIST report, I looked at losses in the U.S. manufacturing industry due to cybercrime by examining an underutilized dataset from the Bureau of Justice Statistics, which is the most statistically reliable data that I can find. I also extended this work to look at the losses in all U.S. industries. The data is from a 2005 survey of 36,000 businesses with 8,079 responses, which is also by far the largest sample that I could identify for examining aggregated U.S. cybercrime losses. Using this data, combined with methods for examining uncertainty in data, I extrapolated upper and lower bounds, putting 2016 U.S. manufacturing losses to be between 0.4% and 1.7% of manufacturing value-added or between $8.3 billion and $36.3 billion. The losses for all industries are between 0.9% and 4.1% of total U.S. gross domestic product (GDP), or between $167.9 billion and $770.0 billion. The lower bound is 40% higher than the widely cited, but largely unconfirmed, estimates from McAfee.
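The NIST extrapolation above is, at bottom, simple percentage arithmetic, and the dollar bounds can be roughly reproduced from the quoted percentages. A minimal sketch follows; note that the 2016 totals for U.S. manufacturing value-added and GDP are approximate figures assumed here for illustration, not numbers taken from the NIST report.

```python
# Rough reproduction of the NIST loss bounds from the quoted percentages.
# The base totals below are approximate 2016 figures assumed for illustration.
MFG_VALUE_ADDED = 2.1e12   # assumed approx. 2016 U.S. manufacturing value-added
GDP = 18.7e12              # assumed approx. 2016 U.S. gross domestic product

def loss_bounds(base: float, lo_pct: float, hi_pct: float) -> tuple[float, float]:
    """Return (low, high) dollar losses given percentage bounds of a base total."""
    return base * lo_pct / 100, base * hi_pct / 100

# Manufacturing: 0.4%-1.7% of value-added; all industries: 0.9%-4.1% of GDP.
mfg_lo, mfg_hi = loss_bounds(MFG_VALUE_ADDED, 0.4, 1.7)
all_lo, all_hi = loss_bounds(GDP, 0.9, 4.1)

print(f"Manufacturing: ${mfg_lo/1e9:.1f}B to ${mfg_hi/1e9:.1f}B")
print(f"All industries: ${all_lo/1e9:.1f}B to ${all_hi/1e9:.1f}B")
```

With these assumed totals, the output lands close to the report's quoted ranges ($8.3B-$36.3B and $167.9B-$770.0B), which differ slightly because NIST worked from exact value-added and GDP figures.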
  • The Government Accountability Office (GAO) advised the Federal Communications Commission (FCC) that it needs a comprehensive strategy for implementing 5G across the United States. The GAO concluded:
    • FCC has taken a number of actions regarding 5G deployment, but it has not clearly developed specific and measurable performance goals and related measures–with the involvement of relevant stakeholders, including National Telecommunications and Information Administration (NTIA)–to manage the spectrum demands associated with 5G deployment. This makes FCC unable to demonstrate whether the progress being made in freeing up spectrum is achieving any specific goals, particularly as it relates to congested mid-band spectrum. Additionally, without having established specific and measurable performance goals with related strategies and measures for mitigating 5G’s potential effects on the digital divide, FCC will not be able to assess the extent to which its actions are addressing the digital divide or what actions would best help all Americans obtain access to wireless networks.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) issued “Time Guidance for Network Operators, Chief Information Officers, and Chief Information Security Officers” “to inform public and private sector organizations, educational institutions, and government agencies on time resilience and security practices in enterprise networks and systems…[and] to address gaps in available time testing practices, increasing awareness of time-related system issues and the linkage between time and cybersecurity.”
  • Fifteen Democratic Senators sent a letter to the Department of Defense, Office of the Director of National Intelligence (ODNI), Department of Homeland Security (DHS), Federal Bureau of Investigation (FBI), and U.S. Cyber Command, urging them “to take additional measures to fight influence campaigns aimed at disenfranchising voters, especially voters of color, ahead of the 2020 election.” They called on these agencies to ensure that:
    • The American people and political candidates are promptly informed about the targeting of our political processes by foreign malign actors, and that the public is provided regular periodic updates about such efforts leading up to the general election.
    • Members of Congress and congressional staff are appropriately and adequately briefed on continued findings and analysis involving election related foreign disinformation campaigns and the work of each agency and department to combat these campaigns.
    • Findings and analysis involving election related foreign disinformation campaigns are shared with civil society organizations and independent researchers to the maximum extent which is appropriate and permissible.
    • Secretary Esper and Director Ratcliffe implement a social media information sharing and analysis center (ISAC) to detect and counter information warfare campaigns across social media platforms as authorized by section 5323 of the Fiscal Year 2020 National Defense Authorization Act.
    • Director Ratcliffe implement the Foreign Malign Influence Response Center to coordinate a whole of government approach to combatting foreign malign influence campaigns as authorized by section 5322 of the Fiscal Year 2020 National Defense Authorization Act.
  • The Information Technology and Innovation Foundation (ITIF) unveiled an issue brief “Why New Calls to Subvert Commercial Encryption Are Unjustified” arguing “that government efforts to subvert encryption would negatively impact individuals and businesses.” ITIF offered these “key takeaways:”
    • Encryption gives individuals and organizations the means to protect the confidentiality of their data, but it has interfered with law enforcement’s ability to prevent and investigate crimes and foreign threats.
    • Technological advances have long frustrated some in the law enforcement community, giving rise to multiple efforts to subvert commercial use of encryption, from the Clipper Chip in the 1990s to the San Bernardino case two decades later.
    • Having failed in these prior attempts to circumvent encryption, some law enforcement officials are now calling on Congress to invoke a “nuclear option”: legislation banning “warrant-proof” encryption.
    • This represents an extreme and unjustified measure that would do little to take encryption out of the hands of bad actors, but it would make commercial products less secure for ordinary consumers and businesses and damage U.S. competitiveness.
  • The White House released an executive order in which President Donald Trump determined “that the Special Administrative Region of Hong Kong (Hong Kong) is no longer sufficiently autonomous to justify differential treatment in relation to the People’s Republic of China (PRC or China) under the particular United States laws and provisions thereof set out in this order.” Trump further determined “the situation with respect to Hong Kong, including recent actions taken by the PRC to fundamentally undermine Hong Kong’s autonomy, constitutes an unusual and extraordinary threat, which has its source in substantial part outside the United States, to the national security, foreign policy, and economy of the United States…[and] I hereby declare a national emergency with respect to that threat.” The executive order would continue the Administration’s process of changing policy to ensure Hong Kong is treated the same as the PRC.
  • President Donald Trump also signed a bill passed in response to the People’s Republic of China (PRC) passing legislation the United States and others claim will strip Hong Kong of the protections the PRC agreed to maintain for 50 years after the United Kingdom (UK) handed over the city. The “Hong Kong Autonomy Act” “requires the imposition of sanctions on Chinese individuals and banks who are included in an annual State Department list found to be subverting Hong Kong’s autonomy” according to the bill’s sponsor Representative Brad Sherman (D-CA).
  • Representative Stephen Lynch, who chairs the House Oversight and Reform Committee’s National Security Subcommittee, sent letters to Apple and Google “after the Office of the Director of National Intelligence (ODNI) and the Federal Bureau of Investigation (FBI) confirmed that mobile applications developed, operated, or owned by foreign entities, including China and Russia, could potentially pose a national security risk to American citizens and the United States” according to his press release. He noted that in letters sent to the Subcommittee, the technology companies stated the following:
    • Apple confirmed that it does not require developers to submit “information on where user data (if any such data is collected by the developer’s app) will be housed” and that it “does not decide what user data a third-party app can access, the user does.”
    • Google stated that it does “not require developers to provide the countries in which their mobile applications will house user data” and acknowledged that “some developers, especially those with a global user base, may store data in multiple countries.”
    • Lynch is seeking “commitments from Apple and Google to require information from application developers about where user data is stored, and to make users aware of that information prior to downloading the application on their mobile devices.”
  • Minnesota Attorney General Keith Ellison announced a settlement with Frontier Communications that “concludes the three major investigations and lawsuits that the Attorney General’s office launched into Minnesota’s major telecoms providers for deceptive, misleading, and fraudulent practices.” The Office of the Attorney General (OAG) stated:
    • Based on its investigation, the Attorney General’s Office alleged that Frontier used a variety of deceptive and misleading practices to overcharge its customers, such as: billing customers more than they were quoted by Frontier’s agents; failing to disclose fees and surcharges in its sales presentations and advertising materials; and billing customers for services that were not delivered.
    • The OAG “also alleged that Frontier sold Minnesotans expensive internet services with so-called “maximum speed” ratings that were not attainable, and that Frontier improperly advertised its service as “reliable,” when in fact it did not provide enough bandwidth for customers to consistently receive their expected service.”
  • The European Data Protection Board (EDPB) issued guidelines “on the criteria of the Right to be Forgotten in the search engines cases under the GDPR” that “focuses solely on processing by search engine providers and delisting requests submitted by data subjects,” even though Article 17 of the General Data Protection Regulation applies to all data controllers. The EDPB explained “[t]his paper is divided into two topics:
    • The first topic concerns the grounds a data subject can rely on for a delisting request sent to a search engine provider pursuant to Article 17.1 GDPR.
    • The second topic concerns the exceptions to the Right to request delisting according to Article 17.3 GDPR.
  • The Australian Competition & Consumer Commission (ACCC) “is seeking views on draft Rules and accompanying draft Privacy Impact Assessment that authorise third parties who are accredited at the ‘unrestricted’ level to collect Consumer Data Right (CDR) data on behalf of another accredited person.” The ACCC explained “[t]his will allow accredited persons to utilise other accredited parties to collect CDR data and provide other services that facilitate the provision of goods and services to consumers.” In a March explanatory statement, the ACCC stated “[t]he CDR is an economy-wide reform that will apply sector-by-sector, starting with the banking sector…[and] [t]he objective of the CDR is to provide individual and business consumers (consumers) with the ability to efficiently and conveniently access specified data held about them by businesses (data holders), and to authorise the secure disclosure of that data to third parties (accredited data recipients) or to themselves.” The ACCC noted “[t]he CDR is regulated by both the ACCC and the Office of the Australian Information Commissioner (OAIC) as it concerns both competition and consumer matters as well as the privacy and confidentiality of consumer data.” Input is due by 20 July.
  • Office of the Inspector General (OIG) for the Department of the Interior (Interior) found that even though the agency spends $1.4 billion annually on cybersecurity “[g]uarding against increasing cybersecurity threats” remains one of Interior’s top challenges. The OIG asserted Interior “continues to struggle to implement an enterprise information technology (IT) security program that balances compliance, cost, and risk while enabling bureaus to meet their diverse missions.”
  • In a summary of its larger investigation into “Security over Information Technology Peripheral Devices at Select Office of Science Locations,” the Department of Energy’s Office of the Inspector General (OIG) stated that it “identified weaknesses related to access controls and configuration settings” for peripheral devices (e.g., thumb drives, printers, scanners, and other connected devices) “similar in type to those identified in prior evaluations of the Department’s unclassified cybersecurity program.”
  • The House Homeland Security Committee’s Cybersecurity, Infrastructure Protection, and Innovation Subcommittee Ranking Member John Katko (R-NY) released “a comprehensive national cybersecurity improvement package” according to his press release, consisting of these bills:
    • The “Cybersecurity and Infrastructure Security Agency Director and Assistant Directors Act:”  This bipartisan measure takes steps to improve guidance and long-term strategic planning by stabilizing the CISA Director and Assistant Directors positions. Specifically, the bill:
      • Creates a 5-year term for the CISA Director, with a limit of 2 terms. The term of office for the current Director begins on the date the Director began to serve.
      • Elevates the Director to the equivalent of a Deputy Secretary and Military Service Secretaries.
      • Depoliticizes the Assistant Director positions, appointed by the Secretary of the Department of Homeland Security (DHS), categorizing them as career public servants. 
    • The “Strengthening the Cybersecurity and Infrastructure Security Agency Act of 2020:” This measure mandates a comprehensive review of CISA in an effort to strengthen its operations, improve coordination, and increase oversight of the agency. Specifically, the bill:
      • Requires CISA to review how additional appropriations could be used to support programs for national risk management, federal information systems management, and public-private cybersecurity and integration. It also requires a review of workforce structure and current facilities and projected needs. 
      • Mandates that CISA provide a report to the House and Senate Homeland Committees within one year of enactment. CISA must also provide a report and recommendations to GSA on facility needs.
      • Requires GSA to provide a review to the Administration and the House and Senate Committees on CISA facility needs within 30 days of the Congressional report.
    • The “CISA Public-Private Talent Exchange Act:” This bill requires CISA to create a public-private workforce program to facilitate the exchange of ideas, strategies, and concepts between federal and private sector cybersecurity professionals. Specifically, the bill:
      • Establishes a public-private cyber exchange program allowing government and industry professionals to work in one another’s field.
      • Expands existing private outreach and partnership efforts. 
  • The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) is ordering United States federal civilian agencies “to apply the July 2020 Security Update for Windows Servers running DNS (CVE-2020-1350), or the temporary registry-based workaround if patching is not possible within 24 hours.” CISA stated “[t]he software update addresses a significant vulnerability where a remote attacker could exploit it to take control of an affected system and run arbitrary code in the context of the Local System Account.” CISA Director Christopher Krebs explained “due to the wide prevalence of Windows Server in civilian Executive Branch agencies, I’ve determined that immediate action is necessary, and federal departments and agencies need to take this remote code execution vulnerability in Windows Server’s Domain Name System (DNS) particularly seriously.”
  • The United States (US) Department of State has imposed “visa restrictions on certain employees of Chinese technology companies that provide material support to regimes engaging in human rights abuses globally,” a measure aimed at Huawei. In its statement, the Department stated “Companies impacted by today’s action include Huawei, an arm of the Chinese Communist Party’s (CCP) surveillance state that censors political dissidents and enables mass internment camps in Xinjiang and the indentured servitude of its population shipped all over China.” The Department claimed “[c]ertain Huawei employees provide material support to the CCP regime that commits human rights abuses.”
  • Earlier in the month, the US Departments of State, Treasury, Commerce, and Homeland Security issued an “advisory to highlight the harsh repression in Xinjiang.” The agencies explained:
    • Businesses, individuals, and other persons, including but not limited to academic institutions, research service providers, and investors (hereafter “businesses and individuals”), that choose to operate in Xinjiang or engage with entities that use labor from Xinjiang elsewhere in China should be aware of reputational, economic, and, in certain instances, legal, risks associated with certain types of involvement with entities that engage in human rights abuses, which could include Withhold Release Orders (WROs), civil or criminal investigations, and export controls.
  • The United Kingdom’s National Cyber Security Centre (NCSC), Canada’s Communications Security Establishment (CSE), the United States’ National Security Agency (NSA), and the United States’ Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) issued a joint advisory on a Russian hacking organization that has “targeted various organisations involved in COVID-19 vaccine development in Canada, the United States and the United Kingdom, highly likely with the intention of stealing information and intellectual property relating to the development and testing of COVID-19 vaccines.” The agencies named APT29 (also known as ‘the Dukes’ or ‘Cozy Bear’), “a cyber espionage group, almost certainly part of the Russian intelligence services,” as the culprit behind “custom malware known as ‘WellMess’ and ‘WellMail.’”
    • This alert follows May advisories issued by Australia, the US, and the UK on hacking threats related to the pandemic. Australia’s Department of Foreign Affairs and Trade (DFAT) and the Australian Cyber Security Centre (ACSC) issued “Advisory 2020-009: Advanced Persistent Threat (APT) actors targeting Australian health sector organisations and COVID-19 essential services” that asserted “APT groups may be seeking information and intellectual property relating to vaccine development, treatments, research and responses to the outbreak as this information is now of higher value and priority globally.” CISA and NCSC issued a joint advisory for the healthcare sector, especially companies and entities engaged in fighting COVID-19. The agencies stated that they have evidence that Advanced Persistent Threat (APT) groups “are exploiting the COVID-19 pandemic as part of their cyber operations.” In an unclassified public service announcement, the Federal Bureau of Investigation (FBI) and CISA named the People’s Republic of China as a nation waging a cyber campaign against U.S. COVID-19 researchers. The agencies stated they “are issuing this announcement to raise awareness of the threat to COVID-19-related research.”
  • The National Initiative for Cybersecurity Education (NICE) has released a draft National Institute of Standards and Technology (NIST) Special Publication (SP) for comment, due by 28 August. Draft NIST SP 800-181 Revision 1, Workforce Framework for Cybersecurity (NICE Framework), features several updates, including:
    • an updated title to be more inclusive of the variety of workers who perform cybersecurity work,
    • definition and normalization of key terms,
    • principles that facilitate agility, flexibility, interoperability, and modularity,
    • introduction of competencies.
  • Representatives Glenn Thompson (R-PA), Collin Peterson (D-MN), and James Comer (R-KY) sent a letter to the Federal Communications Commission (FCC) “questioning the Commission’s April 20, 2020 Order granting Ligado’s application to deploy a terrestrial nationwide network to provide 5G services.”
  • The European Commission (EC) is asking for feedback on part of its recently released data strategy by 31 July. The EC stated it is aiming “to create a single market for data, where data from public bodies, business and citizens can be used safely and fairly for the common good…[and] [t]his initiative will draw up rules for common European data spaces (covering areas like the environment, energy and agriculture) to:
    • make better use of publicly held data for research for the common good
    • support voluntary data sharing by individuals
    • set up structures to enable key organisations to share data.
  • The United Kingdom’s Parliament is asking for feedback on its legislative proposal to regulate Internet of Things (IoT) devices. The Department for Digital, Culture, Media & Sport explained “the obligations within the government’s proposed legislative framework would fall mainly on the manufacturer if they are based in the UK, or if not based in the UK, on their UK representative.” The Department is also “developing an enforcement approach with relevant stakeholders to identify an appropriate enforcement body to be granted day to day responsibility and operational control of monitoring compliance with the legislation.” The Department also touted the European Telecommunications Standards Institute’s (ETSI) publication of a “security baseline for Internet-connected consumer devices” that “provides a basis for future Internet of Things product certification schemes.”
  • Facebook issued a white paper, titled “CHARTING A WAY FORWARD: Communicating Towards People-Centered and Accountable Design About Privacy,” in which the company states its desire to be involved in shaping a United States privacy law (See below for an article on this). Facebook concluded:
    • Facebook recognizes the responsibility we have to make sure that people are informed about the data that we collect, use, and share.
    • That’s why we support globally consistent comprehensive privacy laws and regulations that, among other things, establish people’s basic rights to be informed about how their information is collected, used, and shared, and impose obligations for organizations to do the same, including the obligation to build internal processes that maintain accountability.
    • As improvements to technology challenge historic approaches to effective communications with people about privacy, companies and regulators need to keep up with changing times.
    • To serve the needs of a global community, on both the platforms that exist now and those that are yet to be developed, we want to work with regulators, companies, and other interested third parties to develop new ways of informing people about their data, empowering them to make meaningful choices, and holding ourselves accountable.
    • While we don’t have all the answers, there are many opportunities for businesses and regulators to embrace modern design methods, new opportunities for better collaboration, and innovative ways to hold organizations accountable.
  • Four Democratic Senators sent Facebook a letter “about reports that Facebook has created fact-checking exemptions for people and organizations who spread disinformation about the climate crisis on its social media platform” following a New York Times article this week on the company’s handling of climate disinformation. Senators Elizabeth Warren (D-MA), Tom Carper (D-DE), Sheldon Whitehouse (D-RI) and Brian Schatz (D-HI) argued “[i]f Facebook is truly “committed to fighting the spread of false news on Facebook and Instagram,” the company must immediately acknowledge in its fact-checking process that the climate crisis is not a matter of opinion and act to close loopholes that allow climate disinformation to spread on its platform.” They posed a series of questions to Facebook CEO Mark Zuckerberg on these practices, requesting answers by 31 July.
  • A Canadian court has found that the Canadian Security Intelligence Service (CSIS) “admittedly collected information in a manner that is contrary to this foundational commitment and then relied on that information in applying for warrants under the Canadian Security Intelligence Service Act, RSC 1985, c C-23 [CSIS Act]” according to a court summary of its redacted decision. The court further stated “[t]he Service and the Attorney General also admittedly failed to disclose to the Court the Service’s reliance on information that was likely collected unlawfully when seeking warrants, thereby breaching the duty of candour owed to the Court.” The court added “[t]his is not the first time this Court has been faced with a breach of candour involving the Service…[and] [t]he events underpinning this most recent breach were unfolding as recommendations were being implemented by the Service and the Attorney General to address previously identified candour concerns.” CSIS was found to have illegally collected and used metadata in a 2016 case concerning its conduct between 2006 and 2016. In response to the most recent ruling, CSIS is vowing to implement a range of reforms. The National Security and Intelligence Review Agency (NSIRA) is pledging the same.
  • The United Kingdom’s National Police Chiefs’ Council (NPCC) announced the withdrawal of “[t]he ‘Digital device extraction – information for complainants and witnesses’ form and ‘Digital Processing Notice’ (‘the relevant forms’) circulated to forces in February 2019 [that] are not sufficient for their intended purpose.” In mid-June, the UK’s data protection authority, the Information Commissioner’s Office (ICO) unveiled its “finding that police data extraction practices vary across the country, with excessive amounts of personal data often being extracted, stored, and made available to others, without an appropriate basis in existing data protection law.” This withdrawal was also due, in part, to a late June Court of Appeal decision.  
  • A range of public interest and advocacy organizations sent a letter to Speaker of the House Nancy Pelosi (D-CA) and House Minority Leader Kevin McCarthy (R-CA) noting “there are intense efforts underway to do exactly that, via current language in the House and Senate versions of the FY2021 National Defense Authorization Act (NDAA) that ultimately seek to reverse the FCC’s recent bipartisan and unanimous approval of Ligado Networks’ regulatory plans.” They urged them to “not endorse efforts by the Department of Defense and its allies to veto commercial spectrum authorizations…[and] [t]he FCC has proven itself to be the expert agency on resolving spectrum disputes based on science and engineering and should be allowed to do the job Congress authorized it to do.” In late April, the FCC’s “decision authorize[d] Ligado to deploy a low-power terrestrial nationwide network in the 1526-1536 MHz, 1627.5-1637.5 MHz, and 1646.5-1656.5 MHz bands that will primarily support Internet of Things (IoT) services.” The agency argued the order “provides regulatory certainty to Ligado, ensures adjacent band operations, including Global Positioning System (GPS), are sufficiently protected from harmful interference, and promotes more efficient and effective use of [the U.S.’s] spectrum resources by making available additional spectrum for advanced wireless services, including 5G.”
  • The European Data Protection Supervisor (EDPS) rendered his opinion on the European Commission’s White Paper on Artificial Intelligence: a European approach to excellence and trust and recommended the following for the European Union’s (EU) regulation of artificial intelligence (AI):
    • applies both to EU Member States and to EU institutions, offices, bodies and agencies;
    • is designed to protect from any negative impact, not only on individuals, but also on communities and society as a whole;
    • proposes a more robust and nuanced risk classification scheme, ensuring any significant potential harm posed by AI applications is matched by appropriate mitigating measures;
    • includes an impact assessment clearly defining the regulatory gaps that it intends to fill;
    • avoids overlap of different supervisory authorities and includes a cooperation mechanism.
    • Regarding remote biometric identification, the EDPS supports the idea of a moratorium on the deployment, in the EU, of automated recognition in public spaces of human features, not only of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, so that an informed and democratic debate can take place and until the moment when the EU and Member States have all the appropriate safeguards, including a comprehensive legal framework in place to guarantee the proportionality of the respective technologies and systems for the specific use case.
  • The Bundesamt für Verfassungsschutz (BfV), Germany’s domestic security agency, released a summary of its annual report in which it claimed:
    • The Russian Federation, the People’s Republic of China, the Islamic Republic of Iran and the Republic of Turkey remain the main countries engaged in espionage activities and trying to exert influence on Germany.
    • The ongoing digital transformation and the increasingly networked nature of our society increases the potential for cyber attacks, worsening the threat of cyber espionage and cyber sabotage.
    • The intelligence services of the Russian Federation and the People’s Republic of China in particular carry out cyber espionage activities against German agencies. One of their tasks is to boost their own economies with the help of information gathered by the intelligence services. This type of information-gathering campaign severely threatens the success and development opportunities of German companies.
    • To counteract this threat, Germany has a comprehensive cyber security architecture in place, which is operated by a number of different authorities. The BfV plays a major role in investigating and defending against cyber threats by detecting attacks, attributing them to specific attackers, and using the knowledge gained from this to draw up prevention strategies. The National Cyber Response Centre, in which the BfV plays a key role, was set up to consolidate the co-operation between the competent agencies. The National Cyber Response Centre aims to optimise the exchange of information between state agencies and to improve the co-ordination of protective and defensive measures against potential IT incidents.
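A technical note on the Windows DNS emergency directive above: for administrators who could not patch within CISA’s 24-hour window, Microsoft’s advisory for CVE-2020-1350 described a registry-based mitigation along the following lines. This is a sketch based on Microsoft’s published guidance at the time, not CISA’s directive text; verify against current vendor documentation before use.

```shell
:: Limit the maximum inbound TCP DNS packet size so the overflow condition
:: in CVE-2020-1350 cannot be triggered (per Microsoft's interim guidance;
:: remove this value after applying the July 2020 security update).
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters" ^
  /v TcpReceivePacketSize /t REG_DWORD /d 0xFF00 /f

:: Restart the DNS Server service for the change to take effect.
net stop DNS && net start DNS
```

The workaround only mitigates the TCP attack vector and was intended as a stopgap until the security update itself could be applied.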

Further Reading

  • “Trump confirms cyberattack on Russian trolls to deter them during 2018 midterms” – The Washington Post. In an interview with former George W. Bush speechwriter Marc Thiessen, President Donald Trump confirmed he ordered a widely reported retaliatory attack on the Russian Federation’s Internet Research Agency as a means of preventing interference during the 2018 mid-term election. Trump claimed this attack was the first action the United States took against Russian hacking, even though his predecessor warned Russian President Vladimir Putin to stop such activities and imposed sanctions at the end of 2016. The timing of Trump’s revelation is interesting given the ongoing furor over reports of Russian bounties paid to Taliban fighters for killing Americans, which the Trump Administration may have known of but did little or nothing to stop.
  • “Germany proposes first-ever use of EU cyber sanctions over Russia hacking” – Deutsche Welle. Germany is looking to use the European Union’s (EU) cyber sanctions powers against Russia for the alleged 2015 exfiltration of 16 GB of data from the Bundestag’s systems, including from Chancellor Angela Merkel’s office. Germany alleges that Fancy Bear (aka APT28) and Russia’s military intelligence service, the GRU, carried out the attack. Germany has circulated its case for sanctions to other EU nations and EU leadership. In 2017, the European Council declared “[t]he EU diplomatic response to malicious cyber activities will make full use of measures within the Common Foreign and Security Policy, including, if necessary, restrictive measures…[and] [a] joint EU response to malicious cyber activities would be proportionate to the scope, scale, duration, intensity, complexity, sophistication and impact of the cyber activity.”
  • “Wyden Plans Law to Stop Cops From Buying Data That Would Need a Warrant” – VICE. Following on a number of reports that federal, state, and local law enforcement agencies are essentially sidestepping the Fourth Amendment through buying location and other data from people’s smartphones, Senator Ron Wyden (D-OR) is going to draft legislation that would seemingly close what he, and other civil libertarians, are calling a loophole to the warrant requirement.
  • “Amazon Backtracks From Demand That Employees Delete TikTok” – The New York Times. Amazon first instructed its employees on 11 July to remove ByteDance’s app, TikTok, from company devices and then reversed course the same day, claiming the email had been sent out erroneously. The strange episode capped another tumultuous week for ByteDance, as the Trump Administration is intensifying pressure on the company, which officials claim is subject to the laws of the People’s Republic of China (PRC) and hence must share information with the government in Beijing. ByteDance counters that the app is marketed in the United States through a subsidiary not subject to PRC law. ByteDance also said it would no longer offer the app in Hong Kong after a change in PRC law extended the PRC’s reach into the former British colony. TikTok was also recently banned in India as part of a larger struggle between India and the PRC. Additionally, the Democratic National Committee warned staff about using the app this week.
  • “Is it time to delete TikTok? A guide to the rumors and the real privacy risks.” – The Washington Post. A columnist and a security specialist found ByteDance’s app vacuums up information from users, but so do Facebook and other similar apps. They scrutinized TikTok’s privacy policy and where the data went, and they could not say with certainty that it goes to and stays on servers in the US and Singapore.
  • “California investigating Google for potential antitrust violations” – Politico. California Attorney General Xavier Becerra is going to conduct his own investigation of Google, separate from the investigation of the company’s advertising practices being conducted by virtually every other state in the United States. It was unclear why Becerra opted against joining the larger probe launched in September 2019. Of course, the Trump Administration’s Department of Justice is also investigating Google and could file suit as early as this month.
  • “How May Google Fight an Antitrust Case? Look at This Little-Noticed Paper” – The New York Times. In a filing with the Australian Competition and Consumer Commission (ACCC), Google claimed it does not control the online advertising market, pointing to a number of indicia that argue against a monopolistic situation. The company is likely to make the same case to the United States government in its antitrust inquiry. However, similar arguments did not gain traction before the European Commission, which levied a €1.49 billion fine for “breaching EU antitrust rules” in March 2019.
  • “Who Gets the Banhammer Now?” – The New York Times. This article examines possible motives for the recent wave of action by social media platforms to police a fraction of the extreme and hateful speech activists and others have been asking them to take down for years. This piece argues that social media platforms are businesses and operate as such, and that expecting them to behave as de facto public squares dedicated to civil political and societal discourse misunderstands how we ended up where we are.
  • “TikTok goes tit-for-tat in appeal to MPs: ‘stop political football’” – The Australian. ByteDance is lobbying hard in Canberra to talk Members of Parliament out of possibly banning TikTok, as the United States has said it is considering. While ByteDance claims the data collected on users in Australia is sent to the US or Singapore, some experts argue that merely maintaining and improving the app would necessarily result in some non-People’s Republic of China (PRC) user data making its way back to the PRC. As Australia’s relationship with the PRC has grown more fraught, with allegations that PRC hackers infiltrated Parliament and the Prime Minister all but saying PRC hackers were targeting hospitals and medical facilities, the government in Canberra could follow India’s lead and ban the app.
  • “Calls for inquiry over claims Catalan lawmaker’s phone was targeted” – The Guardian. British and Spanish newspapers are reporting that an official in Catalonia who favors separating the region from Spain may have had his smartphone compromised with industrial-grade spyware typically used only by law enforcement and counterterrorism agencies. The President of the Parliament of Catalonia Roger Torrent claims his phone was hacked for domestic political purposes, a claim other Catalan leaders have echoed. A spokesperson for the Spanish government said “[t]he government has no evidence that the speaker of the Catalan parliament has been the victim of a hack or theft involving his mobile.” However, the University of Toronto’s CitizenLab, the entity that researched and claimed that Israeli firm NSO Group’s spyware was deployed via WhatsApp to spy on a range of journalists, officials, and dissidents, often by their own governments, confirmed that Torrent’s phone was compromised.
  • “While America Looks Away, Autocrats Crack Down on Digital News Sites” – The New York Times. The Trump Administration’s combative relationship with the media in the United States may be encouraging other nations to crack down on digital media outlets trying to hold those governments to account.
  • “How Facebook Handles Climate Disinformation” – The New York Times. Even though the social media giant has moved aggressively to take down false and inaccurate COVID-19 posts, climate disinformation lives on the platform largely unmolested for a couple of reasons. First, Facebook marks these sorts of posts as opinion and takes the approach that opinions should be judged under an absolutist free speech regime. Moreover, Facebook asserts posts of this sort do not pose any imminent harm and therefore do not need to be taken down. Despite having teams of fact checkers who could vet such demonstrably untrue information, Facebook chooses not to deploy them here, most likely because material that elicits strong reactions from users drives engagement that, in turn, drives advertising dollars.
  • “Here’s how President Trump could go after TikTok” – The Washington Post. This piece lays out two means the Trump Administration could employ to press ByteDance in the immediate future: use of the May 2019 Executive Order “Securing the Information and Communications Technology and Services Supply Chain” or the Committee on Foreign Investment in the United States process examining ByteDance’s acquisition of the app Musical.ly that became TikTok. Left unmentioned in this article is the possibility of the Federal Trade Commission (FTC) examining compliance with its 2019 settlement with ByteDance over violations of the “Children’s Online Privacy Protection Act” (COPPA).
  • “You’re Doomscrolling Again. Here’s How to Snap Out of It.” – The New York Times. If you find yourself endlessly looking through social media feeds, this piece explains why and how you might stop doing so.
  • “UK selling spyware and wiretaps to 17 repressive regimes including Saudi Arabia and China” – The Independent. There are allegations that the British government has ignored its own regulations on selling equipment and systems that can be used for surveillance and spying to other governments with spotty human rights records. Specifically, the United Kingdom (UK) has sold £75m worth of such technology to countries that non-governmental organizations (NGOs) rate as “not free,” including nations such as the People’s Republic of China (PRC), the Kingdom of Saudi Arabia, and Bahrain. Not surprisingly, NGOs and the minority Labour party are calling for an investigation and changes.
  • “Google sued for allegedly tracking users in apps even after opting out” – CNET. Boies Schiller Flexner filed suit, in what will undoubtedly seek to become a class action, over Google’s allegedly continuing to track users even when they turned off tracking features. This follows a suit filed by the same firm against Google in June, claiming its browser Chrome still tracks people when they switch to incognito mode.
  • “Secret Trump order gives CIA more powers to launch cyberattacks” – Yahoo! News. It turns out that in addition to signing National Security Presidential Memorandum (NSPM) 13, which revamped and eased offensive cyber operations for the Department of Defense, President Donald Trump signed a presidential finding that has allowed the Central Intelligence Agency (CIA) to launch its own offensive cyber attacks, mainly at Russia and Iran, according to unnamed former United States (US) officials cited in this blockbuster story. Now, the decision to commence an attack is not vetted by the National Security Council; rather, the CIA makes the decision itself. Consequently, there have been a number of attacks on US adversaries that until now have not been associated with the US. And the CIA is apparently not informing the National Security Agency or Cyber Command of its operations, raising the risk of US cyber forces working at cross purposes or against one another in cyberspace. Moreover, a recently released report blamed the lax security environment at the CIA for a massive exfiltration of hacking tools released by Wikileaks.
  • “Facebook’s plan for privacy laws? ‘Co-creating’ them with Congress” – Protocol. In concert with the release of a new white paper, Facebook Deputy Chief Privacy Officer Rob Sherman sat for an interview in which he pledged the company’s willingness to work with Congress to co-develop a national privacy law. However, he would not comment on any of the many privacy bills released thus far or the policy contours of a bill Facebook would favor, except for advocating an enhanced notice and consent regime under which people would be better informed about how their data is being used. Sherman also shrugged off suggestions that Facebook may not be welcome given its record of privacy violations. Finally, it bears mention that similar efforts by other companies at the state level have yet to succeed. For example, Microsoft’s efforts in Washington state have not borne fruit in the passage of a privacy law.
  • “Deepfake used to attack activist couple shows new disinformation frontier” – Reuters. We are at the beginning of a new age of disinformation in which fake photographs and video will be used to wage campaigns against nations, causes, and people. An activist and his wife were accused of being terrorist sympathizers by a purported university student who turned out to be an elaborate ruse created by someone or some group looking to defame the couple. Small errors gave away the ruse this time, but advances in technology are likely to make detection all the harder.
  • “Biden, billionaires and corporate accounts targeted in Twitter hack” – The Washington Post. Policymakers and security experts were alarmed when the accounts of major figures like Bill Gates and Barack Obama were hacked yesterday by a group running a bitcoin scam. They argue Twitter was lucky this time, and a more ideologically motivated enemy may seek to cause havoc, say, in the United States’ coming election. A number of experts are claiming the penetration of the platform must have involved internal controls for so many high-profile accounts to be taken over at the same time.
  • “TikTok Enlists Army of Lobbyists as Suspicions Over China Ties Grow” – The New York Times. ByteDance’s payments for lobbying services in Washington doubled between the last quarter of 2019 and the first quarter of 2020, as the company has retained more than 35 lobbyists to push back against the Trump Administration’s rhetoric and policy changes. The company is fighting against a floated proposal to ban the TikTok app on national security grounds, which would cut the company off from another of its top markets after India banned it and scores of other apps from the People’s Republic of China. Even if the Administration does not bar use of the app in the United States, the company is facing legislation that would ban its use on federal networks and devices, which will be acted upon next week by a Senate committee. Moreover, ByteDance’s acquisition of the app that became TikTok is facing a retrospective review by an inter-agency committee for national security considerations that could result in an unwinding of the deal. Finally, the Federal Trade Commission (FTC) has been urged to review ByteDance’s compliance with a 2019 settlement resolving allegations that the company violated regulations protecting the privacy of children, which could result in multi-billion-dollar liability if wrongdoing is found.
  • “Why Google and Facebook Are Racing to Invest in India” – Foreign Policy. With New Delhi banning 59 apps and platforms from the People’s Republic of China (PRC), two American firms have invested in an Indian giant with an eye toward the nearly 500 million Indians not yet online. Reliance Industries’ Jio Platforms has sold stakes to Google and Facebook worth $4.5 billion and $5.7 billion, respectively, giving them prized positions as the company looks to expand into 5G and other online ventures. This will undoubtedly give the United States’ online giants a leg up in vying with competitors in the world’s second most populous nation.
  • “‘Outright Lies’: Voting Misinformation Flourishes on Facebook” – ProPublica. In this piece published with First Draft, “a global nonprofit that researches misinformation,” an analysis of the most popular claims made about mail voting shows that many of them are inaccurate or false, thus violating the platform’s terms of service; yet Facebook had done nothing to remove them or mark them as inaccurate until this article was being written.
  • “Inside America’s Secretive $2 Billion Research Hub” – Forbes. Using contract information obtained through Freedom of Information requests and interviews, this article shines light on the little-known nonprofit MITRE Corporation, which has been helping the United States government address numerous technological problems since the late 1950s. The article uncovers some of its latest, federally funded projects that are raising eyebrows among privacy advocates: technology to lift people’s fingerprints from social media pictures, technology to scan and copy Internet of Things (IoT) devices from a distance, a scanner to read a person’s DNA, and others.
  • “The FBI Is Secretly Using A $2 Billion Travel Company As A Global Surveillance Tool” – Forbes. In his second blockbuster article in a week, Forbes reporter Thomas Brewster exposes how the United States (US) government is using questionable court orders to gather travel information from the three companies that essentially provide airlines, hotels, and other travel entities with back-end functions for reservations and bookings. The three companies, one of which, Sabre, is a US multinational, have masses of information on you if you have ever traveled, and US law enforcement agencies, namely the Federal Bureau of Investigation, are using a 1789 statute to obtain orders all three companies must obey to track suspects. Allegedly, this capability has only been used to track terror suspects but will now reportedly be used for COVID-19 tracking.
  • “With Trump CIA directive, the cyber offense pendulum swings too far” – Yahoo! News. Former United States (US) National Coordinator for Security, Infrastructure Protection, and Counter-terrorism Richard Clarke argues against the Central Intelligence Agency (CIA) having carte blanche in conducting cyber operations without the review or input of other federal agencies. He suggests that the CIA in particular, and agencies in general, tend to push their authority to the extreme, which in this case could lead to incidents and lasting precedents in cyberspace that may haunt the US. Clarke also intimated that it may have been the CIA and not Israel that launched cyber attacks on infrastructure facilities in Tehran this month and last.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Congressional Cybersecurity Commission Releases Annex To Final Report

A Congressional cyber panel is adding four recommendations to its comprehensive March report.  


On 2 June, the Cyberspace Solarium Commission (CSC) released an annex to its final report. The CSC was created by the National Defense Authorization Act for Fiscal Year 2019 (P.L. 115-232) to “develop a consensus on a strategic approach to defending the United States in cyberspace against cyber attacks of significant consequences.” In mid-March, the CSC released its final report and made a range of recommendations, some of which were paired with legislative language the CSC has still not made available. However, Members of Congress who served on the CSC are working with the Armed Services Committees to get some of this language added to the FY 2021 National Defense Authorization Act (NDAA). See this issue of the Technology Policy Update for more detail on the CSC’s final report.

Per its grant of statutory authority, the CSC is set to terminate 120 days after the release of its final report, which will be next month. Nonetheless, the CSC has been holding a series of webinars to explain various components of the final report, and the Commission began to consider cybersecurity through the lens of the current pandemic for parallels and practical effects. Consequently, the CSC added four new recommendations and repeated its call that the recommendations in its final report it views as related to the pandemic receive attention and ideally action by Congress and the Executive Branch.

The CSC again called for the types of resources and reforms most policymakers have either not shown an appetite for or believe are a few bridges too far. Even though the CSC stated its intention to be a “9/11 Commission without the 9/11 event,” it is unlikely such sweeping policy changes will be made in the absence of a crisis or event that fundamentally changes the status quo. Nevertheless, the CSC’s new recommendations are targeted and modest; one calls for funneling more funds through an existing grant program to bolster private sector and nonprofit efforts, and another calls for a government agency to exercise previously granted authority. What’s more, the CSC could add the new recommendations to those shared in the form of legislative language with the Armed Services Committees in the hopes they are included in this year’s NDAA. Given that CSC co-chairs Senator Angus King (I-ME) and Representative Mike Gallagher (R-WI) serve on their chambers’ Armed Services Committees, as do the other two Members of Congress on the CSC, Senator Ben Sasse (R-NE) and Representative James Langevin (D-RI), the chances of some of the recommendations making it into statute are higher than they might be otherwise.

In its “White Paper #1: Cybersecurity Lessons from the Pandemic,” the CSC asserted:

The COVID-19 pandemic illustrates the challenge of ensuring resilience and continuity in a connected world. Many of the effects of this new breed of crisis can be significantly ameliorated through advance preparations that yield resilience, coherence, and focus as it spreads rapidly through the entire system, stressing everything from emergency services and supply chains to basic human needs and mental health. The pandemic produces cascading effects and high levels of uncertainty. It has undermined normal policymaking processes and, in the absence of the requisite preparedness, has forced decision makers to craft hasty and ad hoc emergency responses. Unless a new approach is devised, crises like COVID-19 will continue to challenge the modern American way of life each time they emerge. This annex collects observations from the pandemic as they relate to the security of cyberspace, in terms of both the cybersecurity challenges it creates and what it can teach the United States about how to prepare for a major cyber disruption. These insights and the accompanying recommendations, some of which are new and some of which appear in the original March 2020 report, are now more urgent than ever.

The CSC conceded that “[t]he lessons the country is learning from the ongoing pandemic are not perfectly analogous to a significant cyberattack, but they offer many illuminating parallels.”

  • First, both the pandemic and a significant cyberattack can be global in nature, requiring that nations simultaneously look inward to manage a crisis and work across borders to contain its spread.
  • Second, both the COVID-19 pandemic and a significant cyberattack require a whole-of-nation response effort and are likely to challenge existing incident management doctrine and coordination mechanisms.
  • Third, when no immediate therapies or vaccines are available, testing and treatments emerge slowly; such circumstances place a premium on building systems that are agile, are resilient, and enable coordination across the government and private sector, much as is necessary in the cyber realm.
  • Finally, and perhaps most importantly, prevention is far cheaper and preestablished relationships far more effective than a strategy based solely on detection and response.

The CSC continued:

The COVID-19 pandemic is a call to action to ensure that the United States is better prepared to withstand shocks and crises of all varieties, especially those like cyber events that we can reasonably predict will occur, even if we do not know when. We, as a nation, must internalize the lessons learned from this emergency and move forward to strengthen U.S. national preparedness.  This means building structures in government now to ensure strategic leadership and coordination through a cyber crisis. It means driving down the vulnerability of the nation’s networks and technologies. And finally, it means investing in rigorously building greater resiliency in the government, in critical infrastructure, and in our citizenry. In the past several years, experts have sounded the alarm, ranking cyberattacks as one of the most likely causes of a crisis. As the COVID-19 crisis has unfolded, the United States has experienced a wake-up call, prompting a national conversation about disaster prevention, crisis preparedness, and incident response. While COVID-19 is the root cause of today’s crisis, a significant cyberattack could be the cause of the next. If that proves to be the case, history will surely note that the time to prepare was now.

The CSC offered these four new recommendations:

  • Pass an Internet of Things Security Law: With a significant portion of the workforce working from home during the COVID-19 disruption, household internet of things (IoT) devices, particularly household routers, have become vulnerable but important pieces of our national cyber ecosystem and our adversary’s attack surface. To ensure that the manufacturers of IoT devices build basic security measures into the products they sell, Congress should pass an IoT security law. The law should focus on known challenges, like insecurity in Wi-Fi routers, and mandate that these devices have reasonable security measures, such as those outlined under the National Institute of Standards and Technology’s “Recommendations for IoT Device Manufacturers.” But it should be only modestly prescriptive, relying more heavily on outcome-based standards, because security standards change with technology over time. Nonetheless, the law should stress enduring standards both for authentication, such as requiring unique default passwords that a user must change to their own authentication mechanism upon first use, and for patching, such as ensuring that a device is capable of receiving a remote update. Congress should consider explicitly tasking the Federal Trade Commission with enforcement of the law on the basis of existing authorities under Section 5 of the Federal Trade Commission Act.
    • In a footnote, the CSC asserted “[t]he proposed Internet of Things (IoT) Cybersecurity Improvement Act of 2019 provides a viable model for a federal law that mandates that connected devices procured by the federal government have reasonable security measures in place, but should be expanded to cover all devices sold or offered for sale in the United States.”
    • The initial draft of the “Internet of Things Cybersecurity Improvement Act of 2019” (H.R. 1668/S. 734) was a revised, unified version of two similar bills from the 115th Congress of the same title: the “Internet of Things (IoT) Cybersecurity Improvement Act of 2017” (S. 1691) and the “Internet of Things (IoT) Federal Cybersecurity Improvement Act of 2018” (H.R. 7283). However, during the process of consideration in both chambers, differences emerged that have yet to be reconciled. Still, it is possible that a final version of this bill gets folded into the FY 2021 NDAA or is passed as standalone legislation in the waning days of this Congress.
    • However, the FTC already uses its Section 5 authorities to bring actions against IoT manufacturers. For example, last month, the agency announced a settlement with Tapplock regarding “allegations that it deceived consumers by falsely claiming that its Internet-connected smart locks were designed to be ‘unbreakable’ and that it took reasonable steps to secure the data it collected from users.”
  • Support Nonprofits that Assist Law Enforcement’s Cybercrime and Victim Support Efforts: Cyber-specific nonprofit organizations regularly collaborate with law enforcement in writing cybercrime reports, carrying out enforcement operations, and providing victim support services. As the COVID-19 pandemic has proven, trusted nonprofit organizations serve as critical law enforcement partners that can quickly mobilize to help identify and dismantle major online schemes. Such nonprofits have the expertise and flexibility to help and reinforce law enforcement efforts to disrupt cybercrime and assist victims. However, they often face financial challenges. Therefore, the Commission recommends that Congress provide grants through the Department of Justice’s Office of Justice Programs to help fund these essential efforts.
    • The portion of the Department of Justice’s Office of Justice Programs that makes grants was provided $1.892 billion in FY 2020, with large chunks earmarked for state and local law enforcement agencies through programs like the Edward Byrne Memorial Justice Assistance Grant program. Therefore, additional funding would likely need to be provided if there are to be additional eligible recipients and additional purposes.
  • Establish the Social Media Data and Threat Analysis Center: Because major social media platforms are owned by private companies, developing a robust public-private partnership is essential to effectively combat disinformation. To this end, the Commission supports the provision in the FY2020 National Defense Authorization Act that authorizes the Office of the Director of National Intelligence to establish and fund a Social Media Data and Threat Analysis Center (DTAC), which would take the form of an independent, nonprofit organization intended to encourage public-private cooperation to detect and counter foreign influence operations against the United States. The center would serve as a public-private facilitator, developing information-sharing procedures and establishing—jointly with social media—the threat indicators that the center will be able to access and analyze. In addition, the DTAC would be tasked with informing the public about the criteria and standards for analyzing, investigating, and determining threats from malign influence operations. Finally, in order to strengthen a collective understanding of the threats, the center would host a searchable archive of aggregated information related to foreign influence and disinformation operations.
    • This is, obviously, not really a new recommendation, but rather a call for already granted authority to be used. The Director of National Intelligence (DNI) was provided discretionary authority to establish the DTAC in P.L. 116-92 and has not chosen to do so yet. There are a number of existing entities that may qualify, such as the Atlantic Council’s Digital Forensic Research Lab or the Alliance for Securing Democracy. However, the issue may be resources, in that the DNI was not provided any additional funding to stand up the DTAC.
  • Increase Nongovernmental Capacity to Identify and Counter Foreign Disinformation and Influence Campaigns: Congress should fund the Department of Justice to provide grants, in consultation with the Department of Homeland Security and the National Science Foundation, to nonprofit centers seeking to identify, expose, and explain malign foreign influence campaigns to the American public while putting those campaigns in context to avoid amplifying them. Such malign foreign influence campaigns can include covert foreign state and non-state propaganda, disinformation, or other inauthentic activity across online platforms, social networks, or other communities. These centers should analyze and monitor foreign influence operations, identify trends, put those trends into context, and create a robust, credible source of information for the American public. To ensure success, these centers should be well-resourced and coordinated with ongoing government efforts and international partners’ efforts.
    • It is not clear whether this program would be conducted through an existing DOJ program or a new one would be created. As with the DOJ’s Office of Justice Programs, funding may be an issue, and while the Armed Services Committees may be able to fold this into the FY 2021 NDAA (notwithstanding jurisdictional issues, considering the DOJ is part of the Judiciary Committees’ purviews), the Appropriations Committees would ultimately decide whether this would be funded.


Privacy Bill A Week: United States Consumer Data Privacy Act of 2019

The majority staff of the Senate Commerce Committee circulated the “United States Consumer Data Privacy Act of 2019” (CDPA), a draft data privacy bill, days after Ranking Member Maria Cantwell (D-WA) and her cosponsors released the “Consumer Online Privacy Rights Act” (COPRA) (S.2968) (see here for more analysis). Of course, these competing proposals came before the Senate Commerce, Science, and Transportation Committee’s hearing on legislative proposals on privacy.

In the main, this bill shares the same framework as COPRA, with some key differences, including:

  • COPRA expands the FTC’s jurisdiction in policing privacy harms whereas CDPA would not
  • COPRA places a duty of loyalty on covered entities to people whose covered data they process or transfer; CDPA does not have any such duty
  • CDPA does not allow people to sue if covered entities violate the new federal privacy and security regime; COPRA would allow such suits to move forward
  • CDPA preempts state privacy and data security laws; COPRA would establish a federal floor that states like California would be able to legislate on top of
  • CDPA would take effect two years after enactment, and COPRA would take effect six months after enactment.
  • The bar against a person waiving her privacy rights under COPRA is much broader than CDPA
  • COPRA would empower the FTC to punish discriminatory data processing and transfers; CDPA would require the FTC to refer these offenses to the appropriate federal and state agencies
  • CDPA revives a concept from the Obama Administration’s 2015 data privacy bill by allowing organizations and entities to create standards or codes of conduct and allowing those entities in compliance with these standards or codes to be deemed in compliance with CDPA subject to FTC oversight; COPRA does not have any such provision
  • COPRA would require covered entities to conduct algorithmic decision-making impact assessments to determine if these processes are fair, accurate, and unbiased; no such requirement is found in CDPA

However, as noted, the basic framework both bills create in establishing a federal privacy and data security regime is similar. Broadly, people would receive new rights, largely premised on being accurately informed of how their personal data would be used by covered entities. Additionally, people would need to affirmatively consent before certain kinds of data processing and transfers could occur.

The bills have similar definitions of what data is covered, what constitutes sensitive covered data, and the entities covered by the bill. Among the key similarities are:

  • Both bills would require affirmative express consent for a range of data processing and transferring with COPRA requiring this sort of consent under more circumstances
  • Like COPRA, CDPA marries data security requirements to privacy requirements; however, both COPRA and CDPA would deem entities already in compliance with a number of existing federal laws (e.g. Gramm-Leach-Bliley and HIPAA) to be in compliance with their data security requirements, and yet language in both bills suggests that to the extent that these federal standards fall short of the new data security standards, these entities would need to meet additional requirements
  • Both bills would allow people to request a copy of their covered data being held by a covered entity, delete or de-identify covered data, to correct or complete such data, and to port their data to another covered entity; however, COPRA would provide additional rights such as the aforementioned duty of loyalty and a right to opt-out of transfers
  • COPRA and CDPA would provide additional authority for the FTC to police data security, with COPRA giving the agency broad authority to promulgate regulations and providing more descriptive guidance on how to do so, and CDPA providing very targeted rulemaking authority that would likely continue the current case-by-case enforcement regime at the FTC
  • The FTC could seek civil fines in the first instance of $42,530 per violation along with the current range of equitable and injunctive relief it can seek under both COPRA and CDPA
  • Both bills would allow state attorneys general to seek the same relief in the event of alleged violations

Separately from the release of this draft, Chair Roger Wicker (R-MS) said he was willing to allow a limited right for people to sue under a federal privacy bill, but only to obtain injunctive relief and not monetary damages. This is a significant concession, for Republicans, including Wicker, have long characterized a private right of action as being out of the question. Of course, Wicker does not speak for other Republicans on the committee nor those in the Senate, so it is not exactly clear how much support he has for such a proposal. In the same vein, Wicker remarked to the media that the other main sticking points with Cantwell are preemption and a duty of loyalty. However, he may have been making this statement with some optimism, for there are other, significant differences between these two bills, suggesting more negotiating is in order.

Also, it has been reported that Senators Richard Blumenthal (D-CT) and Jerry Moran (R-KS) are still working on their privacy bill but are not yet ready to release bill text. It is possible the release of these two bills will spur them to complete their draft so they can lay down their own marker.

However, turning to the substance of the bill, let’s start, as always, with definitions. Covered entities are “any person who operates in or affects interstate or foreign commerce,” a very broad definition that would sweep in almost every entity in the U.S. and some overseas.

Covered data is defined as “information that identifies or is linked or reasonably linkable to an individual or a device that is linked or reasonably linkable to an individual.” The bill further provides “information held by a covered entity is linked or reasonably linkable to an individual if, as a practical matter, it can be used on its own or in combination with other information held by, or readily accessible to, the covered entity to identify the individual or a device associated with that individual.” However, covered data does not include: aggregated data; de-identified data; employee data; or publicly available information. Aggregated data is a new term among the privacy bills we’ve looked at thus far and is “information that relates to a group or category of individuals or devices that does not identify and is not linked or reasonably linkable to any individual.”

“Sensitive covered data” “means any of the following forms of covered data of an individual” including but not limited to:

  • A unique, government-issued identifier, such as a Social Security number, passport number, or driver’s license number.
  • Any covered data that describes or reveals the diagnosis or treatment of past, present, or future physical health, mental health, or disability of an individual.
  • A financial account number, debit card number, credit card number, or any required security or access code, password, or credentials allowing access to any such account.
  • Covered data that is biometric information.
  • Precise geolocation information capable of determining with reasonable specificity the past or present actual physical location of an individual or device at a specific point in time.
  • The contents of an individual’s private communications or the identity of the parties subject to such communications, unless the covered entity is the intended recipient of the communication;
  • Covered data revealing an individual’s racial or ethnic origin, or religion in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information.
  • Covered data revealing the sexual orientation or sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information.
  • Covered data about the online activities of an individual that relate to a category of covered data described in another subparagraph of this paragraph.
  • Covered data that is calendar information, address book information, phone or text logs, photos, or videos maintained on an individual’s device.
  • Any other category of covered data designated by the Commission pursuant to a rulemaking under [Administrative Procedure Act] if the Commission determines that the processing or transfer of covered data in such category in a manner that is inconsistent with the reasonable expectations of an individual would be likely to be highly offensive to a reasonable individual.

This is a fairly comprehensive list of covered data that would be considered sensitive.

Additionally, the FTC would be allowed to add other types of data if the agency goes through a rulemaking, providing flexibility and allowing the agency to address any future, unforeseen uses of personal data.

De-identified data is “information held by a covered entity that…does not identify, and is not linked or reasonably linkable to an individual or device” only if the covered entity publicly commits not to re-identify the person or device. The covered entity would also need to put in place technical and organizational procedures to prevent any possible linkage. Additionally, covered entities may not disclose de-identified data to any other entities without a contract or legal instrument barring the re-identification of the data.

CDPA defines affirmative express consent as “upon being presented with a clear and conspicuous description of an act or practice for which consent is sought, an affirmative act by the individual clearly communicating the individual’s authorization for the act or practice.”

Covered entities will not be able to “deny goods or services to an individual because the individual exercised any of the rights established under” the CDPA. Additionally, for each service or product, a covered entity must publish a privacy policy that is “clear and conspicuous” to both the public at large and to a person before or at the point at which collection of covered data begins. The CDPA spells out the elements a privacy policy must contain, among them the categories of covered data collected, the processing purposes for each category, the categories of third parties to whom the data are transferred and the purposes of such transfers, and a detailed description of data retention and data security practices. Any material change to a covered entity’s privacy policy would require obtaining affirmative express consent anew from people before any processing or transferring of covered data may occur.

The CDPA requires covered entities to fulfill requests to access, correct, complete, delete, or port covered data within 45 days of receiving a verified request. However, a person may not request access to their covered data more than two times in a 12-month period, and for any additional requests, covered entities may charge a fee. Of course, if a covered entity cannot verify the identity of the requester, it need not fulfill the request. A covered entity may also deny a request if it would require the maintenance of information solely to fulfill the request, if it is impossible or demonstrably impracticable to comply, or if it would necessitate the re-identification of de-identified data. The CDPA stipulates that none of these rights or obligations may be waived in an agreement between a covered entity and a person. The FTC must promulgate regulations under the APA to implement this section.
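The request-handling rules above can be sketched as simple eligibility checks. This is purely a hypothetical illustration of the statutory logic; the bill itself contains no such mechanics, and all function and constant names are my own:

```python
from datetime import date, timedelta

ACCESS_REQUEST_LIMIT = 2      # free access requests per 12-month period under the CDPA
RESPONSE_DEADLINE_DAYS = 45   # days to fulfill a verified request

def may_charge_fee(prior_request_dates, today):
    """True if the covered entity may charge for a new access request,
    i.e. the individual has already made two requests in the past 12 months."""
    window_start = today - timedelta(days=365)
    recent = [d for d in prior_request_dates if d >= window_start]
    return len(recent) >= ACCESS_REQUEST_LIMIT

def response_due_date(received_verified):
    """Deadline for fulfilling a verified access/correction/deletion/portability request."""
    return received_verified + timedelta(days=RESPONSE_DEADLINE_DAYS)
```

For example, a third request made within a year of two earlier ones could carry a fee, while a verified request received on 1 January would be due by mid-February.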

Regarding the right to access one’s covered data, a covered entity must provide either the covered data being processed or “an accurate representation” of it, along with any purposes for which such covered data is transferred and a list of any third parties or service providers that have received covered data. A person has the right to request that a covered entity “correct inaccuracies or incomplete information with respect to the covered data of the individual that is processed by the covered entity; and notify any service provider or third party to which the covered entity transferred such covered data of the corrected information.” A person may also ask that a covered entity delete or de-identify any covered data the covered entity is processing and alert any third parties or service providers to whom the covered entity has transferred the person’s covered data. Finally, subject to technical feasibility, covered entities must generally provide covered data “in a portable, structured, standards-based, interoperable, and machine-readable format that is not subject to licensing restrictions.”

In regard to sensitive covered data, a covered entity must obtain affirmative express consent before it can process this subset of covered data or transfer it to a third party. This section also details how covered entities are to obtain affirmative express consent. People must be provided with notice that

  • includes a description of the processing purpose for which consent is sought;
  • clearly identifies and distinguishes between a processing purpose that is necessary to fulfill a request made by the individual and a processing purpose that is not necessary to fulfill a request made by the individual;
  • includes a prominent heading that would enable a reasonable individual to easily identify the processing purpose for which consent is sought; and
  • clearly explains the individual’s right to provide or withhold consent.

Covered entities will not be able to infer consent from a person’s inaction or continued use of the covered entity’s services or products. Moreover, a person must be presented “with a clear and conspicuous means to withdraw affirmative express consent.”

The language on consent related to the sensitive covered data of minors is a bit confusing. Parents will be able to consent on behalf of their minor children in the same manner as they may consent for themselves (i.e. affirmative express consent). And yet, covered entities may not transfer the sensitive covered data of those 16 and younger to a third party if they have actual knowledge of the person’s age, unless the individual or her parent consents.

Generally, covered entities must minimize their collection, processing, and transfer of covered data to what is necessary. Specifically, covered entities “shall not collect, process, or transfer covered data beyond

  • what is reasonably necessary, proportionate, and limited to provide or improve a product, service, or a communication about a product or service, including what is reasonably necessary, proportionate, and limited to provide a product or service specifically requested by an individual or reasonably anticipated within the context of the covered entity’s ongoing relationship with an individual;
  • what is reasonably necessary, proportionate, or limited to otherwise process or transfer covered data in a manner that is described in the privacy policy that the covered entity is required to publish…or
  • what is expressly permitted by this Act or any other applicable Federal law.”

There are exceptions to the rights granted to people, just as in all the other data privacy bills, which we will turn to momentarily. However, this section requires a bit of elaboration. The FTC will undoubtedly need to determine the broad strokes of what is “necessary, proportionate, and limited” in the different contexts in which that clause is used. And yet, the FTC is not broadly granted rulemaking authority under the APA to implement the CDPA, so the agency would probably need to hash out these terms through the “common law” it is currently using to forge the federal data security and privacy regime. This may be the case even though the agency is required to issue “guidelines recommending best practices for covered entities to minimize the collection, processing, and transfer of covered data in accordance with this section” within one year of enactment. Such guidelines will, of course, inform covered entities of the agency’s thinking, but the “necessary, proportionate, and limited” formulation may present a number of close cases to be adjudicated by courts and/or the FTC.

CDPA lays out the rights, responsibilities, and roles of service providers and third parties under the new federal privacy regime. However, as always, let’s look at who would qualify as either. First, a service provider would be, “with respect to a set of covered data, a covered entity that processes or transfers such covered data for the purpose of performing 1 or more services or functions on behalf of, and at the direction of, another covered entity” of which it is not a part. Third parties are those entities that are not service providers, that receive covered data, and, again, are not owned by or affiliated with the covered entity. There are also definitions of “service provider data” and “third party data.” The former are those data that service providers are given by covered entities, or those covered data a service provider collects on behalf of a covered entity and then processes or transfers per the covered entity’s instructions or direction. These could be firms with dedicated services for processing covered data, possibly even data brokers. Third party data are those covered data, other than service provider data, that are received from a covered entity. For example, BestBuy transferring covered data with the proper consent to Walmart would make the latter a third party and those covered data third party data.

The Act stipulates that service providers may process “service provider data” only at the direction of the covered entity that provided the data and may not undertake any additional processing sua sponte. Likewise, a service provider may not transfer service provider data to third parties unless the covered entity obtained affirmative express consent in the first instance. What’s more, service providers must delete and de-identify these data as soon as practicable after the agreed-upon processing is complete.

Service providers do not need to respond to a person’s request to access, correct, complete, delete, or port covered data, but they must help covered entities fulfill these requests to the degree possible, and, upon being notified, they must comply with a request a person has made of a covered entity. However, service providers need not obtain affirmative express consent from consumers to transfer their sensitive covered data to third parties, nor must they minimize covered data. So, it would appear that once a person provides a covered entity the necessary consent to process or transfer their sensitive covered data, this subset of covered data may be transferred onward or processed by a third party. Additionally, it appears covered entities could transfer sensitive covered data to service providers without a person’s affirmative express consent, and service providers would then appear free to process such data and transfer it onward. However, the definition of “process” may weigh against such a reading, for it covers retention and handling of covered data, so perhaps this scenario is contrary to the constraints placed on covered entities.

Third parties “shall not process third party data for a processing purpose inconsistent with the reasonable expectation of the individual to whom such data relates.” Additionally, third parties “may reasonably rely on representations made by the covered entity that transferred third party data regarding the reasonable expectations of individuals to whom such data relates, provided that the third party conducts reasonable due diligence on the representations of the covered entity and finds those representations to be credible.” And, like service providers, third parties do not need to respond to a person’s request to access, correct, complete, delete, or port covered data, nor to minimize data retention.

Nonetheless, covered entities must exercise reasonable due diligence in selecting a service provider or transferring covered data to a third party in order to ensure compliance with the CDPA.

A subset of covered entities would need to meet other requirements. “Large data holders” “shall conduct a privacy impact assessment that weighs the benefits of the covered entity’s covered data collection, processing, and transfer practices against the potential adverse consequences to individual privacy of such practices.” Large data holders are those covered entities that “processed or transferred the covered data of more than 5,000,000 individuals or devices that are linked or reasonably linkable to such individuals” or “processed or transferred the sensitive covered data of more than 100,000 individuals or devices that are linked or reasonably linkable to such individuals (excluding any instance where the covered entity processes the log-in information of an individual or device to allow the individual or device to log in to an account administered by the covered entity).” Covered entities would need to determine annually whether they have passed either threshold and become a large data holder that must conduct a privacy impact assessment. Thereafter, these assessments would need to be conducted every two years and be approved by the entity’s privacy officer.
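The two large-data-holder thresholds reduce to a disjunctive test that would be run annually. As a hypothetical sketch (the names are mine, and the exclusion of log-in credential processing is assumed to happen before the sensitive count is tallied):

```python
LARGE_COVERED_THRESHOLD = 5_000_000    # individuals/devices, covered data
LARGE_SENSITIVE_THRESHOLD = 100_000    # individuals/devices, sensitive covered data

def is_large_data_holder(covered_count, sensitive_count):
    """CDPA 'large data holder' test: exceeding EITHER threshold triggers
    the privacy impact assessment obligation."""
    return (covered_count > LARGE_COVERED_THRESHOLD
            or sensitive_count > LARGE_SENSITIVE_THRESHOLD)
```

Note the thresholds are exclusive: an entity at exactly 5,000,000 covered or 100,000 sensitive records would not qualify, since the bill says “more than.”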

Like the other privacy bills, there are circumstances under which covered entities may disregard some of the responsibilities to people. In terms of exceptions to the general rights laid out for people, “a covered entity may collect, process or transfer covered data for any of the following purposes, provided that the collection, processing, or transfer is reasonably necessary, proportionate, and limited to such purpose:

  • To complete a transaction or fulfilling an order or service specifically requested by an individual, including associated routine administrative activities such as billing, shipping, and accounting.
  • To perform internal system maintenance and network management.
  • Subject to [language governing biometrics], to detect or respond to a security incident, provide a secure environment, or maintain the safety of a product or service.
  • Subject to [language governing biometrics], to protect against malicious, deceptive, fraudulent, or illegal activity.
  • To comply with a legal obligation or the establishment, exercise, or defense of legal claims.
  • To prevent an individual from suffering serious harm where the covered entity believes in good faith that the individual is at risk of death or serious physical injury.
  • To effectuate a product recall pursuant to Federal or State law.
  • To conduct internal research to improve, repair, or develop products, services, or technology.
  • To engage in an act or practice that is fair use under copyright law.
  • To conduct a public or peer-reviewed scientific, historical, or statistical research that—
    • is in the public interest;
    • adheres to all applicable ethics and privacy laws; and
    • is approved, monitored, and governed by an institutional review board or other oversight entity that meets standards promulgated by the Commission pursuant to [the Administrative Procedure Act]

However, in availing themselves of these exceptions to many of the rights detailed in Title I of the bill, covered entities would not be allowed to breach the ban on denying goods or services because a person exercised their rights under the CDPA nor would they be able to disregard the rights of access, correction, completion, deletion, or portability. Similarly, the covered entity must still adhere to its privacy policy.

As noted earlier, covered entities may “not process or transfer covered data of an individual that is biometric information” “to detect or respond to a security incident, provide a secure environment, or maintain the safety of a product or service” or “to protect against malicious, deceptive, fraudulent, or illegal activity” unless these activities are “limited to real-time or short-term processing” and comply with to-be-promulgated FTC regulations. There is the further stipulation that “the covered entity does not transfer such information to a third party other than to comply with a legal obligation or to establish, exercise, or defend a legal claim.”

Small businesses would be provided a limited carve-out under the CDPA from heeding requests to access, correct, complete, delete, or port covered data and from the data minimization requirements binding on other covered entities. Exempted small businesses would be those whose gross annual revenues for each of the preceding three years were $25 million or less, whose processing of covered data did not exceed 100,000 people or devices, and whose revenue from transferring covered data was less than 50% of annual revenue.
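Unlike the large-data-holder test, the small-business carve-out is conjunctive: all three conditions must hold. A hypothetical sketch of that logic (function and parameter names are my own):

```python
def qualifies_as_small_business(annual_revenues, covered_data_individuals,
                                transfer_revenue_share):
    """CDPA small-business carve-out: ALL three conditions must hold.
    annual_revenues: gross revenue for each of the preceding three years;
    covered_data_individuals: people/devices whose covered data was processed;
    transfer_revenue_share: fraction of revenue from transferring covered data."""
    return (all(r <= 25_000_000 for r in annual_revenues)
            and covered_data_individuals <= 100_000
            and transfer_revenue_share < 0.5)
```

A firm that trips any one prong, say $30 million in revenue in a single year, would lose the exemption even if it processes little covered data.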

Senate Commerce Republican staff have apparently acceded to Democratic insistence that data security be made part of a privacy bill as the CDPA contains such language. The bill provides generally that “[a] covered entity shall establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of sensitive covered data.” These data security standards should be “appropriate to the size and complexity of the covered entity, the nature and scope of the covered entity’s collection or processing of sensitive covered data, the volume and nature of the sensitive covered data at issue, and the cost of available tools to improve security and reduce vulnerabilities.” These standards should be designed to

  • identify and assess anticipated human and technical vulnerabilities to sensitive covered data;
  • take preventative and corrective action to address anticipated and known vulnerabilities to sensitive covered data; and
  • delete sensitive covered data after it is no longer needed for the purpose for which it was collected unless such retention is necessary to comply with a law.

Theoretically, those covered entities processing and transferring sensitive covered data would need to implement more robust data security standards than covered entities just handling covered data.

The FTC may, but is not required to, promulgate regulations under the APA and must consult with the National Institute of Standards and Technology (NIST). However, the FTC must “issue guidance to covered entities on how to—

  • identify and assess vulnerabilities to sensitive covered data, including—
    • the potential for unauthorized access to sensitive covered data;
    • human and technical vulnerabilities in the covered entity’s collection or processing of sensitive covered data;
    • the management of access rights; and
    • the use of service providers to process sensitive covered data; and
  • take preventative and corrective action to address vulnerabilities to sensitive covered data.”

If the FTC chooses to skip regulations and instead issue guidance, covered entities might be wise to heed the FTC’s views in the latter document, but they would not be required to meet any articulated standards.

And yet, those covered entities in compliance with the “Financial Modernization Act of 1999” (P.L. 106-102) (aka Gramm-Leach-Bliley) and the “Health Insurance Portability and Accountability Act of 1996” (P.L. 104-191) (HIPAA), mainly financial services and healthcare entities respectively, would be deemed in compliance with the CDPA, but only with respect to “information security requirements.”

Covered entities must also designate privacy officers and data security officers that “shall be responsible for, at a minimum…coordinating the covered entity’s policies and practices regarding the processing of covered data; and…facilitating the covered entity’s compliance with this Act.” Furthermore, “[a] covered entity shall maintain internal controls and reporting structures to ensure that appropriate senior management officials of the covered entity are involved in assessing risks and making decisions that implicate compliance with this Act.” Those entities in compliance with a range of federal privacy regimes regarding “data collection, processing, or transfer activities” under those statutes would be deemed to be in compliance but only with respect to “the data collection, processing, or transfer activities governed by such laws.”

In terms of enforcing the CDPA, the FTC would be able to seek civil penalties in the first instance, and common carriers and non-profits would be added to the universe of entities the FTC can police. Like COPRA, this bill would establish a “Data Privacy and Security Victims Relief Fund” into which the FTC shall deposit “any civil penalty obtained against any covered entity in any judicial or administrative action the Commission commences to enforce this Act or a regulation promulgated under this Act.” The FTC may use these funds “to provide redress, payments or compensation, or other monetary relief to individuals affected by an act or practice for which civil penalties have been imposed under this Act.”

State attorneys general may also bring actions to seek a range of remedies, including to enjoin conduct in violation of the CDPA and to “obtain damages, civil penalties, restitution, or other compensation on behalf of the residents of the State.” If two or more attorneys general file suit against the same covered entity for the same conduct, the cases would be combined in federal court in the District of Columbia. Moreover, the FTC may intervene in an action brought by a state attorney general, and if the FTC brings an action first, state attorneys general may not bring actions until the FTC’s action concludes.

The CDPA uses a concept from the Obama Administration’s “Consumer Privacy Bill of Rights Act of 2015:” the creation of voluntary codes that private entities may adhere to after the FTC has signed off on them. Accordingly, the FTC “may approve certification programs developed by 1 or more covered entities or associations representing categories of covered entities to create standards or codes of conduct regarding compliance with 1 or more provisions in this Act.” Consequently, “[a] covered entity that complies with a certification program approved by the Commission shall be deemed to be in compliance with the provisions of this Act addressed by such program.” However, “[a] covered entity that has certified compliance with an approved certification program and is found not to be in compliance with such program by the Commission shall be considered to be in violation of the section 5 of the Federal Trade Commission Act…prohibition on unfair or deceptive acts or practices.”

The CDPA would preempt state laws on privacy but not any such laws or provisions regarding data breach notification. The CDPA would take effect two years after enactment, allowing covered entities, the FTC, and others time to prepare for the new privacy standards.

The FTC would receive limited responsibility to address discriminatory data processing or transferring. Notably, if the agency receives credible evidence of possible violations of federal laws barring discrimination (e.g. the 1964 Civil Rights Act), it would not itself investigate or bring an action. Rather, the FTC would transfer this information to federal or state regulators with explicit authority to regulate discrimination.

The FTC would need to use its current Section 6(b) authority to obtain information from entities to examine “the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws.” The FTC would send out demands for information and entities must answer upon pain of potential penalties. The agency would need to publish a report on its findings within three years and then publish guidance “to assist covered entities in avoiding discriminatory use of algorithms.”

Additionally, within six months of enactment of the CDPA, the National Institute of Standards and Technology (NIST) “shall develop and publish a definition of “digital content forgery” and accompanying explanatory materials” and no later than one year after NIST’s report, the FTC must “publish a report regarding the impact of digital content forgeries on individuals and competition.” The FTC must update the report at least every two years or more frequently if necessary.

The CDPA lifts a structure from the “California Consumer Privacy Act” (CCPA) (AB 375) in setting up a regime for data brokers to annually register with the FTC. The data broker would need to provide contact information and pay a $100 fee. Failure to do so could result in a fine of $50 per day and no more than $10,000 per year. The FTC would then publish the registration information on its website.
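The data broker registration penalty is a simple daily fine with an annual cap. As a hypothetical illustration of that arithmetic (the bill prescribes the amounts, not any formula like this):

```python
REGISTRATION_FEE = 100    # annual fee paid to the FTC upon registering
DAILY_FINE = 50           # fine per day of failing to register
ANNUAL_FINE_CAP = 10_000  # maximum fine per year

def registration_fine(days_unregistered):
    """Fine for a data broker that fails to register with the FTC:
    $50 per day, capped at $10,000 per year."""
    return min(days_unregistered * DAILY_FINE, ANNUAL_FINE_CAP)
```

The cap is reached after 200 days of non-registration, so a broker that ignores the requirement for most of a year faces the same exposure as one that ignores it entirely.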

Privacy Bill A Week: Consumer Online Privacy Rights Act

Yesterday, we posted the political backdrop for the introduction of the “Consumer Online Privacy Rights Act” (COPRA). Today, let’s turn to the substance of the bill.

Under COPRA, the entities covered by the new requirements form a broad class simply defined as those already subject to the FTC Act that “process[] or transfer[] covered data.” The bill carves out sub-classes of entities that might otherwise be covered, some of which may not fall into the definition of covered entity.

Service providers are defined to be covered entities that process or transfer covered data while performing a service on behalf of another covered entity. However, the definition is written to include only those activities undertaken at the behest of another covered entity and is explicit that the “term does not include a covered entity that processes or transfers the covered data outside of the direct relationship between the service provider and the covered entity.” Consequently, entities such as Verizon and Amazon would be deemed service providers only to the extent they are providing services like broadband internet and cloud services. Otherwise, they would be covered entities and subject to all the responsibilities the bill would place on them. Third parties are those that receive covered data from covered entities for processing or transfer and are not service providers, affiliates, subsidiaries, or otherwise controlled by the covered entity.

Additionally, small businesses would be carved out of much of the bill; these are defined as entities with $25 million or less in annual revenues for each of the preceding three years, that processed the covered data of fewer than 100,000 individuals, and that earn 50% or less of their gross revenue from processing covered data. Likewise, non-profits and other discrete classes of entities would be outside the confines of this bill (e.g. some of the activities in the privacy and data security spheres of telecommunications companies would still be regulated by the Federal Communications Commission).

“Covered data” is “information that identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data.” But this term excludes “de-identified data,” “employee data,” and “public records.” Turning to those terms, de-identified data are generally “information that cannot reasonably be used to infer information about, or otherwise be linked to, an individual, a household, or a device used by an individual or household.” However, before any such information may be deemed de-identified data, in addition to ensuring the information cannot be linked to a person, device, or household and that inferences cannot reasonably be drawn, the entity must put in place reasonable measures to block the re-identification of such information and publicly commit not to re-identify the information and to process or transfer it only in a de-identified state. Moreover, any entity seeking to de-identify data must also obligate any other entities who receive the information to meet all of the aforementioned requirements.

Employee data are the information employers collect, process, and transfer solely related to a person’s employment, application for employment, emergency contacts, and administration of benefits. Public records are “information that is lawfully made available from Federal, State, or local government records provided that the covered entity processes and transfers such information in accordance with any restrictions or terms of use placed on the information by the relevant government entity.” This last definition may receive some scrutiny, for a number of Departments of Motor Vehicles are selling the personal information of people who hold driver’s licenses, so this could prove a significant loophole that may be exploited.

COPRA creates a subset of covered data, ‘‘sensitive covered data,’’ which includes the following list, which has been shortened:

  • A government-issued identifier, such as a Social Security number, passport number, or driver’s license number.
  • Any information that describes or reveals the past, present, or future physical health, mental health, disability, or diagnosis of an individual.
  • Biometric information.
  • Precise geolocation information that reveals the past or present actual physical location of an individual or device.
  • The content or metadata of an individual’s private communications or the identity of the parties to such communications unless the covered entity is an intended recipient of the communication.
  • Information revealing an individual’s race, ethnicity, national origin, religion, or union membership in a manner inconsistent with the individual’s reasonable expectation regarding disclosure of such information.
  • Information revealing the sexual orientation or sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding disclosure of such information.
  • Information revealing online activities over time and across third-party websites or online services.
  • Calendar information, address book information, phone or text logs, photos, or videos maintained on an individual’s device.
  • A photograph, film, video recording, or other similar medium that shows the naked or undergarment-clad private area of an individual.
  • Any other covered data processed or transferred for the purpose of identifying the above data types.
  • Any other covered data that the Commission determines to be sensitive covered data through a rulemaking pursuant to [the Administrative Procedure Act].

While we will not dive into all the categories of information considered sensitive covered data, one bears mention, for it sets COPRA apart from the only major privacy bill introduced in the House this year, the “Online Privacy Act of 2019” (H.R. 4978). In COPRA, both the content and metadata of private communications are given privileged status. The same is not true in the other bill, which protects only the contents of communications, with metadata subject to lesser standards.

A final definition to note: “affirmative express consent.” Since so many of a person’s rights under COPRA are linked to the provision of “affirmative express consent,” it bears a bit of investigation. First, the bill makes clear that consent cannot be inferred from a person’s actions or inaction, or even her continued use of a covered entity’s products and services. Consequently, only affirmative actions that clearly communicate agreement in response to a specific request meeting defined criteria will qualify. Namely, this request must stand alone, describe each act or practice for which consent is being requested, be expressed in easily understood language, and explain applicable rights. Any consent short of this would violate the Act, for any subsequent processing or transfer of covered data would then be contrary to a number of requirements.

Covered entities would have a duty of loyalty. The bill is not explicit as to whom this duty is owed, but the context makes fairly clear that it is owed to the people whose covered data is collected, processed, and transferred. This duty has two parts: 1) a prohibition against engaging in deceptive or harmful data practices; and 2) a prohibition against processing or transferring covered data in any way that violates COPRA. The definition of what is deceptive is the same as for those practices currently barred as deceptive under the FTC Act, but COPRA would institute a new definition of harmful that would considerably widen the scope of the FTC’s powers to punish illegal privacy or data security practices. Specifically, harmful data practices are “the processing or transfer of covered data in a manner that causes or is likely to cause any of the following:

  • Financial, physical, or reputational injury to an individual.
  • Physical or other offensive intrusion upon the solitude or seclusion of an individual or the individual’s private affairs or concerns, where such intrusion would be offensive to a reasonable person.
  • Other substantial injury to an individual.”

Obviously, the FTC will have views on how to construe some potentially harmful data practices that will ultimately be adjudicated upon by federal courts. For example, how would one define “reputational harm”? Likewise, what constitutes “[o]ther substantial injury” given that financial, physical, reputational, and broad privacy harms are already enumerated? Quite possibly, this language was included to provide the agency and courts with the flexibility to recognize new harms yet to be seen. As for the other component of the duty of loyalty, it is simply not to violate the myriad requirements of the Act, which provides a very broad means for the FTC and state attorneys general to pursue and prosecute violations.

People would be able to request and receive a human-readable version of their covered data a covered entity holds along with the names of all third parties with whom such information has been shared and why. Covered entities must make publicly available “a privacy policy that provides a detailed and accurate representation of the entity’s data processing and data transfer activities.” This policy must include

  • each category of data the covered entity collects and the processing purposes for which such data is collected
  • whether the covered entity transfers covered data and, if so—
    • each category of service provider and third party to which the covered entity transfers covered data and the purposes for which such data is transferred to such categories; and
    • the identity of each third party to which the covered entity transfers covered data and the purposes for which such data is transferred to such third party, except for transfers to governmental entities pursuant to a court order or law that prohibits the covered entity from disclosing such transfer;
  • how long covered data processed by the covered entity will be retained by the covered entity and a description of the covered entity’s data minimization policies;
  • how individuals can exercise their individual rights; and
  • a description of the covered entity’s data security policies

This is a fairly comprehensive list of information a consumer must be provided. Unless the FTC issues regulations or guidance directing covered entities to use a uniform format or keep this disclosure to a certain length, it is possible covered entities will favor longer, denser privacy policies in order to either obfuscate or discourage reading.

And, of course, any material changes to a covered entity’s privacy policy will require obtaining affirmative express consent from users.

Another right granted by COPRA is that of deletion. Upon receiving a verified request from a person, a covered entity must delete the requested information and then inform third parties and service providers of the deletion request. However, it is not clear that the latter two entities would be bound to honor the request and actually carry out the deletion. It may be necessary for the FTC’s regulations to require such language be inserted into contracts between covered entities and their service providers and third parties.

Likewise, an individual may ask that a covered entity correct any inaccuracies in the covered data they hold and process. Again, any such request would need to be verified and again the covered entity would need to inform third parties and service providers.

The bill creates a right of data portability in that covered entities must honor verified requests from people and provide them with both human-readable and machine-readable copies of their covered data. COPRA also establishes a right to object to and opt out of transfers of covered data to third parties, and the FTC would need to conduct a rulemaking to establish the procedures one may use to exercise this right. The bill lists the features this final rule must have, including requirements for clear and conspicuous opt-out notices, easy-to-use mechanisms, and a centralization of opting out so a person will not need to repeatedly opt out of a covered entity’s transfers.

Furthermore, covered entities may neither process nor transfer a person’s sensitive covered data without “prior, affirmative express consent” and must “provide an individual with a consumer-friendly means to withdraw affirmative express consent to process the sensitive covered data of the individual.” However, covered entities do not need prior, affirmative express consent to process or transfer publicly available information. Considering that these passages are in the same section of the bill, the drafters are clearly contemplating that sensitive covered data may be available from public sources. For example, as mentioned earlier, some DMVs are selling the personal information of drivers, making available some information that would likely be considered sensitive covered data and that could then be processed and transferred without the consent of the person to whom it pertains.

Covered entities must limit their data processing and transferring to what is necessary, proportionate, and limited. This right to data minimization would task covered entities with engaging in the bare minimum “to carry out the specific processing purposes and transfers described in the privacy policy made available by the covered entity as required” unless it has affirmative express consent for other processing or transferring. This right to data minimization would be abridged by the exceptions discussed below.

Cantwell has long expressed her view that privacy legislation should include data security requirements, and so COPRA does. Covered entities must “establish, implement, and maintain reasonable data security practices to protect the confidentiality, integrity, and accessibility of covered data…appropriate to the volume and nature of the covered data at issue.” This provision spells out further requirements, including the need to conduct vulnerability assessments to turn up reasonably foreseeable threats, develop and implement a process to address any such vulnerabilities, destroy or delete any covered data that is no longer needed or for which affirmative express consent to hold has not been provided, and train the covered entity’s employees to properly handle and safeguard covered data. The FTC would need to issue training guidelines to assist covered entities, and even though this provision does not specifically task the agency with promulgating regulations, COPRA provides the FTC with a broad grant of authority to promulgate regulations under the Administrative Procedure Act.

Next, the bill turns to the civil rights granted to individuals residing in the U.S. regarding data privacy, many of which address practices the Obama Administration called digital redlining. Covered entities are barred from processing or transferring covered data on the basis of real or perceived classes, including but not limited to race, national origin, ethnicity, gender, and sexual orientation, for a variety of defined purposes. Broadly speaking, the purposes for processing and transferring covered data using protected classes pertain to differentiating opportunities for employment, education, housing, and credit on the basis of different classes. One example of a practice that would be barred: the Department of Housing and Urban Development has alleged that Facebook allowed people placing ads on the social platform to target certain racial groups and exclude others. This bar on discriminatory treatment would also apply to public accommodations writ large, meaning any services or products offered generally to the public. Consequently, covered data could not be used by covered entities to discriminate against women by, for example, offering men a lower price for the same service. Additionally, “[a] covered entity may request advice from the Commission concerning the covered entity’s potential compliance with this subsection, in accordance with the Commission’s rules of practice on advisory opinions.”

These civil rights are extended to algorithmic decision making. Covered entities using algorithmic decision making in processing or transferring covered data in the same contexts must perform impact assessments annually, keep them on file, and make them available to the FTC upon request. Presumably, the FTC could use these impact assessments as evidence, if warranted, in finding that a covered entity has violated the Act through discriminatory actions flowing from such decision making. In any event, the FTC would be required to publish a report “examining the use of algorithms” for decision making in this context within 3 years of enactment and then every 3 years thereafter.

COPRA would bar people from waiving certain of their rights under any circumstances, and others only under circumscribed circumstances. Those rights that cannot be waived are the duty of loyalty covered entities owe to people, data portability, data minimization, data security, and the various civil rights. And yet, the rights of access, transparency, deletion, and correction of inaccuracies may be waived if three circumstances are present:

  • “there exists a direct relationship between the individual and the covered entity initiated by the individual;
  • the provision of the service or product requested by the individual requires the processing or transferring of the specific covered data of the individual and the covered data is strictly necessary to provide the service or product; and
  • an individual provides affirmative express consent to such specific limitations.”

Of course, in the latter category, covered entities that believe all three conditions are met will prompt or perhaps even require people to waive those rights. And it is all but certain that covered entities will seek to expand as much as possible the concept of what “is strictly necessary to provide the service or product.” Consequently, should the provision of a service such as FaceTime require the processing and/or transfer of covered data, then Apple would need to obtain affirmative express consent, and only after an individual initiates the relationship. However, would covered entities be able to advertise or spam people with offers for their services and products in exchange for waivers? It will also undoubtedly be a point of contention as to what processing and transferring of covered data is necessary for certain services and products to be provided. Presumably, a company like Google could make the case that its provision of free email through Gmail is financed through the harvesting and sharing of data and is not viable without it. It seems to me the FTC will need to weigh in on the contours of what constitutes “strictly necessary” in terms of seeking waivers from these rights.

Of course, the exercise of a number of these rights hinges on verifying that the person making the request is who he claims to be (i.e. the rights to access, transparency, deletion, correction, and portability). Covered entities would be able to deny people the exercise of these rights if they cannot reasonably verify the identity of the requester, which seems on its face a reasonable step to avoid allowing people to make mischief with others’ data and accounts. Covered entities must request additional information to verify a person’s identity in cases of uncertainty. In any event, covered entities must minimize burdens and cannot charge for these requests.

And yet, there are circumstances that would allow covered entities to deny these requests:

  • if complying with the request would be demonstrably impossible;
  • complying with the request would prevent the covered entity from carrying out internal audits, performing accounting functions, processing refunds, or fulfilling warranty claims, provided that the covered data that is the subject of the request is not processed or transferred for any purpose other than such specific activities;
  • the request is made to correct or delete publicly available information, and then only to the extent the data is publicly available information;
  • complying with the request would impair the publication of newsworthy information of legitimate public concern to the public by a covered entity, or the processing or transfer of information by a covered entity for such purpose;
  • complying with the request would impair the privacy of another individual or the rights of another to exercise free speech; or
  • the covered entity processes or will process the data subject to the request for a specific purpose described in [provisions detailing when express affirmative consent is not needed], and complying with the request would prevent the covered entity from using such data for such specific purpose

However, covered entities may also deny these requests if they reasonably believe they would interfere with a contract between the covered entity and another individual.

COPRA also stipulates that “[t]he rights and remedies provided for in this section shall not be waived by any policy form or condition of employment, including by a predispute arbitration agreement.” Moreover, “[n]o predispute arbitration agreement shall be valid or enforceable if the agreement requires arbitration of a dispute.”

As noted earlier, covered entities may process or transfer covered data without the affirmative express consent of a person “provided that the processing or transfer is reasonably necessary, proportionate, and limited to such purpose:

  • To complete a transaction or fulfill an order or service specifically requested by an individual, such as billing, shipping, or accounting.
  • To perform system maintenance, debug systems, or repair errors to ensure the functionality of a product or service provided by the covered entity.
  • To detect or respond to a security incident, provide a secure environment, or maintain the safety of a product or service.
  • To protect against malicious, deceptive, fraudulent or illegal activity.
  • To comply with a legal obligation or the establishment, exercise, or defense of legal claims.
  • To prevent an individual from suffering harm where the covered entity believes in good faith that the individual is in danger of suffering death or serious physical injury.
  • To effectuate a product recall pursuant to Federal or State law.
  • To conduct scientific, historical, or statistical research in the public interest that adheres to all other applicable ethics and privacy laws and is approved, monitored, and governed by an institutional review board or a similar oversight entity that meets standards promulgated by [the FTC in an APA rulemaking.]

The FTC and state attorneys general will need to closely monitor the use of these exceptions by covered entities, for the inclination of regulated entities is to push the limits of legal or excepted behavior. Consequently, regulators will need to review the use of these exceptions lest one or more become the exception that swallowed the federal privacy statute.

The FTC will need to promulgate regulations “identifying privacy protective requirements for the processing of biometric information” for two of the above exceptions to the requirement for affirmative express consent: to detect or respond to a security incident, provide a secure environment, or maintain the safety of a product or service, or to protect against malicious, deceptive, fraudulent or illegal activity. This section further details the requirements of such a rulemaking.

The bill carves out “the publication of newsworthy information of legitimate public concern to the public by a covered entity, or to the processing or transfer of information by a covered entity for that purpose.”

COPRA would exempt, to a certain degree, those covered entities subject to other federal privacy and data security statutes such as the “Financial Services Modernization Act of 1999” (aka Gramm-Leach-Bliley) and the “Health Insurance Portability and Accountability Act of 1996” (HIPAA). There are provisions making clear that entities in compliance with the named federal regimes shall be deemed in compliance with the privacy and data security requirements of COPRA “with respect to data subject to the requirements of such regulations, part, title, or Act.” This would suggest that for data that falls outside those regimes (e.g. biometric data and geolocation data are not subject to Gramm-Leach-Bliley), any covered entities would need to meet the privacy and data security requirements of COPRA in addition to their existing responsibilities. The FTC must issue guidance describing the implementation of this section within one year.

COPRA would add compliance responsibilities for “large data holders,” those covered entities that process or transfer the covered data of 5 million or more individuals per year or process or transfer the sensitive covered data of 100,000 or more individuals in a year. These entities would need to annually certify compliance with the Act after a review of their internal procedures and processes for compliance. The CEO, chief privacy officer, and chief data security officer must sign this certification. This language is obviously aimed at the largest of data collectors and processors and is intended to make CEOs aware of and responsible for privacy and data security practices, so they would not be able to claim they were ignorant of problems that turn up.

However, all covered entities must designate both chief privacy and chief data security officers who “shall be responsible for, at a minimum—

  • implementing a comprehensive written data privacy program and data security program to safeguard the privacy and security of covered data throughout the life cycle of development and operational practices of the covered entity’s products or services;
  • annually conducting privacy and data security risk assessments, data hygiene, and other quality control practices; and
  • facilitating the covered entity’s ongoing compliance with this Act.”

COPRA spells out the responsibilities of service providers and third parties. Service providers may only process covered data in accordance with the wishes of the covered entity from whom it received the information or to comply with a legal obligation. Service providers may not transfer covered data “without the affirmative express consent… of the individual to whom the service provider data is linked or reasonably linkable.” Additionally, service providers must delete or de-identify covered data once they have completed their services for a covered entity. Third parties may not “process third party data for a purpose that is inconsistent with the expectations of a reasonable individual” and “may reasonably rely on representations made by the covered entity that transferred third party data regarding the expectation of a reasonable individual, provided the third party conducts reasonable due diligence on the representations of the covered entity and finds those representations to be credible.” Service providers and third parties would be exempted from some of the rights people would be given under COPRA (e.g. the right of access.)

Covered entities must exercise reasonable due diligence regarding service providers and third parties:

  • in selecting a service provider and conduct reasonable oversight of its service providers to ensure compliance with the applicable requirements of this section; and
  • in deciding to transfer covered data to a third party, and conduct oversight of third parties to which it transfers data to ensure compliance with the applicable requirements of this subsection.

The bill has provisions to protect and encourage whistleblowers who come forward to expose illegal privacy and data security practices. Additionally, the National Institute of Standards and Technology “shall publish a report regarding digital content forgeries,” an area of increasing concern for policymakers as deep fakes become more prevalent and lifelike.

With respect to enforcement, the FTC would receive broad authority to draft regulations and guidance to effectuate COPRA. The FTC and state attorneys general could bring actions under this bill. They could seek civil penalties of $42,530 per violation in the first instance and all the other relief that can currently be sought, such as equitable remedies including rescission, disgorgement, and injunctions. All of this is fairly anodyne; even Republicans have come to accept what they long resisted earlier in the decade when data security legislation was debated: letting state attorneys general onto the field and giving the FTC authority to seek fines for first offenses. However, many stakeholders may be relying on the fact that the FTC and state attorneys general are only capable of bringing so many actions, and there may well be conduct quite possibly at odds with COPRA that goes unpunished.

Additionally, the FTC must “establish a new Bureau within the Commission comparable in structure, size, organization, and authority to the existing Bureaus within the Commission related to consumer protection and competition” within two years of enactment. However, this bill does not specifically authorize extra appropriations for this purpose, instead including language authorizing those sums necessary to implement the Act. Without additional funds to set up and resource this new Bureau, this may be a hollow grant of authority that the FTC can obey only by cannibalizing its other current operations. However, an account titled the “Data Privacy and Security Relief Fund” would be established to collect all civil awards won by the FTC, primarily to make whole those consumers who were harmed by covered entities.

As noted, individuals could sue for violations in any competent federal or state court and could win the greater of actual damages or statutory damages of $100 to $1,000 per violation, plus punitive damages and attorney’s fees. This is the most expansive such right in a major privacy bill released this year and may be seen as the lynchpin of enforcement efforts: if state attorneys general and the FTC are only able to police a small set of violations, then people and their attorneys, through class actions, may be able to enforce the statute, and many companies may emphasize compliance in order to avoid a huge settlement. And yet, giving plaintiffs’ attorneys another means by which to sue corporations is anathema to Republicans. Therefore, it will be an uphill battle for any private right of action to survive in a final privacy and data security bill passed by the Senate and sent to the White House.

Further Reading (23 November)

  • “Meet The Immigrants Who Took On Amazon” – Wired. This article traces a burgeoning movement of workers at an Amazon fulfillment center in Minneapolis-St. Paul, comprised largely of Somali immigrants, to win some concessions from management. The article also traces Amazon’s view on unionizing (not surprisingly, it’s not favorable) and its employment practices. Whether the efforts of Amazon workers at this warehouse spread to other facilities remains to be seen.
  • “Child Abusers Run Rampant as Tech Companies Look the Other Way” – The New York Times. A horrific exposé on how poorly technology platforms are doing in identifying and taking down child pornography. A number of the tech companies claim security and privacy are the reasons they do not scan the pictures and videos uploaded to their networks, while law enforcement officials and other stakeholders decry a lack of will. Worse still, tech companies are not sharing technology to identify this illegal material or are not sharing proprietary methods. Moreover, end-to-end encryption is only complicating matters.
  • “‘He’s F–King Destroyed This Town’: How Mark Zuckerberg Became The Most Reviled Man In Tech” – Vanity Fair. Once widely admired among the tech community in Northern California, Facebook’s CEO is a bit less admired these days on account of the company’s bruising (some say illegal) business tactics and how its actions reflect on the larger tech world.
  • “Yes, Robots Are Stealing Your Job” – The New York Times. Andrew Yang, a candidate for the Democratic presidential nomination, shares his views on automation and why many current and future jobs may soon not be available to humans. He discusses his proposal to help those displaced by the coming wave of automation, including a universal basic income.
  • “How Facebook’s ‘Switcheroo’ plan concealed scheme to kill popular apps” – ComputerWeekly.com. An investigative journalist got his hands on thousands of pages of documents showing Facebook’s methods of dealing with competitors and potential rivals, which a former app developer alleges in a California state court violate antitrust laws. In addition to the outlets reporting on these documents, the cache of internal Facebook communications has been provided to the House Judiciary Committee for its investigation into digital markets.
  • “Microsoft vows to ‘honor’ California’s sweeping privacy law across entire US” – The Verge. Just as with the GDPR, Microsoft says it will voluntarily honor the “core” principles of the CCPA when it becomes effective.

Further Reading (15 November)

  • “The Porch Pirate of Potrero Hill Can’t Believe It Came to This” – The Atlantic. How technology intersects with and possibly exacerbates long-entrenched societal problems. A fascinating read starting with someone stealing Amazon packages in a rapidly gentrifying San Francisco neighborhood.
  • “Why Do We Tolerate Saudi Money in Tech?” – The New York Times and “Former Twitter employees charged with spying for Saudi Arabia by digging into the accounts of kingdom critics” – The Washington Post. Unsealed indictments show that agents working for the Saudi regime used Twitter to track critics of the government, and questions have been posed regarding the effect of a Saudi prince’s stake in Twitter, the second-largest bloc of shares and bigger than CEO Jack Dorsey’s. It is likely that many countries around the world will continue to seek to penetrate Twitter and other giant social media platforms to mine the information for a range of goals, not least of which will be spying on enemies.
  • “Facebook’s Rebrand Addresses Its $5 Billion FTC Settlement” – BuzzFeed News. Critics claim Facebook’s all-capitals rebrand is an attempt to forestall claims by regulators that its ownership of WhatsApp and Instagram is deceptive and to stave off attempts to split up the company.
  • “Inside the Valentine’s Day Text Message Mystery” – The New York Times. Last week, thousands of SMS messages sent on Valentine’s Day 2019 arrived on people’s phones, causing understandable confusion. The explanations from telecommunications companies as to why this happened were vague, but eventually the finger was pointed at Syniverse Technologies, a third-party messaging service that admitted the wave of messages was caused when a server that crashed on February 14 was reactivated.
  • “In the Trump era, Oracle holds tech sway” – Axios. In part because of CEO Safra Catz’s support for President Donald Trump, and in part because of its different business model, Oracle has escaped the lashing the larger technology companies have endured of late.
  • “Facebook considering limits on targeted campaign ads” – Politico. Vice-President for Global Affairs and Communications and former British Deputy Prime Minister Nick Clegg reveals that Facebook may forgo the microtargeting of users that allowed for personalized political ads in 2016, which many argue amplified the dynamics of the 2016 election and allowed disinformation to be all the more effective. Facebook’s floating of this policy change came after Google signaled it might limit political advertising and Twitter swore off paid political ads. These may be signs that the scrutiny and pressure that accompany political advertising may not be worth the revenue.
  • “Why has a privacy app used by Edward Snowden hit the NBA, NFL and NCAA?” – yahoo! sports. Signal has displaced WhatsApp as the go-to messaging app in professional North American sports for players, agents, and executives because of its reputation as the safest, most secure app available. It also helps cover potentially unethical conduct because of the setting that automatically deletes communications.

Further Reading (8 November)

  • “I Accidentally Uncovered a Nationwide Scam on Airbnb” – Vice. A writer discovered firsthand a scam many users of the short-term rental site Airbnb have experienced: a last-minute cancellation leading to a much inferior property and an interminable process for lodging complaints and obtaining a refund. Airbnb seems lax about enforcing its own policies against deceptive listings, and the incentive structure is weighted against renters leaving candid reviews.
  • “An Unidentified Government Spied On Dissidents In India Using A WhatsApp Exploit” – BuzzFeed News. Israel’s NSO Group’s spyware may have been used by India’s ruling Bharatiya Janata Party (BJP) to surveil judges, activists, academics, journalists, and politicians by exploiting a weakness in WhatsApp, a messaging application used by more than 400 million Indians. This is, of course, not the first time the NSO Group has been linked to spyware, and in this case the spyware, Pegasus, was installed on victims’ phones through a WhatsApp call they did not even need to answer. India’s Home Ministry has denied any connection, calling the reports “attempts to malign the government of India,” and the NSO Group seemed to claim that any such uses of its technology are contrary to its intended uses.
  • “Police want faster data from the US, but Australia’s encryption laws could scuttle the deal” – ABC (Australia). As the U.S. and Australia negotiate a CLOUD Act agreement that would give each country a legal process for obtaining information on citizens from technology companies as part of a law enforcement investigation, concerns and reservations are being raised in both countries about the Australian government’s powers under the “Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018,” which allows it to direct technology companies to provide assistance in decrypting user information without any judicial review.
  • “Missouri Official Admits to Tracking Women’s Periods” – The Cut. Health & Senior Services Director Dr. Randall Williams admitted during a hearing that his office maintained a spreadsheet with women’s menstrual cycles drawn from medical information the state had access to. He further admitted the database was used to track “failed abortions” as a means of investigating abortion and reproductive services clinics.
  • “Russia Tests New Disinformation Tactics in Africa to Expand Influence” – The New York Times. Facebook and Stanford’s Internet Observatory revealed vast, newly evolved Russian disinformation efforts being deployed in Africa with the goal of bringing successful tactics to the U.S. for next year’s election. For now, these tactics seem to boost Russian interests in the region and call into question American and French actions. The volume of both disinformation creation and distribution has increased several times over compared to the 2016 U.S. election. These efforts have been tied to Yevgeny Prigozhin, the Russian oligarch who runs the Internet Research Agency and is a close ally of Vladimir Putin.
  • “Gaggle Knows Everything About Teens And Kids In School” – BuzzFeed News and “School apps track students from classroom to bathroom, and parents are struggling to keep up” – The Washington Post. Two articles on the technology many public schools are employing to track kids in the physical and digital worlds, raising many questions about the long-term effects on children, their privacy, their rights, and their lives.
  • “A Chinese hacking group breached a telecom to monitor targets’ texts, phone metadata” – cyberscoop. APT41 compromised a telecommunications company in a country that is a strategic competitor of China and surveilled a range of targets, monitoring their SMS messages and call metadata.
  • “Banks are using their Washington clout to stomp on the tech industry” – Politico. As if the tech industry were not having enough trouble in Washington, the lobbies representing banks and other financial services entities have worked to block cryptocurrencies and tech’s entry into any sector of banking and finance, and they have found willing allies on Capitol Hill.

Further Reading

  • “Russian operatives sacrifice followers to stay under cover on Facebook” – Reuters. Facebook is turning the tactics Russian operatives have used to spread disinformation against them. To sow discord, the Internet Research Agency’s (IRA) operatives need to be outrageous and memorable, but doing so makes it easier for Facebook’s security team to track and take down their profiles. With the IRA changing techniques, its operatives may prove less effective.
  • “Google Accused of Creating Spy Tool to Squelch Worker Dissent” – Bloomberg. Depending on one’s perspective within Google, a new Chrome extension that automatically reports large calendar events is either a means for Google executives to monitor and squelch union organizing or merely a way to keep employees’ calendars from being jammed with events.
  • “U.S. Government Still Uses Suspect Chinese Cameras” – The Wall Street Journal. Despite bans on the purchase of Huawei, ZTE, and other Chinese products and services that went into effect in August, one security firm reports that thousands of Chinese-made cameras are still in use at federal military and civilian facilities, raising questions about the effect of the ban and how U.S. agencies are to manage existing Chinese-built information technology currently in use.
  • “House antitrust probe report likely by ‘first part’ of 2020” – Reuters. The chair of the House subcommittee running the investigation into anti-competitive practices in digital markets envisions releasing its report early next year, likely in time for the Department of Justice, the Federal Trade Commission, and numerous state attorneys general to use in their various antitrust investigations into a number of large technology companies.
  • “Online Influencers Tell You What to Buy, Advertisers Wonder Who’s Listening” – The Wall Street Journal. The market for advertising in the form of paid but not necessarily transparent celebrity endorsement of products has begun to dip. Some early adopters are now questioning the value of paying someone with thousands or millions of followers to include content in their feed, given saturation in the marketplace and consumers generally becoming wiser to and warier of such endorsements.
  • “Attorney General’s Antitrust Power Play Is Just What Trump Wants” – Bloomberg Businessweek. William Barr is uniquely versed in antitrust policy, having served as Verizon’s general counsel from 1994 through 2008 and having participated in the battles over the power of telephone and cable companies and net neutrality. His move to have the Department of Justice investigate the same tech companies the Federal Trade Commission is already examining drew criticism inside Washington but may prove favorable to his boss, President Donald Trump.
  • “Facebook takedowns show new Russian activity targeted Biden, praised Trump” – The Washington Post. The social media giant took down four disinformation campaigns from Instagram, one Russian and the other three Iranian, seeking to influence the 2020 election. A number of the disinformation efforts sought to widen schisms in the Democratic Party among a number of candidates, with a particular focus on former Vice President Joe Biden. The Russian efforts are most likely tied to Russia’s Internet Research Agency, the entity responsible for the disinformation sown during the last presidential election. The takedowns occurred two days before Facebook CEO Mark Zuckerberg appeared before the House Financial Services Committee.
  • “Cops Need a Warrant to Access Your Car’s Data, Court Rules” – Vice. The Georgia Supreme Court reversed two lower courts in finding that the Fourth Amendment bars warrantless searches of cars for the data they contain. In this case, after a car crash, a police officer downloaded the data from the airbag sensors and learned that one of the people involved was driving at twice the speed limit. The court turned aside all the state’s arguments about how this should fit into a number of Fourth Amendment exceptions allowing what would otherwise be unreasonable searches. It remains to be seen how the U.S. Supreme Court would rule on this issue, especially given its narrow 5-4 holding in the 2018 case that found warrantless searches of cell phone location records a violation of the Fourth Amendment.

Further Reading

  • “How to report on a data breach” – Columbia Journalism Review. A veteran tech journalist who has written about a number of the recent, major data breaches (Target, MySpace, Equifax, LinkedIn, eBay, JP Morgan Chase, Yahoo, and Sony) offers tips to other journalists that can serve those interested in the policy side of these issues, including how to best confirm that a hack has occurred and its extent and how to ethically confirm an email address or log-in information is part of a breach.
  • “How to Stop the Abuse of Location Data” – The New York Times. Foursquare CEO Jeff Glueck lays out the principles Congress should enshrine in legislation regulating how the location data on smartphones and other devices is used, including a fiduciary duty that would bar the use of some location data (e.g., visits to Planned Parenthood):
    • First, apps on mobile devices should not be allowed to ask for location data unless they offer the user a clear service that depends on that data.
    • Second, a new privacy law must require greater transparency around what consumers are signing up for and how their data will be used.
    • Third, a privacy law must establish the obligation and duty on those collecting location data (even with consent) to “do no harm.”
    • Fourth, all location companies should be required to protect consumer data with appropriate security steps, and blur or minimize data sharing in ways that enhance privacy.
  • “My Family Story of Love, the Mob, and Government Surveillance” – The Atlantic. Former Assistant Attorney General and Harvard Law School Professor Jack Goldsmith makes amends with his stepfather, Chuckie O’Brien, an intimate of Teamsters head Jimmy Hoffa, by tracing the history of the U.S. government disregarding constitutional and statutory constraints on surveillance in the name of fighting national security and criminal threats. Once surveillance abuses come to light, Congress institutes new limits while legalizing some of the previously illegal practices. In the name of national security, a future administration violates these limits, and the cycle begins anew. Goldsmith’s reluctant conclusion is “The executive branch does what it thinks it must, including conduct robust surveillance, to meet our demands for safety. The technology of surveillance races ahead of the law of surveillance, which tries to catch up in spurts, and often does an admirable job of curtailing old abuses. But the law cannot eliminate ever-growing threats, and security is elemental.”
  • “California blocks police from using facial recognition in body cameras” – San Francisco Chronicle. California Governor Gavin Newsom signed A.B. 1215, which will bar police departments from using body cameras that utilize facial recognition or biometric information for three years. The bill’s primary sponsor was motivated to act once Amazon’s facial recognition technology, Rekognition, incorrectly identified 26 members of the California legislature as criminal suspects. California is the third state after Oregon and New Hampshire to ban this technology for police departments, and Oakland and San Francisco already bar this practice.
  • “Is Amazon Unstoppable?” – The New Yorker. The magazine takes a very long, very deep look at the online retailer, its culture, its impact, its labor practices, and its CEO. The upshot is that Amazon is poised to fight tooth and nail against tighter regulation at the federal and state levels, despite historic tides that may be running against it if previous patterns of American capitalism repeat.
  • “Jeff Bezos’s Master Plan” – The Atlantic. A deeper look at Jeff Bezos and Amazon.
  • “Exclusive: U.S. carried out secret cyber strike on Iran in wake of Saudi oil attack: officials” – Reuters. U.S. officials leaked word of at least the third cyber attack on Iran in response to provocation. In this case, the September attack on a Saudi oil facility prompted an attack on Iran’s propaganda apparatus.
  • “Accused Capital One hacker had as much as 30 terabytes of stolen data, feds say” – cyberscoop. The hacker who stole the identity information of millions may have also penetrated other entities, often by probing firewalls for weaknesses that would give her access to data stored in the cloud.