Further Reading, Other Developments, and Coming Events (22 October)

Further Reading

  • “A deepfake porn Telegram bot is being used to abuse thousands of women” By Matt Burgess — WIRED UK. A bot set loose on Telegram can take pictures of women, and apparently teenagers too, and “take off” their clothing, rendering fake nude images of people who never took naked pictures. This seems to be the next iteration in deepfake porn, a problem that will surely get worse until governments legislate against it and technology companies have incentives to locate and take down such material.
  • “The Facebook-Twitter-Trump Wars Are Actually About Something Else” By Charlie Warzel — The New York Times. This piece makes the case that there are no easy fixes for American democracy or for misinformation on social media platforms.
  • “Facebook says it rejected 2.2m ads for breaking political campaigning rules” — Agence France-Presse. Facebook’s Vice President of Global Affairs and Communications Nick Clegg said the social media giant is employing artificial intelligence and humans to find and remove political advertisements that violate policy in order to avoid a repeat of 2016, when disinformation and misinformation played roles in both Brexit and the election of Donald Trump as President of the United States.
  • “Huawei Fallout—Game-Changing New China Threat Strikes At Apple And Samsung” By Zak Doffman — Forbes. Smartphone manufacturers from the People’s Republic of China (PRC) appear ready to step into the projected void caused by the United States (U.S.) cutting off Huawei’s access to chips. Xiaomi and Oppo have already seen sales surge worldwide and are poised to pick up where Huawei is being forced to leave off, perhaps demonstrating the limits of U.S. power to blunt the rise of PRC technology companies.
  • “As Local News Dies, a Pay-for-Play Network Rises in Its Place” By Davey Alba and Jack Nicas — The New York Times. With the decline and demise of many local media outlets in the United States, new groups are stepping into the void, and some are politically minded but not transparent about their biases. The organization uncovered in this article is nakedly Republican and is running and planting articles at both legitimate and artificial news sites for pay. Sometimes conservative donors pay, sometimes campaigns do. Democrats are engaged in the same activity but apparently to a lesser extent. These sorts of activities will only further erode faith in the U.S. media.
  • “Forget Antitrust Laws. To Limit Tech, Some Say a New Regulator Is Needed.” By Steve Lohr — The New York Times. This piece argues that antitrust enforcement actions are plodding, tending to take years to finish. Consequently, this body of law is inadequate to the task of addressing the market dominance of big technology companies. Instead, a new regulatory body is needed along the lines of those regulating the financial services industry, one more nimble than antitrust enforcement. Given that industry’s own regulatory problems, this may not be the best model.
  • “‘Do Not Track’ Is Back, and This Time It Might Work” By Gilad Edelman — WIRED. Looking to utilize the “California Consumer Privacy Act” (CCPA) (AB 375) requirement that regulated entities respect and effectuate a one-time opt-out mechanism, a group of organizations has come together to build and roll out the Global Privacy Control. In theory, users could install this technical specification on their phones and computers, set it once, and all websites would then be on notice of that person’s privacy preferences. Such a mechanism would address the problem identified in Consumer Reports’ recent study on the difficulty of trying to opt out of having one’s personal information sold.
  • “EU countries sound alarm about growing anti-5G movement” By Laurens Cerulus — Politico. Fifteen European Union (EU) nations wrote the European Commission (EC) warning that the nascent anti-5G movement, borne of conspiracy thinking and misinformation, threatens the EU’s position vis-à-vis the United States (U.S.) and the People’s Republic of China (PRC). There have been more than 200 documented arson attacks in the EU, with most occurring in the United Kingdom, France, and the Netherlands. These nations called for a more forceful debunking of the lies and misinformation being spread about 5G.
  • “Security firms call Microsoft’s effort to disrupt botnet to protect against election interference ineffective” By Jay Greene — The Washington Post. Microsoft seemingly acted alongside the United States (U.S.) Cyber Command to take down and impair the operation of Trickbot, but now cybersecurity experts are questioning how effective Microsoft’s efforts really were. Researchers have shown the Russian-operated Trickbot has already stood its operations back up and dispersed across servers around the world, showing how difficult it is to address some cyber threats.
  • “Governments around the globe find ways to abuse Facebook” By Sara Fischer and Ashley Gold — Axios. This piece puts a different spin than the recent BuzzFeed News article on the challenges Facebook faces in countries around the world, especially those whose governments ruthlessly use the platform to spread lies and misinformation. The new article paints Facebook as a well-meaning company being taken advantage of, while the other portrays a company callous about content moderation except in places where lapses cause it political problems, such as the United States, the European Union, and other western democracies.
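The Global Privacy Control mentioned above is, at bottom, a simple HTTP signal. A minimal server-side sketch of honoring it might look like the following (the `Sec-GPC: 1` request header comes from the draft GPC specification; the handler shape and function name here are hypothetical):

```python
def honors_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control opt-out.

    Per the draft GPC specification, a user agent expressing the signal
    sends the request header "Sec-GPC: 1". A site regulated under the
    CCPA would treat this as a do-not-sell opt-out.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# Example: a request carrying the signal vs. one without it.
opted_out = honors_opt_out({"Sec-GPC": "1", "User-Agent": "example"})
no_signal = honors_opt_out({"User-Agent": "example"})
```

In a real deployment this check would sit in web-framework middleware and gate any downstream sale or sharing of the visitor’s personal information.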

Other Developments

  • The United States (U.S.) Department of Justice’s (DOJ) Cyber-Digital Task Force (Task Force) issued “Cryptocurrency: An Enforcement Framework,” which “provides a comprehensive overview of the emerging threats and enforcement challenges associated with the increasing prevalence and use of cryptocurrency; details the important relationships that the Department of Justice has built with regulatory and enforcement partners both within the United States government and around the world; and outlines the Department’s response strategies.” The Task Force noted “[t]his document does not contain any new binding legal requirements not otherwise already imposed by statute or regulation.” The Task Force summarized the report:
    • [I]n Part I, the Framework provides a detailed threat overview, cataloging the three categories into which most illicit uses of cryptocurrency typically fall: (1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; and (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself. 
    • Part II explores the various legal and regulatory tools at the government’s disposal to confront the threats posed by cryptocurrency’s illicit uses, and highlights the strong and growing partnership between the Department of Justice and the Securities and Exchange Commission, the Commodity Futures Trading Commission, and agencies within the Department of the Treasury, among others, to enforce federal law in the cryptocurrency space.
    • Finally, the Enforcement Framework concludes in Part III with a discussion of the ongoing challenges the government faces in cryptocurrency enforcement—particularly with respect to business models (employed by certain cryptocurrency exchanges, platforms, kiosks, and casinos), and to activity (like “mixing” and “tumbling,” “chain hopping,” and certain instances of jurisdictional arbitrage) that may facilitate criminal activity.    
  • The White House’s Office of Science and Technology Policy (OSTP) has launched a new website for the United States’ (U.S.) quantum initiative and released a report titled “Quantum Frontiers: Report On Community Input To The Nation’s Strategy For Quantum Information Science.” The Quantum Initiative flows from the “National Quantum Initiative Act” (P.L. 115-368) “to provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” The OSTP explained that the report “outlines eight frontiers that contain core problems with fundamental questions confronting quantum information science (QIS) today:
    • Expanding Opportunities for Quantum Technologies to Benefit Society
    • Building the Discipline of Quantum Engineering
    • Targeting Materials Science for Quantum Technologies
    • Exploring Quantum Mechanics through Quantum Simulations
    • Harnessing Quantum Information Technology for Precision Measurements
    • Generating and Distributing Quantum Entanglement for New Applications
    • Characterizing and Mitigating Quantum Errors
    • Understanding the Universe through Quantum Information
    • OSTP asserted “[t]hese frontier areas, identified by the QIS research community, are priorities for the government, private sector, and academia to explore in order to drive breakthrough R&D.”
  • The New York Department of Financial Services (NYDFS) published its report on the July 2020 Twitter hack, during which a team of hackers took over a number of high-profile accounts (e.g. Barack Obama, Kim Kardashian West, Jeff Bezos, and Elon Musk) in order to perpetrate a cryptocurrency scam. The NYDFS has jurisdiction over cryptocurrencies and the companies dealing in them in New York. The NYDFS found that the hackers used the most basic of means to gain the access needed to take over accounts. The NYDFS explained:
    • Given that Twitter is a publicly traded, $37 billion technology company, it was surprising how easily the Hackers were able to penetrate Twitter’s network and gain access to internal tools allowing them to take over any Twitter user’s account. Indeed, the Hackers used basic techniques more akin to those of a traditional scam artist: phone calls where they pretended to be from Twitter’s Information Technology department. The extraordinary access the Hackers obtained with this simple technique underscores Twitter’s cybersecurity vulnerability and the potential for devastating consequences. Notably, the Twitter Hack did not involve any of the high-tech or sophisticated techniques often used in cyberattacks–no malware, no exploits, and no backdoors.
    • The implications of the Twitter Hack extend far beyond this garden-variety fraud. There are well-documented examples of social media being used to manipulate markets and interfere with elections, often with the simple use of a single compromised account or a group of fake accounts. In the hands of a dangerous adversary, the same access obtained by the Hackers–the ability to take control of any Twitter user’s account–could cause even greater harm.
    • The Twitter Hack demonstrates the need for strong cybersecurity to curb the potential weaponization of major social media companies. But our public institutions have not caught up to the new challenges posed by social media. While policymakers focus on antitrust and content moderation problems with large social media companies, their cybersecurity is also critical. In other industries that are deemed critical infrastructure, such as telecommunications, utilities, and finance, we have established regulators and regulations to ensure that the public interest is protected. With respect to cybersecurity, that is what is needed for large, systemically important social media companies.
    • The NYDFS recommended cybersecurity measures that cryptocurrency companies in New York should implement to avoid similar hacks, pointing to its own cybersecurity regulations that already bind its regulated entities in New York. The NYDFS also called for a national regulator to address the lack of a dedicated regulator for Twitter and other massive social media platforms. The NYDFS asserted:
      • Social media companies currently have no dedicated regulator. They are subject to the same general oversight applicable to other companies. For instance, the SEC’s regulations for all public companies apply to public social media companies, and antitrust and related laws and regulations enforced by the Department of Justice and the FTC apply to social media companies as they do to all companies. Social media companies are also subject to generally applicable laws, such as the California Consumer Privacy Act and the New York SHIELD Act. The European Union’s General Data Protection Regulation, which regulates the storage and use of personal data, also applies to social media entities doing business in Europe.
      • But there are no regulators that have the authority to uniformly regulate social media platforms that operate over the internet, and to address the cybersecurity concerns identified in this Report. That regulatory vacuum must be filled.
      • A useful starting point is to create a “systemically important” designation for large social media companies, like the designation for critically important bank and non-bank financial institutions. In the wake of the 2007-08 financial crisis, Congress established a new regulatory framework for financial institutions that posed a systemic threat to the financial system of the United States. An institution could be designated as a Systemically Important Financial Institution (“SIFI”) “where the failure of or a disruption to the functioning of a financial market utility or the conduct of a payment, clearing, or settlement activity could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States.”
      • The risks posed by social media to our consumers, economy, and democracy are no less grave than the risks posed by large financial institutions. The scale and reach of these companies, combined with the ability of adversarial actors who can manipulate these systems, require a similarly bold and assertive regulatory approach.
      • The designation of an institution as a SIFI is made by the Financial Stability Oversight Council (“FSOC”), which Congress established to “identify risks to the financial stability of the United States” and to provide enhanced supervision of SIFIs. The FSOC also “monitors regulatory gaps and overlaps to identify emerging sources of systemic risk.” In determining whether a financial institution is systemically important, the FSOC considers numerous factors including: the effect that a failure or disruption to an institution would have on financial markets and the broader financial system; the nature of the institution’s transactions and relationships; the nature, concentration, interconnectedness, and mix of the institution’s activities; and the degree to which the institution is regulated.
      • An analogue to the FSOC should be established to identify systemically important social media companies. This new Oversight Council should evaluate the reach and impact of social media companies, as well as the society-wide consequences of a social media platform’s misuse, to determine which companies they should designate as systemically important. Once designated, those companies should be subject to enhanced regulation, such as through the provision of “stress tests” to evaluate the social media companies’ susceptibility to key threats, including cyberattacks and election interference.
      • Finally, the success of such oversight will depend on the establishment of an expert agency to oversee designated social media companies. Systemically important financial companies designated by the FSOC are overseen by the Federal Reserve Board, which has a long-established and deep expertise in banking and financial market stability. A regulator for systemically important social media would likewise need deep expertise in areas such as technology, cybersecurity, and disinformation. This expert regulator could take various forms; it could be a completely new agency or could reside within an established agency or at an existing regulator.
  • The Government Accountability Office (GAO) evaluated how well the Trump Administration has been implementing the “Open, Public, Electronic and Necessary Government Data Act of 2018” (OPEN Government Data Act) (P.L. 115-435). As the GAO explained, this statute “requires federal agencies to publish their information as open data using standardized, nonproprietary formats, making data available to the public open by default, unless otherwise exempt…[and] codifies and expands on existing federal open data policy including the Office of Management and Budget’s (OMB) memorandum M-13-13 (M-13-13), Open Data Policy—Managing Information as an Asset.”
    • The GAO stated
      • To continue moving forward with open government data, the issuance of OMB implementation guidance should help agencies develop comprehensive inventories of their data assets, prioritize data assets for publication, and decide which data assets should or should not be made available to the public.
      • Implementation of this statutory requirement is critical to agencies’ full implementation and compliance with the act. In the absence of this guidance, agencies, particularly agencies that have not previously been subject to open data policies, could fall behind in meeting their statutory timeline for implementing comprehensive data inventories.
      • It is also important for OMB to meet its statutory responsibility to biennially report on agencies’ performance and compliance with the OPEN Government Data Act and to coordinate with General Services Administration (GSA) to improve the quality and availability of agency performance data that could inform this reporting. Access to this information could inform Congress and the public on agencies’ progress in opening their data and complying with statutory requirements. This information could also help agencies assess their progress and improve compliance with the act.
    • The GAO made three recommendations:
      • The Director of OMB should comply with its statutory requirement to issue implementation guidance to agencies to develop and maintain comprehensive data inventories. (Recommendation 1)
      • The Director of OMB should comply with the statutory requirement to electronically publish a report on agencies’ performance and compliance with the OPEN Government Data Act. (Recommendation 2)
      • The Director of OMB, in collaboration with the Administrator of GSA, should establish policy to ensure the routine identification and correction of errors in electronically published performance information. (Recommendation 3)
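The OPEN Government Data Act’s requirement to publish information “using standardized, nonproprietary formats” is typically met with a machine-readable data inventory. The sketch below builds one entry of such an inventory; the field names follow the federal open data metadata schema used for agency data.json files, but the dataset itself and the helper function are hypothetical:

```python
import json

def make_inventory_entry(title, description, download_url, media_type,
                         public=True):
    """Build one hypothetical entry for an agency data inventory.

    Field names follow the federal data.json metadata schema; data
    assets are open ("public") by default unless otherwise exempt,
    as the OPEN Government Data Act requires.
    """
    return {
        "title": title,
        "description": description,
        "accessLevel": "public" if public else "restricted public",
        "distribution": [
            # Nonproprietary format: CSV over HTTPS, not a closed format.
            {"downloadURL": download_url, "mediaType": media_type}
        ],
    }

entry = make_inventory_entry(
    "Example Grant Awards",                       # hypothetical dataset
    "Awards issued under the example program.",
    "https://agency.example.gov/grants.csv",      # hypothetical URL
    "text/csv",
)
print(json.dumps(entry, indent=2))
```

An agency’s comprehensive data inventory would be a list of many such entries published at a well-known location for harvesting.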
  • The United States’ (U.S.) National Security Agency (NSA) issued a cybersecurity advisory titled “Chinese State-Sponsored Actors Exploit Publicly Known Vulnerabilities,” that “provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.” The NSA recommended a number of mitigations generally for U.S. entities, including:
    • Keep systems and products updated and patched as soon as possible after patches are released.
    • Expect that data stolen or modified (including credentials, accounts, and software) before the device was patched will not be alleviated by patching, making password changes and reviews of accounts a good practice.
    • Disable external management capabilities and set up an out-of-band management network.
    • Block obsolete or unused protocols at the network edge and disable them in device configurations.
    • Isolate Internet-facing services in a network Demilitarized Zone (DMZ) to reduce the exposure of the internal network.
    • Enable robust logging of Internet-facing services and monitor the logs for signs of compromise.
    • The NSA then proceeded to recommend specific fixes.
    • The NSA provided this policy backdrop:
      • One of the greatest threats to U.S. National Security Systems (NSS), the U.S. Defense Industrial Base (DIB), and Department of Defense (DOD) information networks is Chinese state-sponsored malicious cyber activity. These networks often undergo a full array of tactics and techniques used by Chinese state-sponsored cyber actors to exploit computer networks of interest that hold sensitive intellectual property, economic, political, and military information. Since these techniques include exploitation of publicly known vulnerabilities, it is critical that network defenders prioritize patching and mitigation efforts.
      • The same process for planning the exploitation of a computer network by any sophisticated cyber actor is used by Chinese state-sponsored hackers. They often first identify a target, gather technical information on the target, identify any vulnerabilities associated with the target, develop or re-use an exploit for those vulnerabilities, and then launch their exploitation operation.
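The NSA’s first mitigation, patching known-exploited CVEs as soon as fixes are released, is in practice an inventory-comparison exercise. A minimal sketch (the product names, versions, and the `MIN_PATCHED` table below are hypothetical, not taken from the advisory):

```python
# Hypothetical mapping of products to the minimum version that patches
# the advisory's CVEs. Real data would come from vendor advisories.
MIN_PATCHED = {
    "vpn-gateway": (9, 1, 2),
    "mail-server": (15, 0, 0),
}

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '9.1.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_patch(product: str, deployed: str) -> bool:
    """Flag a deployed product older than the minimum patched version."""
    required = MIN_PATCHED.get(product)
    if required is None:
        return False  # product not covered by the advisory
    return parse_version(deployed) < required

# Example inventory scan: flags vpn-gateway 9.1.0 as unpatched.
inventory = {"vpn-gateway": "9.1.0", "mail-server": "15.2.1"}
for product, version in inventory.items():
    if needs_patch(product, version):
        print(f"PATCH NOW: {product} {version}")
```

Note the advisory’s second point still applies: patching does not undo a prior compromise, so flagged systems also warrant credential resets and account reviews.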
  • Belgium’s data protection authority (DPA) (Autorité de protection des données in French or Gegevensbeschermingsautoriteit in Dutch) (APD-GBA) has reportedly found that the Transparency & Consent Framework (TCF) developed by the Interactive Advertising Bureau (IAB) violates the General Data Protection Regulation (GDPR). The Real-Time Bidding (RTB) system used for online behavioral advertising allegedly transmits the personal information of European Union residents without their consent even before a popup appears on their screen asking for consent. The APD-GBA is the lead DPA in the EU in investigating the RTB and will likely now circulate its findings and recommendations to other EU DPAs before any enforcement commences.
  • None Of Your Business (noyb) announced “[t]he Irish High Court has granted leave for a “Judicial Review” against the Irish Data Protection Commission (DPC) today…[and] [t]he legal action by noyb aims to swiftly implement the [Court of Justice of the European Union (CJEU)] Decision prohibiting Facebook’s” transfer of personal data from the European Union to the United States (U.S.). Last month, after the DPC directed Facebook to stop transferring the personal data of EU citizens to the U.S., the company filed suit in the Irish High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge.
    • noyb further asserted:
      • Instead of making a decision in the pending procedure, the DPC has started a second, new investigation into the same subject matter (“Parallel Procedure”), as widely reported (see original reporting by the WSJ). No logical reason for the Parallel Procedure was given, but the DPC has maintained that Mr Schrems will not be heard in this second case, as he is not a party in this Parallel Procedure. This Parallel Procedure was criticised by Facebook publicly (link) and instantly blocked by a Judicial Review by Facebook (see report by Reuters).
      • Today’s Judicial Review by noyb is in many ways the counterpart to Facebook’s Judicial Review: While Facebook wants to block the second procedure by the DPC, noyb wants to move the original complaints procedure towards a decision.
      • Earlier this summer, the CJEU struck down the adequacy decision for the agreement between the EU and U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e. the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCCs. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • Most likely with the aim of helping British chances for an adequacy decision from the European Union (EU), the United Kingdom’s Information Commissioner’s Office (ICO) published guidance that “discusses the right of access [under the General Data Protection Regulation (GDPR)] in detail.” The ICO explained the guidance “is aimed at data protection officers (DPOs) and those with specific data protection responsibilities in larger organisations…[but] does not specifically cover the right of access under Parts 3 and 4 of the Data Protection Act 2018.”
    • The ICO explained
      • The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy of their personal data from you, as well as other supplementary information.
  • The Government Accountability Office (GAO) released the report on the data security and data privacy practices of public schools that the House Education and Labor Committee’s Ranking Member requested. Representative Virginia Foxx (R-NC) asked the GAO “to review the security of K-12 students’ data. This report examines (1) what is known about recently reported K-12 cybersecurity incidents that compromised student data, and (2) the characteristics of school districts that experienced these incidents.” Strangely, the report did not have GAO’s customary conclusions or recommendations. Nonetheless, the GAO found:
    • Ninety-nine student data breaches reported from July 1, 2016 through May 5, 2020 compromised the data of students in 287 school districts across the country, according to our analysis of K-12 Cybersecurity Resource Center (CRC) data (see fig. 3). Some breaches involved a single school district, while others involved multiple districts. For example, an attack on a vendor system in the 2019-2020 school year affected 135 districts. While information about the number of students affected was not available for every reported breach, examples show that some breaches affected thousands of students, for instance, when a cybercriminal accessed 14,000 current and former students’ personally identifiable information (PII) in one district.
    • The 99 reported student data breaches likely understate the number of breaches that occurred, for different reasons. Reported incidents sometimes do not include sufficient information to discern whether data were breached. We identified 15 additional incidents in our analysis of CRC data in which student data might have been compromised, but the available information was not definitive. In addition, breaches can go undetected for some time. In one example, the personal information of hundreds of thousands of current and former students in one district was publicly posted for 2 years before the breach was discovered.
    • The CRC identified 28 incidents involving videoconferences from April 1, 2020 through May 5, 2020, some of which disrupted learning and exposed students to harm. In one incident, 50 elementary school students were exposed to pornography during a virtual class. In another incident in a different district, high school students were targeted with hate speech during a class, resulting in the cancellation that day of all classes using the videoconferencing software. These incidents also raise concerns about the potential for violating students’ privacy. For example, one district is reported to have instructed teachers to record their class sessions. Teachers said that students’ full names were visible to anyone viewing the recording.
    • The GAO found gaps in the protection and enforcement of student privacy by the United States government:
      • [The Department of] Education is responsible for enforcing Family Educational Rights and Privacy Act (FERPA), which addresses the privacy of PII in student education records and applies to all schools that receive funds under an applicable program administered by Education. If parents or eligible students believe that their rights under FERPA have been violated, they may file a formal complaint with Education. In response, Education is required to take appropriate actions to enforce and deal with violations of FERPA. However, because the department’s authority under FERPA is directly related to the privacy of education records, Education’s security role is limited to incidents involving potential violations under FERPA. Further, FERPA amendments have not directly addressed educational technology use.
      • The “Children’s Online Privacy Protection Act” (COPPA) requires the Federal Trade Commission (FTC) to issue and enforce regulations concerning children’s privacy. The COPPA Rule, which took effect in 2000 and was later amended in 2013, requires operators of covered websites or online services that collect personal information from children under age 13 to provide notice and obtain parental consent, among other things. COPPA generally applies to the vendors who provide educational technology, rather than to schools directly. However, according to FTC guidance, schools can consent on behalf of parents to the collection of students’ personal information if such information is used for a school-authorized educational purpose and for no other commercial purpose.
  • Upturn, an advocacy organization that “advances equity and justice in the design, governance, and use of technology,” has released a report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. There have long been means and vendors available in the U.S. and abroad for breaking into phones, despite the claims of a number of nations, like the Five Eyes (U.S., the United Kingdom, Australia, Canada, and New Zealand), that default end-to-end encryption is a growing problem that allows those preying on children or engaged in terrorism to go undetected. In terms of possible bias, Upturn “is supported by the Ford Foundation, the Open Society Foundations, the John D. and Catherine T. MacArthur Foundation, Luminate, the Patrick J. McGovern Foundation, and Democracy Fund.”
    • Upturn stated:
      • Every day, law enforcement agencies across the country search thousands of cellphones, typically incident to arrest. To search phones, law enforcement agencies use mobile device forensic tools (MDFTs), a powerful technology that allows police to extract a full copy of data from a cellphone — all emails, texts, photos, location, app data, and more — which can then be programmatically searched. As one expert puts it, with the amount of sensitive information stored on smartphones today, the tools provide a “window into the soul.”
      • This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.
    • Upturn argued:
      • Law enforcement use these tools to investigate not only cases involving major harm, but also for graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. Given how routine these searches are today, together with racist policing policies and practices, it’s more than likely that these technologies disparately affect and are used against communities of color.
      • We believe that MDFTs are simply too powerful in the hands of law enforcement and should not be used. But recognizing that MDFTs are already in widespread use across the country, we offer a set of preliminary recommendations that we believe can, in the short-term, help reduce the use of MDFTs. These include:
        • banning the use of consent searches of mobile devices,
        • abolishing the plain view exception for digital searches,
        • requiring easy-to-understand audit logs,
        • enacting robust data deletion and sealing requirements, and
        • requiring clear public logging of law enforcement use.

Coming Events

  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including that of Nathan Simington to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Mehmet Turgut Kirkgoz from Pixabay

Another Federal Privacy Bill

Senate Commerce Republicans revise and release privacy bill that does not budge on main issues setting them apart from their Democratic colleagues.

Last week, in advance of tomorrow’s hearing on privacy legislation, the chair and key Republicans released a revised version of draft legislation released last year to mark their position on what United States (U.S.) federal privacy regulation should be. Notably, last year’s draft and the updated version would still preempt state laws like the “California Consumer Privacy Act” (CCPA) (AB 375), and people in the U.S. would not be given the right to sue entities that violate the privacy law. These two issues continue to be the main battle lines between Democratic and Republican bills to establish a U.S. privacy law. Given the rapidly dwindling days left in the 116th Congress and the possibility of a Democratic White House and Senate next year, this may be both a last gasp effort to get a bill out of the Senate and to lay down a marker for next year.

The “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) was introduced by Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Senate Majority Whip and Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), Transportation and Safety Subcommittee Chair Deb Fischer (R-NE), and Senator Marsha Blackburn (R-TN). However, a notable Republican stakeholder is not a cosponsor: Consumer Protection Subcommittee Chair Jerry Moran (R-KS), who introduced his own bill, the “Consumer Data Privacy and Security Act of 2020” (S.3456) (See here for analysis).

As mentioned, Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (See here for analysis), in November 2019, shortly after the Ranking Member on the committee, Senator Maria Cantwell (D-WA), and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act” (COPRA) (S.2968) (See here for more analysis). Here’s how I summarized the differences at the time: in the main, CDPA shares the same framework with COPRA with some key, significant differences, including:

  • COPRA expands the FTC’s jurisdiction in policing privacy harms whereas CDPA would not
  • COPRA places a duty of loyalty on covered entities to people whose covered data they process or transfer; CDPA does not have any such duty
  • CDPA does not allow people to sue if covered entities violate the new federal privacy and security regime; COPRA would allow such suits to move forward
  • CDPA preempts state privacy and data security laws; COPRA would establish a federal floor that states like California would be able to legislate on top of
  • CDPA would take effect two years after enactment, and COPRA would take effect six months after enactment.
  • The bar against a person waiving her privacy rights under COPRA is much broader than CDPA
  • COPRA would empower the FTC to punish discriminatory data processing and transfers; CDPA would require the FTC to refer these offenses to the appropriate federal and state agencies
  • CDPA revives a concept from the Obama Administration’s 2015 data privacy bill by allowing organizations and entities to create standards or codes of conduct and allowing those entities in compliance with these standards or codes to be deemed in compliance with CDPA subject to FTC oversight; COPRA does not have any such provision
  • COPRA would require covered entities to conduct algorithmic decision-making impact assessments to determine if these processes are fair, accurate, and unbiased; no such requirement is found in CDPA

As a threshold matter, the SAFE DATA Act is the latest in a line of enhanced notice-and-consent bills founded on the logic that if people were informed and able to make choices about how and when their data are used, then the U.S. would have an ideal data and privacy ecosystem. This view, perhaps coincidentally, dovetails with Republican views on other issues where people should merely be given information and the power to choose, with any bad outcomes being the responsibility of those who made poor choices. It runs counter to the view that privacy and data security are akin to environmental or pollution problems, that is, beyond the ability of any one person to manage or realistically change.

Turning to the bill before us, we see that while covered entities may not outright deny services and products to people who choose to exercise the rights granted under the bill vis-à-vis their covered data, a covered entity may charge different prices. Predictably, only those who can afford it, or who are passionately committed to their privacy, would be able to pay for more privacy. And yet, the rights established by the bill for people to exercise some control over their private information cannot be waived, forestalling the possibility that some covered entities would make such a waiver a term of service, as many companies do with a person’s right to sue.

Covered entities must publish privacy policies before or at the point of data collection, including:

  • The identity of the entity in charge of processing and using the covered data
  • The categories of covered data collected and the processing purposes of each category
  • Whether transfers of covered data occur, the categories of those receiving such data, and the purposes for which transfers occur
  • The entity’s data retention and data security policies generally; and
  • How individuals may exercise their rights.

Any material changes mean new privacy policies provided to people and consent again must be obtained before collection and processing may resume.

There is, however, language not seen in other privacy bills: “[w]here the ownership of an individual’s device is transferred directly from one individual to another individual, a covered entity may satisfy its obligation to disclose a privacy policy prior to or at the point of collection of covered data by making the privacy policy available under (a)(2)” (i.e., by posting on the entity’s website). So, if I give an old phone to a friend, a covered entity may merrily continue collecting and processing data because I consented, and my friend’s consent is immaterial. Admittedly, this would seem to be a subset of all devices used in the U.S., but it does not seem a stretch to require covered entities to obtain consent anew if they determine a different person has taken over a device. After all, they will almost certainly be able to discern the identity of the new user and that the device is now being used by someone new.

Section 103 of the SAFE DATA Act establishes a U.S. resident’s rights to access, correct, delete, and port covered data. People would be able to access their covered data and correct “material” inaccuracies or incomplete information at least twice a year at no cost provided the covered entity can verify their identity. Included with the right to access would be provision of the categories of third parties to whom covered data has been transferred and a list of the categories of purposes. There is a long list of reasons why covered entities would not need to comply, including but not limited to:

  • If the covered entity must “retain any covered data for the sole purpose of fulfilling the request;”
  • If it would “be impossible or demonstrably impracticable to comply with;”
  • If a request would “require the covered entity to combine, relink, or otherwise reidentify covered data that has been deidentified;”
  • If it would “result in the release of trade secrets, or other proprietary or confidential data or business practices;”
  • If it would “interfere with law enforcement, judicial proceedings, investigations, or reasonable efforts to guard against, detect, or investigate malicious or unlawful activity, or enforce contracts;”
  • If it would “require disproportionate effort, taking into consideration available technology, or would not be reasonably feasible on technical grounds;”
  • If it would “compromise the privacy, security, or other rights of the covered data of another individual;”
  • If it would “be excessive or abusive to another individual;” or
  • If it would “violate Federal or State law or the rights and freedoms of another individual, including under the Constitution of the United States.”

This extensive list will give companies not interested in complying plenty of reasons to proffer as to why they will not provide access or corrections. Nonetheless, the FTC would need to draft and implement regulations “establishing requirements for covered entities with respect to the verification of requests to exercise rights” to access and correct. Perhaps the agency will be able to address some foreseeable problems with the statute as written.

Explicit consent is needed before a covered entity may transfer or process the “sensitive covered data” of a person. The first gloss on this right is that a person’s consent is not needed to collect, process, and transfer the “covered data” of a person. Elsewhere in the section, it is clear that one has a limited opt out right: “a covered entity shall provide an individual with the ability to opt out of the collection, processing, or transfer of such individual’s covered data before such collection, processing, or transfer occurs.”

Nonetheless, a bit of a detour back into the definitions section of the bill is in order to understand which types of data lie on which side of the consent line. “Covered data” are “information that identifies or is linked or reasonably linkable to an individual or a device that is linked or reasonably linkable to an individual” except for publicly available data, employment data, aggregated data, and de-identified data. Parenthetically, I would note the latter two exceptions would seem to be incentives for companies to hold personal information in the aggregate or in a de-identified state as much as possible so as to avoid triggering the requirements of the SAFE DATA Act.

“Sensitive covered data” would be any of the following (and, my apologies, the list is long):

  • A unique, government-issued identifier, such as a Social Security number, passport number, or driver’s license number, that is not required to be displayed to the public.
  • Any covered data that describes or reveals the diagnosis or treatment of the past, present, or future physical health, mental health, or disability of an individual.
  • A financial account number, debit card number, credit card number, or any required security or access code, password, or credentials allowing access to any such account.
  • Covered data that is biometric information.
  • A persistent identifier.
  • Precise geolocation information (defined elsewhere as anything within 1750 feet)
  • The contents of an individual’s private communications, such as emails, texts, direct messages, or mail, or the identity of the parties subject to such communications, unless the covered entity is the intended recipient of the communication (meaning metadata is fair game, and this can be incredibly valuable; just ask the National Security Agency)
  • Account log-in credentials such as a user name or email address, in combination with a password or security question and answer that would permit access to an online account.
  • Covered data revealing an individual’s racial or ethnic origin, or religion, in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (Of course, this sort of qualifying language always makes me wonder: according to whose definition of “reasonable expectation”?)
  • Covered data revealing the sexual orientation or sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (See the previous clause)
  • Covered data about the online activities of an individual that addresses or reveals a category of covered data described in another subparagraph of this paragraph. (I suppose this is intended as a backstop against covered entities trying to backdoor their way into using sensitive covered data by claiming it is merely covered data from online activities.)
  • Covered data that is calendar information, address book information, phone or text logs, photos, or videos maintained for private use on an individual’s device.
  • Any covered data collected or processed by a covered entity for the purpose of identifying covered data described in another paragraph of this paragraph. (again, this seems aimed at plugging a possible loophole in that ordinary covered data can probably be processed or combined with other covered data to arrive at some categories of “sensitive covered data.”)
  • Any other category of covered data designated by the Commission pursuant to a rulemaking under section 553 of title 5, United States Code (meaning the FTC can use normal rulemaking authority, rather than the shackles of the Magnuson-Moss rulemaking procedures, to expand this definition as needed).

So, we have a subset of covered data that would be subject to consent requirements, including notice with a “clear description of the processing purpose for which the sensitive covered data will be processed” that “clearly identif[ies] any processing purpose that is necessary to fulfill a request made by the individual,” that “include[s] a prominent heading that would enable a reasonable individual to easily identify the processing purpose for which consent is sought,” and that “clearly explain[s] the individual’s right to provide or withhold consent.”

Finally, the FTC may, but is not required to, “establish requirements for covered entities regarding clear and conspicuous procedures for allowing individuals to provide or withdraw affirmative express consent for the collection of sensitive covered data.” If the agency chooses to do so, it may use the normal notice and comment procedures available to virtually every other U.S. agency.

Covered entities must minimize collection, processing, and retention of covered data to “what is reasonably necessary, proportionate, and limited” except if permitted elsewhere in the SAFE DATA Act or another federal statute. Interestingly, the FTC would not be tasked with conducting a rulemaking but would instead need to issue guidelines with best practices on how covered entities would undertake such minimization.

Service providers must follow the direction of the covered entity with whom they are working and delete or deidentify data after they have finished working on it. Third parties may process covered data only for purposes consistent with the reasonable expectations of the individual to whom the data belong; they do not, however, need to obtain consent to process sensitive covered data or covered data. Covered entities must perform due diligence to ensure that service providers and third parties will comply with the requirements particular to these two classes of entities, but there is no obligation beyond due diligence and no suggestion of liability for the misdeeds and violations of service providers and third parties.

Large data holders would need to conduct periodic privacy impact analyses with an eye toward helping these entities improve their privacy policies. This class comprises covered entities that have processed or transferred the covered data of 8 million or more people in a given year or the sensitive covered data of 300,000 people.

The SAFE DATA Act would generally allow covered entities to collect, process, and transfer the covered data of people without their consent so long as these activities are reasonably necessary, proportionate and limited to the following purposes:

  • To initiate or complete a transaction or to fulfill an order or provide a service specifically requested by an individual, including associated routine administrative activities such as billing, shipping, financial reporting, and accounting.
  • To perform internal system maintenance, diagnostics, product or service management, inventory management, and network management.
  • To prevent, detect, or respond to a security incident or trespassing, provide a secure environment, or maintain the safety and security of a product, service, or individual.
  • To protect against malicious, deceptive, fraudulent, or illegal activity.
  • To comply with a legal obligation or the establishment, exercise, analysis, or defense of legal claims or rights, or as required or specifically authorized by law.
  • To comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by an Executive agency.
  • To cooperate with an Executive agency or a law enforcement official acting under the authority of an Executive or State agency concerning conduct or activity that the Executive agency or law enforcement official reasonably and in good faith believes may violate Federal, State, or local law, or pose a threat to public safety or national security.
  • To address risks to the safety of an individual or group of individuals, or to ensure customer safety, including by authenticating individuals in order to provide access to large venues open to the public.
  • To effectuate a product recall pursuant to Federal or State law.

People would not be able to opt out of collection, processing, and transferring covered data. As mentioned earlier, U.S. residents would receive a limited right to opt out, and it is in Section 108 that one learns the things a person cannot opt out of. I suppose it should go without saying that covered entities will interpret these terms as broadly as they can in order to forestall people from opting out. The performance of “internal system maintenance, diagnostics, product or service management, inventory management, and network management” seems like a potentially elastic definition that could be asserted to give cover to some covered entities.

Speaking of exceptions, small businesses would not need to heed the rights of individuals regarding their covered data, would not need to minimize their collection, processing, and transferring of covered data, and would not need to have data privacy and security officers. These are defined as entities with gross annual revenues below $50 million per year that have processed the covered data of fewer than 1 million people, have fewer than 500 employees, and earn less than 50% of their revenue from transferring covered data. On its face, this seems like a very generous definition of what shall be a small business.
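To make the exemption's thresholds concrete, here is a minimal sketch of the four-part test as described above; the function name and parameter names are my own, not the bill's, and under this reading an entity must satisfy all four criteria to qualify:

```python
# Hypothetical sketch of the SAFE DATA Act's small-business exemption test.
# All four criteria must be met; failing any one disqualifies the entity.
def is_exempt_small_business(
    gross_annual_revenue: float,          # dollars per year
    individuals_data_processed: int,      # people whose covered data was processed
    employees: int,
    revenue_share_from_transfers: float,  # fraction of revenue from transferring covered data
) -> bool:
    return (
        gross_annual_revenue < 50_000_000
        and individuals_data_processed < 1_000_000
        and employees < 500
        and revenue_share_from_transfers < 0.5
    )

# A firm with $40M revenue, 900,000 individuals' data, 200 employees, and 10%
# of revenue from data transfers would qualify for the exemption.
print(is_exempt_small_business(40_000_000, 900_000, 200, 0.1))  # True
```

Note how conjunctive the test is: a data broker with tiny revenue but a business model built on selling covered data (over 50% of revenue) would still fall outside the exemption.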

The FTC would not be able to police processing and transferring of covered data that violates discrimination laws. Instead, the agency would need to refer these matters to agencies of jurisdiction. The FTC would be required to use its 6(b) authority to “examin[e] the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws” and then publish a report on its findings and guidance on how covered entities can avoid violating discrimination laws.

Moreover, the National Institute of Standards and Technology (NIST) must “develop and publish a definition of ‘digital content forgery’ and accompanying explanatory materials.” One year afterwards, the FTC must “publish a report regarding the impact of digital content forgeries on individuals and competition.”

Data brokers would need to register with the FTC, which would then publish a registry of data brokers on its website.

There would be additional duties placed on covered entities. For example, these entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of covered data.” However, financial services companies subject to and in compliance with Gramm-Leach-Bliley regulations would be deemed to be in compliance with these data security obligations. The same would be true of entities subject to and in compliance with the “Health Insurance Portability and Accountability Act” and “Health Information Technology for Economic and Clinical Health Act.” Additionally, the FTC may “issue regulations to identify processes for receiving and assessing information regarding vulnerabilities to covered data that are reported to the covered entity.”

The SAFE DATA Act has language new to federal privacy bills on “opaque algorithms.” Specifically, covered internet platforms would not be able to use opaque algorithms unless notice is provided to users and an input-transparent algorithm version is available to users. The term ‘‘covered internet platform’’ is broad and encompasses “any public-facing website, internet application, or mobile application, including a social network site, video sharing service, search engine, or content aggregation service.” An “opaque algorithm” is “an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.”
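The distinction the bill draws can be illustrated with a toy sketch; this example is my own, not drawn from the bill's text, and the data and names are hypothetical. An "opaque" feed ranks on user-specific data the user never expressly supplied for that purpose (here, inferred interest scores from profiling), while an input-transparent alternative ranks only on data the user expressly provided (here, accounts they chose to follow):

```python
from datetime import datetime

# Hypothetical posts on a covered internet platform.
posts = [
    {"id": 1, "author": "alice", "topic": "sports", "time": datetime(2020, 10, 1)},
    {"id": 2, "author": "bob", "topic": "politics", "time": datetime(2020, 10, 2)},
    {"id": 3, "author": "carol", "topic": "sports", "time": datetime(2020, 10, 3)},
]

# "Opaque" ranking: uses interest scores the platform inferred from tracking,
# i.e., user-specific data the user did not expressly provide for ranking.
inferred_interest = {"sports": 0.9, "politics": 0.2}

def opaque_feed(posts):
    return sorted(posts, key=lambda p: inferred_interest[p["topic"]], reverse=True)

# Input-transparent alternative: reverse-chronological feed of accounts the
# user expressly chose to follow -- no inferred data involved.
followed = {"alice", "carol"}

def transparent_feed(posts):
    return sorted(
        (p for p in posts if p["author"] in followed),
        key=lambda p: p["time"],
        reverse=True,
    )
```

Under the bill as summarized, a platform could still serve the opaque version, but only with notice and with something like the transparent version available to users.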

The bill makes it an unfair and deceptive practice for “large online operator[s]” “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.”

A covered entity must have

  • 1 or more qualified employees or contractors as data privacy officers; and
  • 1 or more qualified employees or contractors…as data security officers.

Moreover, “[a] covered entity shall maintain internal controls and reporting structures to ensure that appropriate senior management officials of the covered entity are involved in assessing risks and making decisions that implicate compliance with this Act.”

There are also provisions protecting whistleblowers inside covered entities that “voluntarily provide[] [“original information”] to the [FTC]…relating to non-compliance with, or any violation or alleged violation of, this Act or any regulation promulgated under this Act.”

Like virtually all the other bills, the FTC would be able to levy civil fines of more than $42,000 per violation, and state attorneys general would also be able to enforce the new privacy regime. However, the FTC would be able to intervene and take over the action if it chose, and if two or more state attorneys general are bringing cases regarding the same violations, then the cases would be consolidated and heard in the federal court in the District of Columbia. The FTC would also get jurisdiction over common carriers and non-profits for purposes of enforcing the SAFE DATA Act.

And then there is new language in the SAFE DATA Act that seems aimed at addressing a pair of cases before the Supreme Court on the extent of the FTC’s power to seek and obtain certain monetary damages and equitable relief. The FTC has appealed an adverse ruling from the U.S. Court of Appeals for the Seventh Circuit while the other case is coming from the U.S. Court of Appeals for the Ninth Circuit.

Like the forerunner bill released last November, the FTC would be empowered to “approve voluntary consensus standards or certification programs that covered entities may use to comply with 1 or more provisions in this Act.” These provisions came from an Obama Administration privacy bill allowing for the development and usage of voluntary consensus-based standards for covered entities to comply with in lieu of the provisions of that bill.

The SAFE DATA Act would not impinge on existing federal privacy laws but would preempt all privacy laws at the state level. Ironically, the bill would not preempt state data breach notification laws. One would think that if uniformity across the U.S. were a driving motivation, doing so would be desirable.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading and Other Developments (11 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The United States District Court for the District of Maine denied a motion by a number of telecommunications trade associations to enjoin enforcement of a new Maine law instituting privacy practices for internet service providers (ISP) in the state that limited information collection and processing. The plaintiffs claimed the 2017 repeal of the Federal Communications Commission’s (FCC) 2016 ISP Privacy Order preempted states from implementing their own privacy rules for ISPs. In its decision, the court denied the plaintiffs’ motion and will proceed to decide the merits of the case.
  • The European Data Protection Board (EDPB) has debuted a “One-Stop-Shop” register “containing decisions taken by national supervisory authorities following the One-Stop-Shop cooperation procedure (Art. 60 GDPR).” The EDPB explained “[u]nder the GDPR, Supervisory Authorities have a duty to cooperate on cases with a cross-border component to ensure a consistent application of the regulation – the so-called one-stop-shop (OSS) mechanism…[and] [u]nder the OSS, the Lead Supervisory Authority (LSA) is in charge of preparing the draft decisions and works together with the concerned SAs to reach consensus.” Hence this new repository will contain the decisions on which EU data protection authorities have cooperated in addressing alleged GDPR violations that reach across the borders of EU nations.
  • The chair of the House Energy and Commerce Committee and three subcommittee chairs wrote Facebook, Google, and Twitter asking the companies “provide the Committee with monthly reports similar in scope to what you are providing the European Commission regarding your COVID-19 disinformation efforts as they relate to United States users of your platform.” They are also asking that the companies brief them and staff on 22 July on these efforts. Given the Committee’s focus on disinformation, it is quite possible these monthly reports and the briefing could be the basis of more hearings and/or legislation. Chair Frank Pallone, Jr. (D-NJ), Oversight and Investigations Subcommittee Chair Diana DeGette (D-CO), Communications and Technology Subcommittee Chair Mike Doyle (D-PA) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) signed the letters.
  • Reports indicate the Federal Trade Commission (FTC) and Department of Justice (DOJ) are reviewing the February 2019 $5.7 million settlement between the FTC and TikTok for violating the Children’s Online Privacy Protection Act (COPPA). In May 2020, a number of public advocacy groups filed a complaint with the FTC, asking whether the agency has “complied with the consent decree.” If TikTok has violated the order, it could face huge fines as the FTC and DOJ could seek a range of financial penalties. This seems to be another front in the escalating conflict between the United States and the People’s Republic of China.
  • Tech Inquiry, an organization that “seek[s] to combat abuses in the tech industry through coupling concerned tech workers with relevant members of civil society,” released “an in-depth analysis of all public US federal (sub)contracting data over the last four and a half years to estimate the rankings of tech companies, both in and out of Silicon Valley, as contractors with the military, law enforcement, and diplomatic arms of the United States.” Tech Inquiry claimed “[o]ur analysis shows a diversity of contracting postures (see Tables 2 and 3), not a systemic divide from Washington. Within a substantial list of name-brand tech companies, only Facebook, Apple, and Twitter look to be staying out of major military and law enforcement contracts.”
  • The United States Secret Service announced the formation of a new Cyber Fraud Task Force (CFTF) which merges “its Electronic Crimes Task Forces (ECTFs) and Financial Crimes Task Forces (FCTFs) into a single unified network.” The rationale given for the merger is “the line between cyber and financial crimes has steadily blurred, to the point today where the two – cyber and financial crimes – cannot be effectively disentangled.”
  • The United States Election Assistance Commission (EAC) held a virtual public hearing, “Lessons Learned from the 2020 Primary Elections” “to discuss the administration of primary elections during the coronavirus pandemic.”
  • The National Council of Statewide Interoperability Coordinators (NCSWIC), a program administered by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), released its “NCSWIC Strategic Plan and Implementation Guide,” “a stakeholder-driven, multi-jurisdictional, and multi-disciplinary plan to enhance interoperable and emergency communications.” NCSWIC contended “[t]he plan is a critical mid-range (three-year) tool to help NCSWIC and its partners prioritize resources, strengthen governance, identify future investments, and address interoperability gaps.”
  • Access Now is pressing “video conferencing platforms” other than Zoom to issue “regular transparency reports… clarifying exactly how they protect personal user data and enforce policies related to freedom of expression.”

Further Reading

  • India bans 59 Chinese apps, including TikTok and WeChat, after deadly border clash” – South China Morning Post. As a seeming extension of the military skirmish between India and the People’s Republic of China (PRC), the Indian government has banned a number of PRC apps, raising the question of whether there will be further escalation between the world’s two most populous nations. India is TikTok’s biggest market, with more than 120 million users in the South Asian country, and a range of other apps and platforms also have millions of users there. Most of the smartphones used in India are made by PRC entities. Moreover, if New Delhi joins Washington’s war on Huawei, ZTE, and other PRC companies, the cumulative effect could significantly affect the PRC’s global technological ambitions.
  • Huawei data flows under fire in German court case” – POLITICO. A former Huawei employee in Germany has sued the company, alleging violations of the General Data Protection Regulation (GDPR) through the company’s use of standard contractual clauses. This person requested the data the company had collected from him and the reasons for doing so. Huawei claimed it had deleted the data. A German court’s decision that Huawei had violated the GDPR is being appealed. However, the case raises some bigger issues, including growing unease within the European Union that People’s Republic of China firms may be illegally transferring and processing EU citizens’ data, and a case before Europe’s highest court in which the legality of standard contractual clauses may be determined as early as this month.
  • Deutsche Telekom under pressure after reports on Huawei reliance” – Politico. A German newspaper reported on confidential documents showing that Deutsche Telekom deepened its relationship with Huawei even as the United States government was pressuring its allies and other nations to stop using the company’s equipment and services. The German telecommunications company denied the claims, and a number of German officials expressed surprise and dismay, opining that the government of Chancellor Angela Merkel should act more swiftly to implement legislation to secure Germany’s networks.
  • Inside the Plot to Kill the Open Technology Fund” – Vice. According to critics, the Trump Administration’s remaking of the United States (US) Agency for Global Media (USAGM) is threatening the mission and effectiveness of the Open Technology Fund (OTF), a US government non-profit designed to help dissidents and endangered populations throughout the world. The OTF has funded a number of open technology projects, including the Signal messaging app, but the new USAGM head, Michael Pack, is pressing for closed source technology.
  • How Police Secretly Took Over a Global Phone Network for Organized Crime” – Vice. European law enforcement agencies penetrated and compromised an encrypted messaging service in Europe, leading to a number of arrests and seizures of drugs. Encrochat had billed itself as completely secure, but hackers with the French government broke into the system and laid bare the details of numerous crimes. And, this is only the latest encrypted app that is marketed to criminals, meaning others will soon step into the void created when Encrochat shut down.
  • Virus-Tracing Apps Are Rife With Problems. Governments Are Rushing to Fix Them.” – The New York Times. In numerous nations around the world, the rush to design and distribute contact tracing apps to fight COVID-19 has resulted in a host of problems predicted by information technology professionals and privacy, civil liberties and human rights advocates. Some apps collect too much information, many are not secure, and some do not seem to perform their intended tasks. Moreover, without mass adoption, the utility of an app is questionable at best. Some countries have sought to improve and perfect their apps in response to criticism, but others are continuing to use and even mandate their citizens and residents use them.
  • Hong Kong Security Law Sets Stage for Global Internet Fight” – The New York Times. After the People’s Republic of China (PRC) passed a new law that strips many of the protections Hong Kong enjoyed, technology companies are caught in a bind, for now Hong Kong may well start demanding they hand over data on people living in Hong Kong or employees could face jail time. Moreover, the data demands made of companies like Google or Facebook could pertain to people anywhere in the world. Companies that comply with Beijing’s wishes would likely face turbulence in Washington and vice versa. TikTok said it would withdraw from Hong Kong altogether.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gino Crescoli from Pixabay

Further Reading and Other Developments (6 June)

Other Developments

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

  • A number of tech trade groups are asking the House Appropriations Committee’s Commerce-Justice-Science Subcommittee “to direct the National Institute of Standards and Technology (NIST) to create guidelines that help companies navigate the technical and ethical hurdles of developing artificial intelligence.” They argued:
    • A NIST voluntary framework-based consensus set of best practices would be pro-innovation, support U.S. leadership, be consistent with NIST’s ongoing engagement on AI industry consensus standards development, and align with U.S. support for the OECD AI principles as well as the draft Memorandum to Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications.”
  • The Department of Defense (DOD) “named seven U.S. military installations as the latest sites where it will conduct fifth-generation (5G) communications technology experimentation and testing. They are Naval Base Norfolk, Virginia; Joint Base Pearl Harbor-Hickam, Hawaii; Joint Base San Antonio, Texas; the National Training Center (NTC) at Fort Irwin, California; Fort Hood, Texas; Camp Pendleton, California; and Tinker Air Force Base, Oklahoma.”  The DOD explained “[t]his second round, referred to as Tranche 2, brings the total number of installations selected to host 5G testing to 12…[and] builds on DOD’s previously-announced 5G communications technology prototyping and experimentation and is part of a 5G development roadmap guided by the Department of Defense 5G Strategy.”
  • The Federal Trade Commission announced a $150,000 settlement with “HyperBeard, Inc. [which] violated the Children’s Online Privacy Protection Act Rule (COPPA Rule) by allowing third-party ad networks to collect personal information in the form of persistent identifiers to track users of the company’s child-directed apps, without notifying parents or obtaining verifiable parental consent.”
  • The National Institute of Standards and Technology (NIST) released Special Publication 800-133 Rev. 2, Recommendation for Cryptographic Key Generation that “discusses the generation of the keys to be used with the approved cryptographic algorithms…[which] are either 1) generated using mathematical processing on the output of approved Random Bit Generators (RBGs) and possibly other parameters or 2) generated based on keys that are generated in this fashion.”
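    The two generation options NIST describes can be illustrated with a short sketch. This example is not drawn from the publication itself: Python’s `secrets` module stands in for an approved Random Bit Generator (SP 800-133 expects an RBG approved under SP 800-90A), and HMAC-SHA-256 stands in for an approved key-derivation function.

    ```python
    import hashlib
    import hmac
    import secrets

    def generate_key_from_rbg(bits: int = 256) -> bytes:
        """Option 1: the key is taken directly from the output of a
        random bit generator (here, the CSPRNG behind `secrets`)."""
        if bits % 8:
            raise ValueError("key length must be a whole number of bytes")
        return secrets.token_bytes(bits // 8)

    def derive_subkey(master_key: bytes, label: bytes) -> bytes:
        """Option 2: a new key is derived from an existing RBG-generated
        key; HMAC-SHA-256 serves as a simple derivation function."""
        return hmac.new(master_key, label, hashlib.sha256).digest()

    master = generate_key_from_rbg(256)              # 32-byte master key
    enc_key = derive_subkey(master, b"encryption")   # distinct per-purpose keys
    mac_key = derive_subkey(master, b"authentication")
    ```

    Deriving distinct, labeled subkeys from one RBG-generated master key keeps each key tied to a single purpose, which is the pattern the publication’s second option describes.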
  • United States Trade Representative (USTR) announced “investigations into digital services taxes that have been adopted or are being considered by a number of our trading partners.” These investigations are “with respect to Digital Services Taxes (DSTs) adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, India, Indonesia, Italy, Spain, Turkey, and the United Kingdom.” The USTR is accepting comments until 15 July.
  • NATO’s North Atlantic Council released a statement “concerning malicious cyber activities” that have targeted medical facilities stating “Allies are committed to protecting their critical infrastructure, building resilience and bolstering cyber defences, including through full implementation of NATO’s Cyber Defence Pledge.” NATO further pledged “to employ the full range of capabilities, including cyber, to deter, defend against and counter the full spectrum of cyber threats.”
  • The Public Interest Declassification Board (PIDB) released “A Vision for the Digital Age: Modernization of the U.S. National Security Classification and Declassification System” that “provides recommendations that can serve as a blueprint for modernizing the classification and declassification system…[for] there is a critical need to modernize this system to move from the analog to the digital age by deploying advanced technology and by upgrading outdated paper-based policies and practices.”
  • In a Department of State press release, a Declaration on COVID-19, the G7 Science and Technology Ministers stated their intentions “to work collaboratively, with other relevant Ministers to:
    • Enhance cooperation on shared COVID-19 research priority areas, such as basic and applied research, public health, and clinical studies. Build on existing mechanisms to further priorities, including identifying COVID-19 cases and understanding virus spread while protecting privacy and personal data; developing rapid and accurate diagnostics to speed new testing technologies; discovering, manufacturing, and deploying safe and effective therapies and vaccines; and implementing innovative modeling, adequate and inclusive health system management, and predictive analytics to assist with preventing future pandemics.
    • Make government-sponsored COVID-19 epidemiological and related research results, data, and information accessible to the public in machine-readable formats, to the greatest extent possible, in accordance with relevant laws and regulations, including privacy and intellectual property laws.
    • Strengthen the use of high-performance computing for COVID-19 response. Make national high-performance computing resources available, as appropriate, to domestic research communities for COVID-19 and pandemic research, while safeguarding intellectual property.
    • Launch the Global Partnership on AI, envisioned under the 2018 and 2019 G7 Presidencies of Canada and France, to enhance multi-stakeholder cooperation in the advancement of AI that reflects our shared democratic values and addresses shared global challenges, with an initial focus that includes responding to and recovering from COVID-19. Commit to the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.
    • Exchange best practices to advance broadband connectivity; minimize workforce disruptions, support distance learning and working; enable access to smart health systems, virtual care, and telehealth services; promote job upskilling and reskilling programs to prepare the workforce of the future; and support global social and economic recovery, in an inclusive manner while promoting data protection, privacy, and security.
  • The Digital, Culture, Media and Sport Committee’s Online Harms and Disinformation Subcommittee held a virtual meeting, which “is the second time that representatives of the social media companies have been called in by the DCMS Sub-committee in its ongoing inquiry into online harms and disinformation following criticism by Chair Julian Knight about a lack of clarity of evidence and further failures to provide adequate answers to follow-up correspondence.” Before the meeting, the Subcommittee sent a letter to Twitter, Facebook, and Google and received responses. The Subcommittee heard testimony from:
    • Facebook Head of Product Policy and Counterterrorism Monika Bickert
    • YouTube Vice-President of Government Affairs and Public Policy Leslie Miller
    • Google Global Director of Information Policy Derek Slater
    • Twitter Director of Public Policy Strategy Nick Pickles
  • Senators Ed Markey (D-MA), Ron Wyden (D-OR) and Richard Blumenthal (D-CT) sent a letter to AT&T CEO Randall Stephenson “regarding your company’s policy of not counting use of HBO Max, a streaming service that you own, against your customers’ data caps.” They noted “[a]lthough your company has repeatedly stated publicly that it supports legally binding net neutrality rules, this policy appears to run contrary to the essential principle that in a free and open internet, service providers may not favor content in which they have a financial interest over competitors’ content.”
  • The Brookings Institution released what it considers a path forward on privacy legislation and held a webinar on the report with Federal Trade Commission (FTC) Commissioner Christine Wilson and former FTC Commissioner and now Microsoft Vice President and Deputy General Counsel Julie Brill.

Further Reading

  • Google: Overseas hackers targeting Trump, Biden campaigns” – Politico. In what is the latest in a series of attempted attacks, Google’s Threat Analysis Group announced this week that People’s Republic of China affiliated hackers tried to gain access to the campaign of former Vice President Joe Biden and Iranian hackers tried the same with President Donald Trump’s reelection campaign. The group referred the matter to the federal government but said the attacks were not successful. An official from the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) remarked “[i]t’s not surprising that a number of state actors are targeting our elections…[and] [w]e’ve been warning about this for years.” It is likely the usual suspects will continue to try to hack into both presidential campaigns.
  • Huawei builds up 2-year reserve of ‘most important’ US chips” – Nikkei Asian Review. The Chinese tech giant has been spending billions of dollars stockpiling United States’ (U.S.) chips, particularly server chips from Intel and programmable chips from Xilinx, the type that is hard to find elsewhere. This latter chip maker is seen as particularly crucial to both the U.S. and the People’s Republic of China (PRC) because it partners with the Taiwan Semiconductor Manufacturing Company, the entity persuaded by the Trump Administration to announce plans for a plant in Arizona. Shortly after the arrest of Huawei CFO Meng Wanzhou in 2018, the company began these efforts and spent almost $24 billion USD last year stockpiling crucial U.S. chips and other components.
  • GBI investigation shows Kemp misrepresented election security” – Atlanta Journal-Constitution. Through freedom of information requests, the newspaper obtained records from the Georgia Bureau of Investigation (GBI) on its investigation at the behest of then Secretary of State Brian Kemp, requested days before the gubernatorial election he narrowly won. At the time, Kemp claimed hackers connected to the Democratic Party were trying to get into the state’s voter database, when it was Department of Homeland Security personnel running a routine scan for vulnerabilities that Kemp’s office had agreed to months earlier. The GBI ultimately determined Kemp’s claims did not merit a prosecution. Moreover, even though Kemp’s staff at the time continues to deny these findings, the site did have vulnerabilities, including one turned up by a software company employee.
  • Trump, Biden both want to repeal tech legal protections — for opposite reasons” – Politico. Former Vice President Joe Biden (D) wants to revisit Section 230 because online platforms are not doing enough to combat misinformation, in his view. Biden laid out his views on this and other technology matters for the editorial board of The New York Times in January, at which point he said Facebook should have to face civil liability for publishing misinformation. Given Republican and Democratic discontent with Section 230 and the social media platforms, there may be a possibility legislation is enacted to limit this shield from litigation.
  • Wearables like Fitbit and Oura can detect coronavirus symptoms, new research shows” – The Washington Post. Perhaps wearable health technology is a better approach to determining when a person has contracted COVID-19 than contact tracing apps. A handful of studies are producing positive results, but these studies have not yet undergone the peer review process. Still, these devices may be able to detect disequilibrium in one’s system as compared to a baseline, suggesting an infection and a need for a test. This article, however, did not explore possible privacy implications of sharing one’s personal health data with private companies.
  • Singapore plans wearable virus-tracing device for all” – Reuters. In what may be a sign that the city-state has given up on its contact tracing app, TraceTogether, Singapore will soon introduce wearable devices, at an estimated cost of less than $10 USD per unit, to better track contacts and fight COVID-19. It is not clear whether everyone will be mandated to wear one or what privacy and data protections will be in place.
  • Exclusive: Zoom plans to roll out strong encryption for paying customers” – Reuters. In the same vein as Zoom allowing paying customers to choose which region their calls are routed through (e.g. paying customers in the United States could choose a different region with lesser surveillance capabilities), Zoom will soon offer stronger security for paying customers. Of course, should Zoom’s popularity during the pandemic solidify into a dominant competitive position, this new policy of offering end-to-end encryption that the company cannot crack would likely rouse the ire of the governments of the Five Eyes nations. These plans breathe further life into the views of those who see a future in which privacy and security are commodities to be bought, and those unable or unwilling to afford them will enjoy neither. Nonetheless, the company may still face a Federal Trade Commission (FTC) investigation into its apparently inaccurate claims that calls were encrypted, which may have violated Section 5 of the FTC Act, along with similar investigations by other nations.
  • Russia and China target U.S. protests on social media” – Politico. Largely eschewing doctored material, the Russian Federation and the People’s Republic of China (PRC) are using social media platforms to further drive dissension and division in the United States (U.S.) during the protests by amplifying the messages and points of view of Americans, according to an analysis by one think tank. For example, some PRC officials have been tweeting out “Black Lives Matter” and videos purporting to show police violence. The goal is to fan the flames and further weaken Washington. Thus far, the American government and the platforms themselves have not had much of a public response. Additionally, this represents a continued trend of the PRC seeking to sow discord in the U.S., whereas before this year its use of social media and disinformation tended to be confined to issues of immediate concern to Beijing.
  • The DEA Has Been Given Permission To Investigate People Protesting George Floyd’s Death” – BuzzFeed News. The Department of Justice (DOJ) used a little-known section of its delegated powers to task the Drug Enforcement Administration (DEA) with conducting “covert surveillance” to help police maintain order during the protests following the killing of George Floyd, among other duties. BuzzFeed News was given the two-page memorandum effectuating this expansion of the DEA’s responsibilities beyond drug crimes, most likely by agency insiders who oppose the memorandum. These efforts could include use of authority granted to the agency to engage in “bulk collection” of some information, a practice with which the DOJ Office of the Inspector General (OIG) found significant issues, including the lack of legal analysis on the scope of the sprawling collection practices.
  • Cops Don’t Need GPS Data to Track Your Phone at Protests” – Gizmodo. Underlying this extensive rundown of the types of data one’s phone leaks that is vacuumed up by a constellation of entities is the fact that more law enforcement agencies are buying or accessing these data because the Fourth Amendment’s protections do not apply to private parties giving the government information.
  • Zuckerberg Defends Approach to Trump’s Facebook Posts” – The New York Times. Unlike Twitter, Facebook opted not to flag President Donald Trump’s posts about the protests arising from George Floyd’s killing last week, messages Twitter found to be glorifying violence. CEO Mark Zuckerberg reportedly deliberated at length with senior leadership before deciding the posts did not violate the platform’s terms of service, a decision roundly criticized by Facebook employees, some of whom staged a virtual walkout on 1 June. In a conference call, Zuckerberg faced numerous questions about why the company does not respond more forcefully to posts that are inflammatory or untrue. His answers that Facebook does not act as an arbiter of truth were not well received among many employees.
  • Google’s European Search Menu Draws Interest of U.S. Antitrust Investigators” – The New York Times. Department of Justice (DOJ) antitrust investigators are reportedly keenly interested in the system Google operates under in the European Union (EU), where Android users are now prompted to select a default search engine instead of defaulting to Google. This system was put in place as a response to the EU’s €4.34 billion fine in 2018 for imposing “illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search.” This may be seen as a way to address competition issues without breaking up Google, as some have called for. However, Google is conducting monthly auctions among the other search engines to be among the three choices given to EU consumers, which allows Google to reap additional revenue.
