“Censorship, Suppression, and the 2020 Election” Hearing

A second committee gets its shot at the social media platform CEOs, and much of the hearing runs like the one at the end of last month.

It was with some reluctance that I watched the Senate Judiciary Committee’s hearing with Facebook’s and Twitter’s CEOs given the other Senate hearing at which they appeared a few weeks ago. This hearing was prompted by the two platforms’ “censorship” of a dubious New York Post article on Hunter Biden’s business practices that seems to have been planted by Trump campaign associates. At first, both Facebook and Twitter restricted posting or sharing the article in different ways but ultimately relented. Their motivations, and whether these actions were appropriate, strike me as legitimate policy questions. However, to criticize social media platforms for doing what is entirely within their rights under the liability shield provided by 47 U.S.C. 230 (Section 230) seems a bit much. Nonetheless, both Mark Zuckerberg and Jack Dorsey faced pointed questions from Republicans and Democrats who profess to want to see change in social media. And yet, it remains unlikely the two parties in Congress can coalesce around broad policy changes. Perhaps targeted legislation has a chance, but it seems far too late in this Congress for that to happen.

Chair Lindsey Graham (R-SC) took an interesting approach and largely eschewed the typical Republican tack of railing against the anti-conservative bias social media platforms allegedly have, despite little in the way of evidence to support these claims. Graham cited a handful of studies showing that social media engagement might be linked to harm to children and teenagers. This was an interesting approach given the hearing was ostensibly about censorship, content moderation, and Section 230. Perhaps Graham is using a modified rationale similar to the one undergirding his bill, the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398) (i.e., children are at risk and are being harmed, hence Section 230 must be changed). Graham did, of course, reference the New York Post article but was equivocal as to its veracity and instead framed Twitter and Facebook’s decisions as essentially overriding the editorial choices of the newspaper. He also discussed a tweet from former United Nations Ambassador Nikki Haley casting doubt on the legality of mail-in voting and its potential for fraud, to which Twitter appended a label. Graham contrasted Haley’s tweet with one from Iran’s Ayatollah that questioned why many European nations outlaw Holocaust denial but allow Mohammed to be insulted; this tweet was never fact-checked or labeled. Graham suggested the Ayatollah was calling for the destruction of Israel.

Graham argued Section 230 must be changed, and he expressed hope that Republicans and Democrats could work together to do so. He wondered if social media platforms were akin to media organizations given their immense influence and, if so, whether they should be regulated accordingly and open to the same liability for publishing defamatory material. Graham called for changes to Section 230 that would create incentives for social media platforms to adopt a more open and transparent system of content moderation, including disclosure of the biases of the fact checkers. He conceded social media platforms have the almost impossible task of telling people what is reliable and what is not. Finally, he framed social media issues as health issues and compared the platforms’ addictive effects and harms to those of cigarettes.

Senator Richard Blumenthal (D-CT) made an opening statement in place of Ranking Member Dianne Feinstein (D-CA), suggesting the possibility that Feinstein did not want to be associated with a hearing Blumenthal called not serious and a political sideshow. In any event, Blumenthal repeated many of his previously articulated positions on social media companies and how they are currently harming the United States (U.S.) in a number of ways. Blumenthal claimed President Donald Trump is using the megaphone of social media in ways that are harming the U.S. and detrimental to democracy. He called social media companies terrifying tools of persuasion with power far exceeding that of the Robber Barons of the last Gilded Age. Blumenthal further claimed social media companies are strip-mining people’s personal data to their great profit while also promoting hate speech and voter suppression. Blumenthal acknowledged the baby steps Twitter and Facebook have taken in trying to address these problems but remarked parenthetically that Google was not required to appear at the hearing, an apparent reward for doing less than the other two companies to combat lies and misinformation.

Blumenthal remarked that “his colleagues” (by which he almost certainly meant Republicans) did not seem interested in foreign interference in U.S. elections or in the calls for the murder of Federal Bureau of Investigation Director Christopher Wray and National Institute of Allergy and Infectious Diseases (NIAID) Director Anthony Fauci. Blumenthal said the purpose of the hearing was to bully Facebook, Twitter, and other platforms. He called for serious hearings into “Big Tech,” specifically on antitrust issues, as the companies have become dominant and are abusing their power. He specifically suggested that Instagram and WhatsApp be spun off from Facebook and that other companies be broken up, too. Blumenthal called for strong privacy legislation to be enacted. He said “meaningful” Section 230 reform is needed, including a possible repeal of most of the liability protection, for the immunity shield is far too broad and the victims of harm deserve their day in court. Blumenthal vowed to keep working with Graham in the next Congress on the EARN IT Act, a sign perhaps that the bill is not going to be enacted before the end of the year. Graham noted, however, that next year, should the Republicans hold the Senate, Senator Chuck Grassley (R-IA), the Senate’s President Pro Tempore, would become chair. Graham expressed his hope Grassley would work on Section 230.

Facebook CEO Mark Zuckerberg again portrayed Facebook as the platform that gives everyone a voice and then pivoted to the reforms implemented to ensure the company was not a vessel for election misinformation and mischief. Zuckerberg touted Facebook’s voter registration efforts (more than 4.5 million), its role in helping people volunteer at polls, and its efforts to disseminate factual information about when, where, and how Americans could vote. He turned to Facebook’s efforts to combat misinformation and voter suppression and the steps it took on election day and thereafter. Zuckerberg touted the lessons Facebook learned from the 2016 election in the form of changed policies and greater awareness of efforts by other nations to spread disinformation, lies, and chaos. Incidentally (or perhaps not so incidentally), Zuckerberg did not discuss the platform’s efforts to take on domestic attempts to undermine U.S. democracy. He did, however, reveal that Facebook is funding a “partnership with a team of independent external academics to conduct objective and empirically grounded research on social media’s impact on democracy.” Beyond remarking that Facebook hopes to learn about its role in this dynamic, he did not pledge any particular action on the basis of this study.

Zuckerberg reiterated Facebook’s positions on Section 230 reform:

I’ve also called for Congress to update Section 230 of the Communications Decency Act to make sure it’s working as intended. Section 230 allows us to provide our products and services to users by doing two things:

  • First, it encourages free expression. Without Section 230, platforms could potentially be held liable for everything people say. Platforms would likely censor more content to avoid legal risk and would be less likely to invest in technologies that enable people to express themselves in new ways.
  • Second, it allows platforms to moderate content. Without Section 230, platforms could face liability for doing even basic moderation, such as removing hate speech and harassment that impacts the safety and security of their communities.

Thanks to Section 230, people have the freedom to use the internet to express themselves, and platforms are able to more effectively address risks. Updating Section 230 is a significant decision, but we support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals, and I look forward to a meaningful dialogue about how we might update the law to deal with the problems we face today.

It’s important that any changes to the law don’t prevent new companies or businesses from being built, because innovation in the internet sector brings real benefits to billions of people around the world. We stand ready to work with Congress on what regulation could look like, whether that means Section 230 reform or providing guidance to platforms on other issues such as harmful content, privacy, elections, and data portability. By updating the rules for the internet, we can preserve what’s best about it—the freedom for people to express themselves and for entrepreneurs to build new things—while also protecting society from broader harms.

Twitter CEO Jack Dorsey explained Twitter’s content moderation policies, especially those related to the election. He stressed that Congress should build upon the foundation laid in Section 230, either through additional legislation or by helping to create private codes of conduct that social media companies would help craft and then abide by. He asserted that removing Section 230 protection or radically reducing the liability shield would not address the problem of problematic speech on social media and would indeed cause most platforms to retrench and more severely restrict speech, an outcome at odds with what Members desire. Dorsey then trotted out the idea that carving out Section 230, as many of the bills introduced in this Congress propose to do, would create a complicated competitive landscape that would favor large incumbents with the resources to comply while all but shutting out smaller competitors. Regardless of whether this is likely to happen, it is shrewd testimony given the antitrust sentiment on Capitol Hill and in the executive branch towards large technology firms.

In terms of any concrete recommendations for Congress, Dorsey noted:

Three weeks ago, I told the Senate Committee on Commerce, Science and Transportation that I believe the best way to address our mutually-held concerns is to require the publication of moderation processes and practices, a straightforward process to appeal decisions, and best efforts around algorithmic choice, while protecting the privacy of the people who use our service. These are achievable in short order.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Prateek Katyal from Pexels

Modified EARN IT Act Marked Up; Before Markup, Graham, Cotton, and Blackburn Introduce Encryption Bill

The Senate Judiciary Committee unanimously reports out a revised bill to remove online child sexual abuse material from Section 230 protection. The bill no longer offers companies a safe harbor based on adopting best practices for finding and removing this material. However, before the markup, the chair of the committee introduced a bill requiring technology companies to decrypt or assist in decrypting data subject to a court order accompanying a search warrant.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

The Senate Judiciary Committee met, amended, and reported out the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398), a bill that would change 47 USC 230 (aka Section 230) by narrowing the liability shield and potentially exposing online platforms to criminal and civil actions for having child sexual abuse materials on their platforms. The bill as introduced in March was changed significantly this week when a manager’s amendment was released and then further changed at the markup. The Committee reported out the bill unanimously, sending it to the full Senate.

Last week, in advance of the first hearing to mark up the EARN IT Act of 2020, key Republican stakeholders released a bill that would require device manufacturers, app developers, and online platforms to decrypt data if a federal court issues a warrant based on probable cause. Critics of the EARN IT Act of 2020 claimed the bill would force big technology companies to choose between weakening encryption or losing their liability protection under Section 230. They likely see this most recent bill as another shot across the bow of technology companies, many of which continue to support and use end-to-end encryption even though the United States government and close allies are pressuring them on the issue. However, unlike the EARN IT Act of 2020, this latest bill does not have any Democratic cosponsors.

Senate Judiciary Committee Chair Lindsey Graham (R-SC) and Senators Tom Cotton (R-AR) and Marsha Blackburn (R-TN) introduced the “Lawful Access to Encrypted Data Act” (S.4051) that would require the manufacturers of devices such as smartphones, app makers, and platforms to decrypt a user’s data if a federal court issues a warrant to search a device, app, or operating system.

The assistance covered entities must provide includes:

  • isolating the information authorized to be searched;
  • decrypting or decoding information on the electronic device or remotely stored electronic information that is authorized to be searched, or otherwise providing such information in an intelligible format, unless the independent actions of an unaffiliated entity make it technically impossible to do so; and
  • providing technical support as necessary to ensure effective execution of the warrant for the electronic devices particularly described by the warrant.

The Department of Justice (DOJ) would be able to issue “assistance capability directives” requiring the recipient to prepare or maintain the ability to aid a law enforcement agency that has obtained a warrant and needs technical assistance to access data. Recipients of such orders can file a petition in federal court in Washington, DC to modify or set aside the order on only three grounds: it is illegal, it does not meet the requirements of the new federal regulatory structure, or “it is technically impossible for the person to make any change to the way the hardware, software, or other property of the person behaves in order to comply with the directive.” If a court rules against the recipient of such an order, it must comply; a recipient that does not comply may be found in contempt of court, allowing for a range of punishments until the contempt is cured. The bill also amends the “Foreign Intelligence Surveillance Act” (FISA) to require the same decryption and assistance in FISA activities, which mostly involve surveillance of people outside the United States.

The bill would focus on device manufacturers that sell more than 1 million devices and on platforms and apps with more than 1 million users, which obviously encompasses companies like Apple, Facebook, Google, and others.

The bill also tasks the DOJ with conducting a prize competition “to incentivize and encourage research and innovation into solutions providing law enforcement access to encrypted data pursuant to legal process.”

According to Graham, Cotton, and Blackburn’s press release, the “[h]ighlights” of the “Lawful Access to Encrypted Data Act” are:

  • Enables law enforcement to obtain lawful access to encrypted data.
    • Once a warrant is obtained, the bill would require device manufacturers and service providers to assist law enforcement with accessing encrypted data if assistance would aid in the execution of the warrant.
    • In addition, it allows the Attorney General to issue directives to service providers and device manufacturers to report on their ability to comply with court orders, including timelines for implementation.
      • The Attorney General is prohibited from issuing a directive with specific technical steps for implementing the required capabilities.
      • Anyone issued a directive may appeal in federal court to change or set aside the directive.
      • The Government would be responsible for compensating the recipient of a directive for reasonable costs incurred in complying with the directive.
  • Incentivizes technical innovation.
    • Directs the Attorney General to create a prize competition to award participants who create a lawful access solution in an encrypted environment, while maximizing privacy and security.
  • Promotes technical and lawful access training and provides real-time assistance.
    • Funds a grant program within the Justice Department’s National Domestic Communications Assistance Center (NDCAC) to increase digital evidence training for law enforcement and creates a call center for advice and assistance during investigations.

The EARN IT Act of 2020 was introduced in March by Graham, Senate Judiciary Committee Ranking Member Dianne Feinstein (D-CA), and Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO). If enacted, the EARN IT Act would be the second piece of legislation in two years to change Section 230 of the Communications Decency Act, following the enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164).

In advance of this week’s markup, Graham and Blumenthal released a manager’s amendment to the EARN IT Act. The bill would still establish a National Commission on Online Child Sexual Exploitation Prevention (Commission) that would design and recommend voluntary “best practices” applicable to technology companies such as Google, Facebook, and many others to address “the online sexual exploitation of children.” However, the language creating a safe harbor, under which technology companies would be encouraged to use these best practices in exchange for continuing to enjoy liability protection, has been stricken.

Moreover, instead of creating a process under which the DOJ, Department of Homeland Security (DHS), and Federal Trade Commission (FTC) would accept or reject these standards, as in the original bill, the DOJ would merely have to publish them in the Federal Register. Likewise, the language establishing a fast-track process for Congress to codify these best practices has been stricken, as have the provisions requiring certain technology companies to certify compliance with the best practices.

The revised bill also lacks the safe harbor against lawsuits based on having “child sexual abuse material” on a platform for following the Commission’s best practices. Instead, the manager’s amendment strikes liability protection under 47 USC 230 for these materials except where a platform is acting as a Good Samaritan in removing them. Consequently, should a Facebook or Google fail to find and take down these materials in an expeditious fashion, it would face civil and criminal liability under federal and state law.

However, the Committee adopted an amendment offered by Senator Patrick Leahy (D-VT) that would change 47 USC 230 by making clear that the use of end-to-end encryption does not, by itself, expose providers to liability under laws on child sexual exploitation and abuse material. Specifically, no liability would attach because the provider:

  • utilizes full end-to-end encrypted messaging services, device encryption, or other encryption services;
  • does not possess the information necessary to decrypt a communication; or
  • fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.


Image by Gerd Altmann from Pixabay

Further Reading and Other Developments (6 June)

Other Developments


  • A number of tech trade groups are asking the House Appropriations Committee’s Commerce-Justice-Science Subcommittee “to direct the National Institute of Standards and Technology (NIST) to create guidelines that help companies navigate the technical and ethical hurdles of developing artificial intelligence.” They argued:
    • A NIST voluntary framework-based consensus set of best practices would be pro-innovation, support U.S. leadership, be consistent with NIST’s ongoing engagement on AI industry consensus standards development, and align with U.S. support for the OECD AI principles as well as the draft Memorandum to Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications.”
  • The Department of Defense (DOD) “named seven U.S. military installations as the latest sites where it will conduct fifth-generation (5G) communications technology experimentation and testing. They are Naval Base Norfolk, Virginia; Joint Base Pearl Harbor-Hickam, Hawaii; Joint Base San Antonio, Texas; the National Training Center (NTC) at Fort Irwin, California; Fort Hood, Texas; Camp Pendleton, California; and Tinker Air Force Base, Oklahoma.”  The DOD explained “[t]his second round, referred to as Tranche 2, brings the total number of installations selected to host 5G testing to 12…[and] builds on DOD’s previously-announced 5G communications technology prototyping and experimentation and is part of a 5G development roadmap guided by the Department of Defense 5G Strategy.”
  • The Federal Trade Commission announced a $150,000 settlement with “HyperBeard, Inc. [which] violated the Children’s Online Privacy Protection Act Rule (COPPA Rule) by allowing third-party ad networks to collect personal information in the form of persistent identifiers to track users of the company’s child-directed apps, without notifying parents or obtaining verifiable parental consent.”
  • The National Institute of Standards and Technology (NIST) released Special Publication 800-133 Rev. 2, Recommendation for Cryptographic Key Generation, which “discusses the generation of the keys to be used with the approved cryptographic algorithms…[which] are either 1) generated using mathematical processing on the output of approved Random Bit Generators (RBGs) and possibly other parameters or 2) generated based on keys that are generated in this fashion.”
  • United States Trade Representative (USTR) announced “investigations into digital services taxes that have been adopted or are being considered by a number of our trading partners.” These investigations are “with respect to Digital Services Taxes (DSTs) adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, India, Indonesia, Italy, Spain, Turkey, and the United Kingdom.” The USTR is accepting comments until 15 July.
  • NATO’s North Atlantic Council released a statement “concerning malicious cyber activities” that have targeted medical facilities stating “Allies are committed to protecting their critical infrastructure, building resilience and bolstering cyber defences, including through full implementation of NATO’s Cyber Defence Pledge.” NATO further pledged “to employ the full range of capabilities, including cyber, to deter, defend against and counter the full spectrum of cyber threats.”
  • The Public Interest Declassification Board (PIDB) released “A Vision for the Digital Age: Modernization of the U.S. National Security Classification and Declassification System” that “provides recommendations that can serve as a blueprint for modernizing the classification and declassification system…[for] there is a critical need to modernize this system to move from the analog to the digital age by deploying advanced technology and by upgrading outdated paper-based policies and practices.”
  • In a Declaration on COVID-19 released via a Department of State press release, the G7 Science and Technology Ministers stated their intention “to work collaboratively, with other relevant Ministers to:
    • Enhance cooperation on shared COVID-19 research priority areas, such as basic and applied research, public health, and clinical studies. Build on existing mechanisms to further priorities, including identifying COVID-19 cases and understanding virus spread while protecting privacy and personal data; developing rapid and accurate diagnostics to speed new testing technologies; discovering, manufacturing, and deploying safe and effective therapies and vaccines; and implementing innovative modeling, adequate and inclusive health system management, and predictive analytics to assist with preventing future pandemics.
    • Make government-sponsored COVID-19 epidemiological and related research results, data, and information accessible to the public in machine-readable formats, to the greatest extent possible, in accordance with relevant laws and regulations, including privacy and intellectual property laws.
    • Strengthen the use of high-performance computing for COVID-19 response. Make national high-performance computing resources available, as appropriate, to domestic research communities for COVID-19 and pandemic research, while safeguarding intellectual property.
    • Launch the Global Partnership on AI, envisioned under the 2018 and 2019 G7 Presidencies of Canada and France, to enhance multi-stakeholder cooperation in the advancement of AI that reflects our shared democratic values and addresses shared global challenges, with an initial focus that includes responding to and recovering from COVID-19. Commit to the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.
    • Exchange best practices to advance broadband connectivity; minimize workforce disruptions, support distance learning and working; enable access to smart health systems, virtual care, and telehealth services; promote job upskilling and reskilling programs to prepare the workforce of the future; and support global social and economic recovery, in an inclusive manner while promoting data protection, privacy, and security.
  • The United Kingdom House of Commons’ Digital, Culture, Media and Sport Committee’s Online Harms and Disinformation Subcommittee held a virtual meeting, which “is the second time that representatives of the social media companies have been called in by the DCMS Sub-committee in its ongoing inquiry into online harms and disinformation following criticism by Chair Julian Knight about a lack of clarity of evidence and further failures to provide adequate answers to follow-up correspondence.” Before the meeting, the Subcommittee sent a letter to Twitter, Facebook, and Google and received responses. The Subcommittee heard testimony from:
    • Facebook Head of Product Policy and Counterterrorism Monika Bickert
    • YouTube Vice-President of Government Affairs and Public Policy Leslie Miller
    • Google Global Director of Information Policy Derek Slater
    • Twitter Director of Public Policy Strategy Nick Pickles
  • Senators Ed Markey (D-MA), Ron Wyden (D-OR) and Richard Blumenthal (D-CT) sent a letter to AT&T CEO Randall Stephenson “regarding your company’s policy of not counting use of HBO Max, a streaming service that you own, against your customers’ data caps.” They noted “[a]lthough your company has repeatedly stated publicly that it supports legally binding net neutrality rules, this policy appears to run contrary to the essential principle that in a free and open internet, service providers may not favor content in which they have a financial interest over competitors’ content.”
  • The Brookings Institution released what it considers a path forward on privacy legislation and held a webinar on the report with Federal Trade Commissioner (FTC) Christine Wilson and former FTC Commissioner and now Microsoft Vice President and Deputy General Counsel Julie Brill.

Further Reading

  • “Google: Overseas hackers targeting Trump, Biden campaigns” – Politico. In what is the latest in a series of attempted attacks, Google’s Threat Analysis Group announced this week that hackers affiliated with the People’s Republic of China tried to gain access to the campaign of former Vice President Joe Biden, and Iranian hackers tried the same with President Donald Trump’s reelection campaign. The group referred the matter to the federal government but said the attacks were not successful. An official from the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) remarked “[i]t’s not surprising that a number of state actors are targeting our elections…[and] [w]e’ve been warning about this for years.” It is likely the usual suspects will continue to try to hack into both presidential campaigns.
  • “Huawei builds up 2-year reserve of ‘most important’ US chips” – Nikkei Asian Review. The Chinese tech giant has been spending billions of dollars stockpiling United States’ (U.S.) chips, particularly server chips from Intel and programmable chips from Xilinx, the type that is hard to find elsewhere. This latter chip maker is seen as particularly crucial to both the U.S. and the People’s Republic of China (PRC) because it partners with the Taiwan Semiconductor Manufacturing Company, the entity persuaded by the Trump Administration to announce plans for a plant in Arizona. Shortly after the arrest of Huawei CFO Meng Wanzhou in 2018, the company began these efforts and spent almost $24 billion USD last year stockpiling crucial U.S. chips and other components.
  • “GBI investigation shows Kemp misrepresented election security” – Atlanta Journal-Constitution. Through freedom of information requests, the newspaper obtained records from the Georgia Bureau of Investigation (GBI) on the investigation it conducted at the behest of then Secretary of State Brian Kemp, requested days before the gubernatorial election he narrowly won. At the time, Kemp claimed hackers connected to the Democratic Party were trying to get into the state’s voter database, when it was Department of Homeland Security personnel running a routine scan for vulnerabilities that Kemp’s office had agreed to months earlier. The GBI ultimately determined Kemp’s claims did not merit a prosecution. Moreover, even though Kemp’s staff at the time continue to deny these findings, the site did have vulnerabilities, including one turned up by a software company employee.
  • “Trump, Biden both want to repeal tech legal protections — for opposite reasons” – Politico. Former Vice President Joe Biden (D) wants to revisit Section 230 because online platforms are not doing enough to combat misinformation, in his view. Biden laid out his views on this and other technology matters for the editorial board of The New York Times in January, at which point he said Facebook should have to face civil liability for publishing misinformation. Given Republican and Democratic discontent with Section 230 and the social media platforms, there may be a possibility legislation is enacted to limit this shield from litigation.
  • “Wearables like Fitbit and Oura can detect coronavirus symptoms, new research shows” – The Washington Post. Perhaps wearable health technology is a better approach to determining when a person has contracted COVID-19 than contact tracing apps. A handful of studies are producing positive results, but these studies have not yet undergone the peer review process. Still, these devices may be able to detect disequilibrium in one’s system as compared to a baseline, suggesting an infection and a need for a test. This article, however, did not explore possible privacy implications of sharing one’s personal health data with private companies.
  • “Singapore plans wearable virus-tracing device for all” – Reuters. For less than an estimated $10 USD per unit, Singapore will soon introduce wearable devices to better track contacts to fight COVID-19. The move may be a sign that the city-state has given up on its contact tracing app, TraceTogether. It is not clear whether everyone will be mandated to wear one and what privacy and data protections will be in place.
  • “Exclusive: Zoom plans to roll out strong encryption for paying customers” – Reuters. In the same vein as Zoom allowing paying customers to choose where their calls are routed (e.g., paying customers in the United States could choose a region with lesser surveillance capabilities), Zoom will soon offer stronger security for paying customers. Of course, should Zoom’s popularity during the pandemic solidify into a dominant competitive position, this new policy of offering end-to-end encryption that the company cannot crack would likely rouse the ire of the governments of the Five Eyes nations. These plans breathe further life into the views of those who see a future in which privacy and security are commodities to be bought, and those unable or unwilling to pay for them will not enjoy either. Nonetheless, the company may still face a Federal Trade Commission (FTC) investigation into its apparently inaccurate claims that calls were encrypted, which may have violated Section 5 of the FTC Act, along with similar investigations by other nations.
  • “Russia and China target U.S. protests on social media” – Politico. Largely eschewing doctored material, the Russian Federation and the People’s Republic of China (PRC) are using social media platforms to drive further dissension and division in the United States (U.S.) during the protests by amplifying the messages and points of view of Americans, according to one think tank’s analysis. For example, some PRC officials have been tweeting out “Black Lives Matter” and claiming that videos purporting to show police violence are, in fact, genuine. The goal is to fan the flames and further weaken Washington. Thus far, the American government and the platforms themselves have not had much of a public response. Additionally, this represents a continued trend of the PRC seeking to sow discord in the U.S., whereas before this year its use of social media and disinformation tended to be confined to issues of immediate concern to Beijing.
  • “The DEA Has Been Given Permission To Investigate People Protesting George Floyd’s Death” – BuzzFeed News. The Department of Justice (DOJ) used a little-known section of the powers delegated to the agency to task the Drug Enforcement Administration (DEA) with conducting “covert surveillance” to help police maintain order during the protests following the killing of George Floyd, among other duties. BuzzFeed News was given the two-page memorandum effectuating this expansion of the DEA’s responsibilities beyond drug crimes, most likely by agency insiders who oppose the memorandum. These efforts could include use of authority granted to the agency to engage in “bulk collection” of some information, a practice the DOJ Office of the Inspector General (OIG) found significant issues with, including the lack of legal analysis on the scope of the sprawling collection practices.
  • “Cops Don’t Need GPS Data to Track Your Phone at Protests” – Gizmodo. Underlying this extensive rundown of the types of data one’s phone leaks, which are vacuumed up by a constellation of entities, is the fact that more law enforcement agencies are buying or accessing these data because the Fourth Amendment’s protections do not apply to private parties giving the government information.
  • “Zuckerberg Defends Approach to Trump’s Facebook Posts” – The New York Times. Unlike Twitter, Facebook opted not to flag President Donald Trump’s posts about the protests arising from George Floyd’s killing last week, even though Twitter found the parallel tweets to be glorifying violence. CEO Mark Zuckerberg reportedly deliberated at length with senior leadership before deciding the posts did not violate the platform’s terms of service, a decision roundly criticized by Facebook employees, some of whom staged a virtual walkout on 1 June. In a conference call, Zuckerberg faced numerous questions about why the company does not respond more forcefully to posts that are inflammatory or untrue. His answer that Facebook does not act as an arbiter of truth was not well received among many employees.
  • “Google’s European Search Menu Draws Interest of U.S. Antitrust Investigators” – The New York Times. Department of Justice (DOJ) antitrust investigators are reportedly keenly interested in the system Google lives under in the European Union (EU), where Android users are now prompted to select a default search engine instead of simply being defaulted to Google’s. This system was put in place in response to the EU’s €4.34 billion fine in 2018 for imposing “illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search.” This may be seen as a way to address competition issues without breaking up Google as some have called for. However, Google is conducting monthly auctions among the other search engines to be one of the three choices given to EU consumers, which allows Google to reap additional revenue.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Dueling COVID-19 Privacy Bills Released

Democratic stakeholders answer a Republican proposal on how to regulate privacy issues raised by COVID-19 contact tracing. The proposals have little chance of enactment and are mostly about positioning.  

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Late last week, a group of Democratic stakeholders on privacy and data security issues released the “Public Health Emergency Privacy Act” (S.3749), a bill that serves as a counterpoint to the “COVID-19 Consumer Data Protection Act” (S.3663), legislation introduced a few weeks ago to propose a solution to the privacy issues raised by contact tracing of COVID-19. However, the Democratic bill contains a number of provisions that many Republicans consider non-starters, such as a private right of action for individuals and no preemption of state laws. It is not likely this bill will advance in the Senate even though it may possibly be moved in the House. S. 3749 was introduced by Senators Richard Blumenthal (D-CT) and Mark Warner (D-VA) and Representatives Anna Eshoo (D-CA), Jan Schakowsky (D-IL), and Suzan DelBene (D-WA).

In a way, the “Public Health Emergency Privacy Act” makes the case that the “Health Insurance Portability and Accountability Act” (HIPAA)/“Health Information Technology for Economic and Clinical Health Act” (HITECH Act) regulations are inadequate to protect the privacy and data of people in the United States because these regulations apply only to healthcare providers, their business associates, and other named entities in the healthcare system. Entities collecting healthcare information, even very sensitive information, are not subject to HIPAA/HITECH regulations and would likely be regulated, to the extent they are at all, by the Federal Trade Commission (FTC) or state agencies and attorneys general.

The “Public Health Emergency Privacy Act” would cover virtually all entities, including government agencies, with the exception of public health authorities (e.g., a state department of health or the Centers for Disease Control and Prevention), healthcare providers, service providers, and people acting in a household capacity. This remarkable definition likely reflects plans announced by governments around the world to eschew Google and Apple’s contact tracing approach and develop their own. Moreover, it would also touch some efforts apart from contact tracing apps. Additionally, this is the first bill introduced recently that proposes to treat public and private entities the same way with respect to how and when they may collect personal data.

The types of data protected under the act are “emergency health data” which are defined as “data linked or reasonably linkable to an individual or device, including data inferred or derived about the individual or device from other collected data provided such data is still linked or reasonably linkable to the individual or device, that concerns the public COVID–19 health emergency.” The bill then lists a number of examples of emergency health data that is sweeping and comprehensive. This term includes “information that reveals the past, present, or future physical or behavioral health or condition of, or provision of healthcare to, an individual” related to testing for COVID-19 and related genetic and biometric information. Geolocation and proximity information would also be covered by this definition. Finally, emergency health data encompasses “any other data collected from a personal device,” which seems to cover all data collection from a person’s phone, the primary means of contact tracing proposed thus far. However, it would appear that data on people collected at the household or higher level may not be subject to this definition and hence much of the bill’s protections.

The authority provided to covered entities is limited. First, collection, use, and disclosure of emergency health data are restricted to good faith public health purposes. And yet, this term is not defined in the act, and the FTC may need to define it during the expedited rulemaking the bill requires. However, it seems a fairly safe assumption the agency and courts would not construe using emergency health data for advertising as a good faith public health purpose. Next, covered entities must allow people to correct inaccurate information, and the entity itself has the duty to take reasonable efforts on its own to correct this information. Covered entities must implement reasonable safeguards to prevent discrimination on the basis of these data, which is a new obligation placed on covered entities in a privacy or data security bill. This provision may have been included to bar government agencies from collecting and using covered data for discriminatory purposes. However, critics of more expansive bills may see this as the camel’s nose under the tent for future privacy and data security bills. Finally, the only government agencies that can legally be provided emergency health data are public health authorities, and then only “for good faith public health purposes and in direct response to exigent circumstances.” Again, the limits of good faith public health purposes remain unclear, and such disclosure can only occur in exigent circumstances, which likely means when a person has contracted or been exposed to COVID-19.

Covered entities and service providers must implement appropriate measures to protect the security and confidentiality of emergency health data, but the third member of the usual triumvirate, availability, is not identified as requiring protection.

The “Public Health Emergency Privacy Act” outright bars a number of uses for emergency health data:

  • Commercial advertising, recommendations for e-commerce, or machine learning for these purposes;
  • Any discriminatory practices related to employment, finance, credit, insurance, housing, or education opportunities; and
  • Discriminating with respect to goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation

It bears note the covered data cannot be collected or disclosed for these purposes either.

The act contains very strong consent language. Covered entities may not collect, use, or disclose emergency health data unless a person has provided express affirmative consent, which requires a knowing, informed choice that cannot be obtained through the use of deceptive practices nor inferred from a person’s inaction. However, there are significant exceptions to this consent requirement that would allow covered entities to collect, use, or disclose these data, including:

  • protecting against malicious, deceptive, fraudulent, or illegal activity;
  • detecting, responding to, or preventing information security incidents or threats; or
  • complying with a legal obligation that compels the covered organization to do so.

But these purposes are valid only when necessary and solely for the fulfillment of the named purpose.

In a related vein, covered entities must allow people to revoke their consent and it must be effective as soon as practicable but no later than 15 days afterward. Moreover, the person’s emergency health data must be destroyed or rendered not linkable within 30 days of revocation of consent.

Covered entities must provide clear and conspicuous notice before or at the point of data collection that explains how and why the emergency health data is collected, used, and disclosed. Moreover, it must also disclose the categories of recipients to whom a covered entity discloses covered data. This notice must also disclose the covered entity’s data retention policies and data security policies and practices but only with respect to emergency health data. The notice must also inform consumers on how they may exercise rights under the act and how to file a complaint with the FTC.

There would also be a public reporting requirement for those covered entities collecting, using, or disclosing the emergency health data of 100,000 or more people. Every three months, these entities would need to report on aggregate figures on the number of people from whom data was collected, used, and disclosed. These reports must also detail the categories of emergency health data collected, used and disclosed and the categories of third parties with whom such data are shared.

The bill requires covered entities to destroy or make not linkable emergency health data 60 days after the Secretary of Health and Human Services declares the end of the COVID-19 public health emergency, or a state does so, or 60 days after collection.

The FTC would be given the daunting task of beginning a rulemaking 7 days after enactment “to ensure a covered organization that has collected, used, or disclosed emergency health data before the date of enactment of this Act is in compliance with this Act, to the degree practicable” that must be completed within 45 days. There is also a provision requiring the Department of Health and Human Services (HHS) to issue guidance so that HIPAA/HITECH Act regulated entities do not need to meet duplicative requirements, and, moreover, the bill exempts these entities when operating under the HIPAA/HITECH Act regulations.

The FTC, state attorneys general, and people would be able to enforce this new act through a variety of means. The FTC would treat violations as if they were violations of a regulation barring a deceptive or unfair practice, meaning the agency could levy fines in the first instance of more than $43,000 per violation. The FTC could seek a range of other relief, including injunctions, restitution, disgorgement, and remediation. Moreover, the FTC could go to federal court without having to consult with the Department of Justice, which is a departure from current law and almost all the privacy bills introduced in this Congress. The FTC’s jurisdiction would be broadened to include common carriers and non-profits for purposes of this bill, too. The FTC would receive normal notice and comment rulemaking authority that is very broad, as the agency would be free to promulgate any regulations it sees necessary to effectuate this act.

State attorneys general would be able to seek all the same relief the FTC can so long as the latter is not already bringing an action. Moreover, any other state officials empowered by state statute to bring similar actions may do so for violations of this act.

People would be allowed to sue in either federal or state court to vindicate violations. The bill states that any violation is to be considered an injury in fact, to forestall any court from finding that a violation does not injure the person, meaning her suit cannot proceed. The act would allow people to recover between $100 and $1,000 for any negligent violation and between $500 and $5,000 for any reckless, willful, or intentional violation, with no cap on total damages. People may also ask for and receive reasonable attorney’s fees and costs and any equitable or declaratory relief a court sees fit to grant. Moreover, the bill would disallow all pre-lawsuit arbitration agreements or waivers of rights. Much of the right to sue granted to people by the “Public Health Emergency Privacy Act” will be opposed by many Republicans and industry stakeholders.
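Because the per-violation amounts are uncapped, aggregate exposure scales linearly with the number of violations, which is what worries industry stakeholders. A quick illustrative calculation (the violation counts are invented for illustration, not drawn from the bill or any case):

```python
# Statutory damages under the bill: $100-$1,000 per negligent
# violation, $500-$5,000 per reckless/willful/intentional one,
# with no cap on total damages.
NEGLIGENT = (100, 1_000)
WILLFUL = (500, 5_000)

def damages_range(negligent_count: int, willful_count: int) -> tuple[int, int]:
    """Return (minimum, maximum) aggregate statutory damages."""
    low = negligent_count * NEGLIGENT[0] + willful_count * WILLFUL[0]
    high = negligent_count * NEGLIGENT[1] + willful_count * WILLFUL[1]
    return low, high

# Hypothetical: 10,000 negligent violations alone expose a covered
# entity to between $1 million and $10 million.
low, high = damages_range(10_000, 0)  # → (1000000, 10000000)
```

Even a modestly sized data set mishandled once per person can thus produce seven-figure exposure, which helps explain the intensity of the fight over the private right of action.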

The enforcement section raises a few questions given that entities covered by the bill include government agencies. Presumably, the FTC cannot enforce this act against government agencies for the very good reason that it does not have jurisdiction over them. However, does the private right of action waive the federal and state governments’ sovereign immunity? This is not clear and may need clarification if the bill is acted upon in either chamber of Congress.

This bill would not preempt state laws, which, if enacted, could subject covered entities to meeting more than one regime for collecting, using, and disclosing emergency health data.

Apart from contact tracing, the “Public Health Emergency Privacy Act” also bars the use of emergency health data to abridge a person’s right to vote and it requires HHS, the United States Commission on Civil Rights, and the FTC to “prepare and submit to Congress reports that examines the civil rights impact of the collection, use, and disclosure of health information in response to the COVID–19 public health emergency.”

Given the continued impasse over privacy legislation, it is little wonder that the bill unveiled a few weeks ago by four key Senate Republicans takes a similar approach that differs in key aspects. Of course, there is no private right of action and it expressly preempts state laws to the contrary.

Generally speaking, the structure of the “COVID–19 Consumer Data Protection Act of 2020” (S.3663) tracks with the bills that have been released thus far by the four sponsors: Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS) (See here for analysis of the “Consumer Data Privacy Act of 2019”) and Senators John Thune (R-SD), Jerry Moran (R-KS) (See here for analysis of the “Consumer Data Privacy and Security Act of 2020” (S.3456)), and Marsha Blackburn (R-TN) (See here for analysis of the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116)). In short, people would be provided with notice about what information the app collects, how it is processed, and with whom and under what circumstances this information will be shared. Then a person would be free to make an informed choice about whether or not she wants to consent and allow the app or technology to operate on her smartphone. The FTC and state attorneys general would enforce the new protections.

The scope of the information and entities covered is narrower than the Democratic bill. “Covered data” is “precise geolocation data, proximity data, a persistent identifier, and personal health information” but not aggregated data, business contact information, de-identified data, employee screening data, and publicly available information. Those entities covered by the bill are those already subject to FTC jurisdiction along with common carriers and non-profits, but government agencies are not, again unlike the bill put forth by Democrats. Entities must abide by this bill to the extent they “collect[], process[], or transfer[] such covered data, or determine[] the means and purposes for the collection, processing, or transfer of covered data.”

Another key definition is how an “individual” is determined, for it excludes any person acting in her role as an employee and almost all work-related capacities, making clear employers will not need to comply with respect to those working for them.

“Personal health information” is “information relating to an individual that is

  • genetic information of the individual; or
  • information relating to the diagnosis or treatment of past, present, or future physical, mental health, or disability of the individual; and
  • identifies, or is reasonably linkable to, the individual.”

And yet, this term excludes educational information subject to the “Family Educational Rights and Privacy Act of 1974” and information subject to regulations promulgated pursuant to the HIPAA/HITECH Acts.

The “COVID–19 Consumer Data Protection Act of 2020” bars the collection, processing, or transferring of covered data for a covered purpose unless prior notice is provided, a person has provided affirmative consent, and the covered entity has agreed not to collect, process, or transfer such covered data for any purpose other than those detailed in the notice. However, leaving aside the bill’s enumerated allowable purposes for which covered data may be collected with consent, the bill provides a number of exemptions from this general bar. For example, collection, processing, or transfers necessary for a covered entity to comply with another law are permissible, apparently even in the absence of a person’s consent. Moreover, a covered entity need not obtain consent for operational or administrative tasks not disclosed in the notice provided to people.

The act does spell out “covered purposes” for which covered entities may collect, process, or transfer covered data with consent after notice has been given:

  • Collecting, processing, or transferring the covered data of an individual to track the spread, signs, or symptoms of COVID–19.
  • Collecting, processing, or transferring the covered data of an individual to measure compliance with social distancing guidelines or other requirements related to COVID–19 that are imposed on individuals under a Federal, State, or local government order.
  • Collecting, processing, or transferring the covered data of an individual to conduct contact tracing for COVID–19 cases.

Covered entities would be required to publish a privacy policy detailing for which of the above covered purposes a person’s covered data would be collected, processed, or transferred. This policy must also detail the categories of entities that receive covered data, as well as the covered entity’s data retention and data security policies.

There would be reporting requirements that would reach more covered entities than the Democratic bill’s. Accordingly, any covered entity collecting, processing, or transferring covered data for one of the enumerated covered purposes must issue a public report 30 days after enactment and then every 60 days thereafter.

Among other provisions in the bill, people would be able to revoke consent, a request that covered entities must honor within 14 days. Covered entities must also ensure the covered data are accurate, but this requirement falls a bit short of people being granted the right to correct inaccurate data, as they would, instead, merely be able to report inaccuracies. There is no recourse if a covered entity chooses not to heed these reports. Covered entities would need to “delete or de-identify all covered data collected, processed, or transferred for a [covered purpose] when it is no longer being used for such purpose and is no longer necessary to comply with a Federal, State, or local legal obligation, or the establishment, exercise, or defense of a legal claim.” Even though there is the commandment to delete or de-identify, the timing as to when that happens seems somewhat open-ended, as some covered entities could point to legal obligations in order to keep the data on hand.

Covered entities must also minimize covered data by limiting collection, processing, and transferring to “what is reasonably necessary, proportionate, and limited to carry out [a covered purpose.]” The FTC must draft and release guidelines “recommending best practices for covered entities to minimize the collection, processing, and transfer of covered data.” However, these guidelines would most likely be advisory in nature and would not carry the force of law or regulation, leaving covered entities to disregard some of the document if they choose. Covered entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of such data,” a mandate broader than the duty imposed by the Democratic bill.

The FTC and state attorneys general would enforce the new regime. The FTC would be able to seek civil fines for first violations in the same way as the Democrats’ privacy bill. However, unlike the other bill, the Republican bill would nullify any Federal Communications Commission (FCC) jurisdiction to the extent it conflicted with the FTC’s enforcement of the new act. Presumably, this would address jurisdictional issues raised by placing common carriers under FTC jurisdiction when they are usually overseen by the FCC. Even though state laws are preempted, state attorneys general would be able to bring actions to enforce this act at the state level. And, as noted earlier, there is no private right of action.


Odds and Ends (14 April)

Every week, not surprisingly, there are more developments in the technology space than I can reasonably get to. And so, this week, at least, I’ve decided to include some of the odds and ends.

To no great surprise, federal and state elected officials have been questioning Zoom on its security and privacy practices and demanding improvements thereof.

Earlier this month, Senator Michael Bennet (D-CO) sent a letter after The Washington Post found that thousands of Zoom calls could be accessed online that contained people’s sensitive personal information, such as therapy sessions and financial information. The culprit is apparently Zoom’s practice of using an identical naming format for each video, meaning once someone knows the format, they can look up many videos. Security experts say a platform like Zoom should give each file a unique name so as to avoid this outcome.
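The fix experts describe is straightforward in principle: instead of deriving every recording’s filename from one predictable pattern, give each file an unguessable random identifier. A minimal sketch of the contrast (the naming scheme shown is hypothetical, not Zoom’s actual format):

```python
import secrets

def predictable_name(meeting_topic: str) -> str:
    # Hypothetical predictable scheme: anyone who learns the pattern
    # can search for or enumerate recordings by likely topic names.
    return f"{meeting_topic}.mp4"

def unguessable_name() -> str:
    # secrets.token_urlsafe(16) yields 128 bits of randomness,
    # making the filename practically impossible to guess or enumerate.
    return f"{secrets.token_urlsafe(16)}.mp4"

print(predictable_name("therapy_session"))  # → therapy_session.mp4
print(unguessable_name())                   # e.g. 3XkP9q_wv1bZ…​.mp4
```

The underlying design point is that a filename here functions as an access credential, so it must be unpredictable, the same reasoning behind unguessable “anyone with the link” URLs.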

With these revelations in mind, Bennet wrote Zoom CEO Eric Yuan, asking him to “provide answers to the following questions no later than April 15, 2020: 

  • Please describe all data that Zoom collects from users with and without accounts and please specify how long Zoom retains this data. 
  • Please list every third party and service provider with which Zoom shares user data and for what purposes and level of compensation, if any.
  • Will Zoom require participants to provide affirmative consent if their calls are being recorded or will later be uploaded to the cloud or transcribed? When recorded calls are uploaded and transcribed, will Zoom provide all participants a copy along with an opportunity to correct errors in the recording?
  • Does Zoom plan to change the naming convention that allowed thousands of videos to become easily searchable online?
  • What steps has Zoom taken to notify users featured in videos that are now searchable online? And when users wish for these videos to be removed, what steps will Zoom take to do so, for example, by engaging the third parties where the videos are now viewable?
  • Which privacy settings for users with and without accounts are activated by default, and which require them to opt-in? Does Zoom plan to expand its default privacy settings?
  • What dedicated staff and other resources is Zoom devoting to ensure the privacy and safety of users on its platform?

Bennet was also quoted in a Politico article along with other Democratic Members calling for the Federal Trade Commission (FTC) to open an investigation. House Energy and Commerce Chair Frank Pallone Jr (D-NJ) and Consumer Protection & Commerce Subcommittee Chair Jan Schakowsky (D-IL) were both quoted as being in support of the FTC investigating. Senators Amy Klobuchar (D-MN) and Sherrod Brown (D-OH) are also requesting that the agency investigate Zoom’s claims on security and privacy as promised versus what the company is actually providing. Brown sent letters to Zoom and the FTC on this matter.

Moreover, the Politico article relates that in blessing Zoom for Government from a security standpoint, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the General Services Administration’s Federal Risk and Authorization Management Program explained in a statement:

We advise federal government users to not initiate video conferences using Zoom’s free/commercial offering, but instead to use Zoom for Government

More recently, Senators Elizabeth Warren (D-MA) and Ed Markey (D-MA) asked Zoom how well it is protecting the personal data of students per the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Act (COPPA). If the FTC were to find COPPA violations, the company could be facing as much as $42,530 per violation.

Markey wrote the FTC separately, urging agency “to issue guidance and provide a comprehensive resource for technology companies that are developing or expanding online conferencing tools during the coronavirus pandemic, so that these businesses can strengthen their cybersecurity and protect customer privacy.” He argued that “[a]t a minimum, this guidance should cover topics including:

  • Implementing secure authentication and other safeguards against unauthorized access;
  • Enacting limits on data collection and recording;
  • Employing encryption and other security protocols for securing data; and
  • Providing clear and conspicuous privacy policies for users.

Markey also “request[ed] that the FTC develop best practices for users of online conferencing software, so that individuals can make informed, safe decisions when choosing and utilizing these technologies. At a minimum, this guidance should cover topics including:

  • Identifying and preventing cyber threats such as phishing and malware;
  • Sharing links to online meetings without compromising security;
  • Restricting access to meetings via software settings; and
  • Recognizing that different versions of a company’s service may provide varying levels of privacy protection.

Many of the Democrats on the House Energy and Commerce Committee also asked Zoom about its recent update to privacy policies made after some of its substandard practices came to light. These Members stated:

“Despite Zoom’s recent clarifications to its privacy policy, a review of Zoom’s privacy policy shows that Zoom may still collect a significant amount of information about both registered and non-registered users from their use of the platform as well as from third parties. Zoom may use that information for a broad range of purposes, including for targeted marketing from both Zoom and third parties… As consumers turn to Zoom for business meetings, remote consultations with psychologists, or even virtual happy hours with friends, they may not expect Zoom to be collecting and using so much of their information.”

Moreover, federal agency Chief Information Officers are formally and informally directing agency employees not to use the commercial/free edition of Zoom as detailed by Federal News Network.

Last week, CISA and the United Kingdom’s National Cyber Security Centre (NCSC) released a joint advisory titled “COVID-19 exploited by malicious cyber actors.” The two agencies argued:

Malicious cyber actors are using the high appetite for COVID-19 related information as an opportunity to deliver malware and ransomware and to steal user credentials. Individuals and organisations should remain vigilant.

CISA and NCSC noted “[t]hreats observed include:

  • Phishing, using the subject of coronavirus or COVID-19 as a lure
  • Malware distribution using coronavirus or COVID-19 themed lures
  • Registration of new domain names containing coronavirus or COVID-19 related wording
  • Attacks against newly (and often rapidly) deployed remote access or remote working infrastructure.

The agencies added they “are working with law enforcement and industry partners to disrupt or prevent these malicious COVID-19 themed cyber activities.”

The Electronic Privacy Information Center (EPIC) sent the FTC a letter renewing the concerns about Zoom’s security practices it detailed in its complaint last year, which asked the agency to open an investigation. EPIC stated “[w]e asked you to open an investigation, to compel Zoom to fix the security flaws with its conferencing services, and to investigate the other companies engaged in similar practices.” The organization added that “[w]e anticipated that the FTC, with a staff of more than a 1,000 (EPIC has about a dozen people), would find many problems we missed…[t]hat would lead to a change in business practices, a consent order, and 20 years of agency oversight.”

However, the FTC and the Federal Communications Commission (FCC) sent joint letters “to three companies providing Voice over Internet Protocol (VoIP) services, warning them that routing and transmitting illegal robocalls, including Coronavirus-related scam calls, is illegal and may lead to federal law enforcement against them.” The FTC and FCC “sent a separate letter to USTelecom – The Broadband Association (USTelecom), a trade association that represents U.S.-based telecommunications-related businesses…thank[ing] USTelecom for identifying and mitigating fraudulent robocalls that are taking advantage of the Coronavirus national health crisis, and notes that the USTelecom Industry Traceback Group has helped identify various entities that appear to be responsible for originating or transmitting Coronavirus-related scam robocalls.”

The FCC also denied “an emergency petition requesting an investigation into broadcasters that have aired the President of the United States’ statements and press conferences regarding the novel coronavirus (COVID-19) and related commentary by other on-air personalities” filed by Free Press. The FCC claimed “the Petition misconstrues the Commission’s rules and seeks remedies that would dangerously curtail the freedom of the press embodied in the First Amendment.” In its press release, the FCC added “[t]he decision also makes clear that the FCC will neither act as a roving arbiter of broadcasters’ editorial judgments nor discourage them from airing breaking news events involving government officials in the midst of the current global pandemic.”

Markey and Senator Richard Blumenthal (D-CT) sent a letter “to Google requesting information about the company’s recently announced COVID-19 Community Mobility Reports.” They asked Google to answer the following:

  • Does Google plan to share with any government entities, researchers, or private sector partners any users’ coronavirus-related personal data or pseudonymous information?
  • Does Google plan to use datasets other than Location History for its Community Mobility Reports?
  • What measures has Google undertaken to ensure that the trends detailed in the reports are representative of the entire population of an area, including non-Google users, those without smartphones, or individuals that have opted out of Location History?
  • Does Google expect the Community Mobility Reports to be accurate for more rural or less connected communities?
  • What guidance has Google provided to public health officials about how to interpret the reports, including how Google accounts for common social patterns and categorizes locations?

Blumenthal also joined Senator Mark Warner (D-VA) and Representative Anna Eshoo (D-CA) in sending “a letter to White House Senior Advisor Jared Kushner, raising questions about reports that the White House has assembled technology and health care firms to establish a far-reaching national coronavirus surveillance system.” They stated their “fear that – absent a clear commitment and improvements to our health privacy laws – these extraordinary measures could undermine the confidentiality and security of our health information and become the new status quo.”

Warner, Eshoo, and Blumenthal argued:

Given reports indicating that the Administration has solicited help from companies with checkered histories in protecting user privacy, we have serious concerns that these public health surveillance systems may serve as beachheads for far-reaching health data collection efforts that go beyond responding to the current crisis. Public health surveillance efforts must be accompanied by governance measures that provide durable privacy protections and account for any impacts on our rights. For instance, secondary uses of public health surveillance data beyond coordinating our public health response should be strictly restricted. Any secondary usage for commercial purposes should be explicitly prohibited unless authorized on a limited basis with appropriate administrative process and public input. 

They asked that Kushner answer these questions:

  1. Which technology companies, data providers, and other companies have you approached to participate in the public health surveillance initiative and on what basis were they chosen?
  2. What measures will the Administration put into place to ensure that federal agencies and private sector partners do not misuse or reuse health data for non-pandemic-related purposes, including for training commercial algorithmic decision-making systems, and to require the disposal of data after the sunset of the national emergency? What additional steps have you taken to protect health data from their potential misuse or mishandling?
  3. What is the program described in the press meant to accomplish? Will it be used for the allocation of resources, symptom tracking, or contact tracing? What agency will be operating the program and which agencies will have access to the data? 
  4. When will the federal government stop collecting and sharing health data with the private sector for the public health surveillance initiative? Will the Administration commit to a sunset period after the lifting of the national emergency?
  5. What measures will the Administration put into place to ensure that the public health surveillance initiative protects against misuse of sensitive information and mitigates discriminatory outcomes, such as on the basis of racial identity, sexual orientation, disability status, and income?
  6. Will the Administration commit to conducting an audit of data use, sharing, and security by federal agencies and private sector partners under any waivers or surveillance initiative within a short period after the end of the health emergency?
  7. What steps has the Administration taken under the Privacy Act, which limits the federal government’s authority to collect personal data from third parties and imposes numerous other privacy safeguards?
  8. Will you commit to working with us to pass strong legal safeguards that ensure public health surveillance data can be effectively collected and used without compromising privacy? 

Finally, Consumer Reports showed that Facebook’s system for preventing incorrect COVID-19 information from being posted on its platform is not as robust as a top company official claimed. Kaveh Waddell of Consumer Reports stated:

Facebook has been saying for weeks that it’s intent on keeping coronavirus misinformation off its platforms, which include Instagram and WhatsApp. During one recent interview with NPR, Nick Clegg, Facebook’s vice president for global affairs and communication, cited two examples of the kinds of posts the company would not allow: any message telling people to drink bleach, or discrediting urgent calls for social distancing to slow the pandemic. 

Waddell continued:

I’ve been covering Facebook and online misinformation for several years, and I wanted to see how well the company is policing coronavirus-related advertising during the global crisis. So I put the two dangerous claims Clegg brought up, plus other false or dangerous information, into a series of seven paid ads.

Facebook approved them all. The advertisements remained scheduled for publication for more than a week without being flagged by Facebook. Then, I pulled them out of the queue to make sure none of them were seen by the public. Consumer Reports made certain not to publish any ads with false or misleading information.

Senate Democrats Release Privacy Principles

The ranking members of four Senate Committees have released their principles for any privacy legislation, many of which are likely to be rejected by Republicans and many industry stakeholders (e.g., no preemption of the “California Consumer Privacy Act” (AB 375) and a private right of action for consumers).

Nonetheless, Senators Maria Cantwell (D-WA), Dianne Feinstein (D-CA), Patty Murray (D-WA), and Sherrod Brown (D-OH) agreed to these principles, and reportedly Senate Minority Leader Chuck Schumer (D-NY) convened and facilitated the effort, which has come ahead of the release of any of the privacy bills that have been under development this year in the Senate.

Of course, the Senate Commerce, Science, and Transportation Committee had convened an informal working group late last year consisting of Cantwell, Chair Roger Wicker (R-MS) and Senators John Thune (R-SD), Jerry Moran (R-KS), Brian Schatz (D-HI), and Richard Blumenthal (D-CT) to hash out a privacy bill. However, as with most other such efforts, the timeline for releasing bill text has been repeatedly pushed back, even after Wicker and Cantwell tried working by themselves on a bill late in the summer. Additionally, Moran and Blumenthal, the chair and ranking member of the Manufacturing, Trade, and Consumer Protection Subcommittee, have been working on a bill for some time as well but without a timeline for releasing text.

And, the efforts at this committee are in parallel to those in other committees. Senate Judiciary Chair Lindsey Graham (R-SC) has gotten his committee onto the field with hearings on the subject and has articulated his aim to play a role in crafting a bill. Likewise, the Senate Banking Committee has held hearings and is looking to participate in the process as well. But, like Senate Commerce, neither has released a bill.

Of course, it is easier to write out one’s principles than to draft legislation. And yet, the release of these desired policies elegantly puts down a marker for Senate Democrats at a time when the majority in the chamber is struggling to coalesce and release a privacy bill. The move also demonstrates cohesion among the top Democrats on four of the committees with a slice of jurisdiction over privacy and data security issues: Commerce, Banking, HELP, and Judiciary.