Privacy Shield Hearing

The focus was on how the U.S. and EU can reach agreement on an arrangement that will not be struck down by the EU’s highest court.

Last week, the Senate Commerce, Science, and Transportation Committee held a hearing on the now invalidated European Union (EU)-United States (U.S.) Privacy Shield, a mechanism that allowed companies to transfer the personal data of EU residents to the U.S. The EU’s highest court struck down the adequacy decision that underpinned the system on the basis of U.S. surveillance activities and a lack of redress that violated EU law. This is the second time in a decade the EU’s top court has invalidated a transfer arrangement, the first being the Safe Harbor system. Given the estimated billions, or even trillions, of dollars in value realized from data flows between the EU and U.S., there is keen interest on both sides of the Atlantic in finding a legal path forward. However, absent significant curtailment of U.S. surveillance and/or a significant expansion of the means by which EU nationals could have violations of their rights rectified, it would appear a third agreement may not withstand the inevitable legal challenges. Moreover, there are questions as to the legality of other transfer tools in light of the Court of Justice of the European Union’s (CJEU) decision in the case known as Schrems II, and some Standard Contractual Clauses (SCC) and Binding Corporate Rules (BCR) may soon be found in violation of EU law, too.

Consequently, a legislative fix, or some portion of one, could be attached to federal privacy legislation. Hence, the striking down of Privacy Shield may provide additional impetus to Congress and the next Administration to reach a deal on privacy. Moreover, the lapsed reauthorization of some Foreign Intelligence Surveillance Act (FISA) authorities may be another legislative opportunity for the U.S. to craft an approach amenable to the EU in order to either obtain an adequacy decision or a successor agreement to the Privacy Shield.

Chair Roger Wicker (R-MS) approached the issue from the perspective of international trade and the economic benefit accruing to businesses on both sides of the Atlantic, and his opening remarks dwelled less on the privacy and surveillance aspects of the CJEU’s ruling. Wicker made the case that the EU misunderstands the U.S. system: in his view, redress rights in the U.S. are more than adequate, and the U.S. surveillance regime is similar to those of some EU nations. One wonders if the CJEU is inclined to agree with this position. Nonetheless, Wicker expressed hope that the EU and U.S. can reach “a durable and lasting data transfer framework…that provides meaningful data protections to consumers, sustains the free flow of information across the Atlantic, and encourages continued economic and strategic partnership with our European allies – a tall order but an essential order.” He worried about the effect of the CJEU’s ruling on SCCs. Wicker argued that the EU and U.S. share democratic values and hinted that the ongoing talks in the committee to reach a federal data privacy law might include augmented redress rights that might satisfy the CJEU.

Ranking Member Maria Cantwell (D-WA) spoke very broadly about a range of issues related to data transfers and privacy. She stressed the importance of data flows in the context of larger trade relations. Cantwell also stressed the shared values between the U.S. and the EU and her hope that the two entities work “together on these very important national concerns, trade and technology, so that we can continue to improve economic opportunities and avoid moves towards protectionism.” She also called for federal privacy legislation but hinted that states should still be able to regulate privacy, suggesting her commitment to having a federal law be a floor for state laws. Cantwell also asserted that bulk surveillance, the likes of which the National Security Agency has engaged in, may simply not be legal under EU law.

Deputy Assistant Secretary of Commerce for Services James Sullivan blurred the issues presented by Schrems II much like Cantwell did. He folded the CJEU’s concerns about U.S. surveillance practices and the lack of meaningful recourse for EU residents whose rights are violated into a call for like-minded nations to unite against authoritarian nations. Sullivan distinguished between U.S. surveillance and the surveillance conducted by the People’s Republic of China (PRC) (without naming the nation) and other regimes, as if this should satisfy the EU as to the legality and propriety of U.S. treatment of EU personal data. Sullivan stated:

  • The Schrems II decision has created enormous uncertainties for U.S. companies and the transatlantic economy at a particularly precarious time. Immediately upon issuance of the ruling, the 5,400 Privacy Shield participants and their business partners in the EU could no longer rely on the Framework as a lawful basis for transferring personal data from Europe to the United States. Because neither the Court nor European data protection authorities provided for any enforcement grace period, Privacy Shield companies were left with three choices: (1) risk facing potentially huge fines (of up to 4 percent of total global turnover in the preceding year) for violating GDPR, (2) withdraw from the European market, or (3) switch right away to another more expensive data transfer mechanism.
  • Unfortunately, because of the Court’s ruling in the Privacy Shield context that U.S. laws relating to government access to data do not confer adequate protections for EU personal data, the use of other mechanisms like SCCs and BCRs to transfer EU personal data to the United States is now in question as well.
  • The objective of any potential agreement between the United States and the European Commission to address Schrems II is to restore the continuity of transatlantic data flows and the Framework’s privacy protections by negotiating targeted enhancements to Privacy Shield that address the Court’s concerns in Schrems II. Any such enhancements must respect the U.S. Government’s security responsibilities to our citizens and allies.
  • To be clear, we expect that any enhancements to the Privacy Shield Framework would also cover transfers under all other EU-approved data transfer mechanisms like SCCs and BCRs as well.
  • The Schrems II decision has underscored the need for a broader discussion among likeminded democracies on the issue of government access to data. Especially as a result of the extensive U.S. surveillance reforms since 2015, the United States affords privacy protections relating to national security data access that are equivalent to or greater than those provided by many other democracies in Europe and elsewhere.
  • To minimize future disruptions to data transfers, we have engaged with the European Union and other democratic nations in a multilateral discussion to develop principles based on common practices for addressing how best to reconcile law enforcement and national security needs for data with protection of individual rights.
  • It is our view that democracies should come together to articulate shared principles regarding government access to personal data—to help make clear the distinction between democratic societies that respect civil liberties and the rule of law and authoritarian governments that engage in the unbridled collection of personal data to surveil, manipulate, and control their citizens and other individuals without regard to personal privacy and human rights. Such principles would allow us to work with like-minded partners in preserving and promoting a free and open Internet enabled by the seamless flow of data.

Federal Trade Commission (FTC) Commissioner Noah Joshua Phillips stressed he was speaking in a personal capacity and not for the FTC. He extolled the virtues of the “free and open” internet model in the U.S. with the double implication that it is superior both to the models of nations like the PRC and Russia and to the EU model. Phillips seemed to be advocating for talking the EU into accepting that the U.S. privacy regime and civil liberties protections are stronger than any other nation’s. He also made the case, like other witnesses, that U.S. data privacy and protection regulation is more similar to the EU’s than to those of the PRC, Russia, and others. Phillips also sought to blur the issues and recast Privacy Shield in the context of the global struggle between democracies and authoritarian regimes. Phillips asserted:

  • First, we need to find a path forward after Schrems II, to permit transfers between the U.S. and EU. I want to recognize the efforts of U.S. and EU negotiators to find a replacement for Privacy Shield. While no doubt challenging, I have confidence in the good faith and commitment of public servants like Jim Sullivan, with whom I have the honor of appearing today, and our partners across the Atlantic. I have every hope and expectation that protecting cross-border data flows will be a priority for the incoming Administration, and I ask for your help in ensuring it is.
  • Second, we must actively engage with nations evaluating their approach to digital governance, something we at the FTC have done, to share and promote the benefits of a free and open Internet. There is an active conversation ongoing internationally, and at every opportunity—whether in public forums or via private assistance—we must ensure our voice and view is heard.
  • Third, we should be vocal in our defense of American values and policies. While we as Americans always look to improve our laws—and I commend the members of this committee on their important work on privacy legislation and other critical matters—we do not need to apologize to the world. When it comes to civil liberties or the enforcement of privacy laws, we are second to none. Indeed, in my view, the overall U.S. privacy framework—especially with the additional protections built into Privacy Shield—should certainly qualify as adequate under EU standards.
  • Fourth, as European leaders call to strengthen ties with the U.S., we should prioritize making our regimes compatible for the free flow of data. This extends to the data governance regimes of like-minded countries outside of Europe as well. Different nations will have different rules, but relatively minor differences need not impede mutually-beneficial commerce. We need not and should not purport to aim for a single, identical system of data governance. And we should remind our allies, and remind ourselves, that far more unites liberal democracies than divides us.
  • Fifth and finally, if we must draw lines, those lines should be drawn between allies with shared values—the U.S., Europe, Japan, Australia, and others—and those, like China and Russia, that offer a starkly different vision. I am certainly encouraged when I hear recognition of this distinction from Europe. European Data Protection Supervisor Wojciech Wiewiórowski recently noted that the U.S. is much closer to Europe than is China and that he has a preference for data being processed by countries that share values with Europe. Some here in the U.S. are even proposing agreements to solidify the relationships among technologically advanced democracies, an idea worth exploring in more detail.

Washington University Professor of Law Neil Richards stressed that the Schrems II decision spells out how the U.S. would achieve adequacy: reforming surveillance and providing meaningful redress for alleged privacy violations. Consequently, FISA would need to be rewritten and narrowed and a means for EU residents to seek relief beyond the current Ombudsman system is needed, possibly a statutory right to sue. Moreover, he asserted strong data protection and privacy laws are needed and some of the bills introduced in this Congress could fit the bill. Richards asserted:

In sum, the Schrems litigation is a creature of distrust, and while it has created problems for American law and commerce, it has also created a great opportunity. That opportunity lies before this Committee –the chance to regain American leadership in global privacy and data protection by passing a comprehensive law that provides appropriate safeguards, enforceable rights, and effective legal remedies for consumers. I believe that the way forward can not only safeguard the ability to share personal data across the Atlantic, but it can do so in a way that builds trust between the United States and our European trading partners and between American companies and their American and European customers. I believe that there is a way forward, but it requires us to recognize that strong, clear, trust-building rules are not hostile to business interest, that we need to push past the failed system of “notice and choice,” that we need to preserve effective consumer remedies and state-level regulatory innovation, and seriously consider a duty of loyalty. In that direction, I believe, lies not just consumer protection, but international cooperation and economic prosperity.

Georgia Tech Professor Peter Swire explained that the current circumstances make the next Congress the best possibility in memory to enact privacy legislation because of the need for a Privacy Shield replacement, passage of the new California Privacy Rights Act (Proposition 24), and the Biden Administration’s likely support for such legislation. Swire made the following points:

  1. The European Data Protection Board in November issued draft guidance with an extremely strict interpretation of how to implement the Schrems II case.
  2. The decision in Schrems II is based on EU constitutional law. There are varying current interpretations in Europe of what is required by Schrems II, but constitutional requirements may restrict the range of options available to EU and U.S. policymakers.
  3. Strict EU rules about data transfers, such as the draft EDPB guidance, would appear to result in strict data localization, creating numerous major issues for EU- and U.S.-based businesses, as well as affecting many online activities of EU individuals.
  4. Along with concerns about lack of individual redress, the CJEU found that the EU Commission had not established that U.S. surveillance was “proportionate” in its scope and operation. Appendix 2 to this testimony seeks to contribute to an informed judgment on proportionality, by cataloguing developments in U.S. surveillance safeguards since the Commission’s issuance of its Privacy Shield decision in 2016.
  5. Negotiating an EU/U.S. adequacy agreement is important in the short term.
  6. A short-run agreement would assist in creating a better overall long-run agreement or agreements.
  7. As the U.S. considers its own possible legal reforms in the aftermath of Schrems II, it is prudent and a normal part of negotiations to seek to understand where the other party – the EU – may have flexibility to reform its own laws.
  8. Issues related to Schrems II have largely been bipartisan in the U.S., with substantial continuity across the Obama and Trump administrations, and expected as well for a Biden administration.
  9. Passing comprehensive privacy legislation would help considerably in EU/U.S. negotiations.
  10. This Congress may have a unique opportunity to enact comprehensive commercial privacy legislation for the United States.

© Michael Kans, Michael Kans Blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans and Michael Kans Blog with appropriate and specific direction to the original content.


Section 230 Hearing Almost Devoid Of Discussion About Section 230

The Section 230 hearing was largely political theater.

The Senate Commerce, Science, and Transportation Committee held its long-awaited hearing ostensibly on 47 U.S.C. 230 (Section 230) with the CEOs of Facebook, Google, and Twitter. I suppose the title of the hearing should have told us all we need to know about the approach of the Republican majority: “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” And, oddly enough, there are likely areas where Republicans can agree with Democrats in terms of less desirable outcomes flowing perhaps from Section 230 immunity. For example, The New York Times and other outlets have highlighted how poorly technology platforms do at identifying and taking down child pornography or non-consensual pornography, and I would think tech’s staunchest supporters would concede there is room for improvement. However, this hearing seemed conceived and executed to perpetuate the Republican narrative that technology companies are biased against them and their content. And, to amplify this message, Republican Senators crafted novel arguments (e.g., Senator Mike Lee (R-UT) claiming that a platform labeling a false or misleading statement is censorship) or all but yelled at the CEOs (e.g., Senator Ted Cruz (R-TX) positively shouting at Twitter head Jack Dorsey).

Chair Roger Wicker (R-MS) again propounded the position that technology companies should not be able to moderate, correct, label, or block political content, especially conservative material. In essence, Republicans seem to be making the case that Twitter, Facebook, Google, and others have become the de facto public square for 21st Century America, and just as a person marching with a sign in an actual town cannot be stopped, so, too, should it be online. This argument conveniently ignores the long-established fact that the First Amendment restricts government regulation or suppression of speech and largely does not reach moderation by private companies. Also, Republicans are taking the paradoxical position that the government should be able to dictate or bully private companies into complying with their desired policy outcome even as they purport to favor free market economics. It is also telling that Wicker only wants to change Section 230 and not do away with it entirely. A cynic might observe that so long as the social media platforms give conservatives the treatment they want, the extensively documented abuse and harassment women and people of color face online does not seem important enough to address. Moreover, Wicker had little to say about the tide of lies, misinformation, and disinformation flooding the online world. Finally, Wicker relied only on anecdotal evidence that conservatives and Republicans are somehow being muted or silenced at a greater rate than liberals and Democrats, for the very good reason that no evidence from reputable research supports this argument. The data we have show conservative material flourishing online.

In his opening statement, Wicker claimed:

  • We have convened this morning to continue the work of this Committee to ensure that the internet remains a free and open space, and that the laws that govern it are sufficiently up to date. The internet is a great American success story, thanks in large part to the regulatory and legal structure our government put in place. But we cannot take that success for granted. The openness and freedom of the internet are under attack.
  • For almost 25 years, the preservation of internet freedom has been the hallmark of a thriving digital economy in the United States. This success has largely been attributed to a light-touch regulatory framework and to Section 230 of the Communications Decency Act – often referred to as the “26 words that created the internet.”
  • There is little dispute that Section 230 played a critical role in the early development and growth of online platforms.  Section 230 gave content providers protection from liability to remove and moderate content that they or their users consider to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This liability shield has been pivotal in protecting online platforms from endless and potentially ruinous lawsuits. But it has also given these internet platforms the ability to control, stifle, and even censor content in whatever manner meets their respective “standards.”   The time has come for that free pass to end.
  • After 24 years of Section 230 being the law of the land, much has changed. The internet is no longer an emerging technology. The companies before us today are no longer scrappy startups operating out of a garage or a dorm room. They are now among the world’s largest corporations, wielding immense power in our economy, culture, and public discourse – immense power. The applications they have created are connecting the world in unprecedented ways, far beyond what lawmakers could have imagined three decades ago. These companies are controlling the overwhelming flow of news and information that the public can share and access. 
  • One noteworthy example occurred just two weeks ago after our subpoenas were unanimously approved; the New York Post – the country’s fourth largest newspaper – ran a story revealing communications between Hunter Biden and a Ukrainian official. The report alleged that Hunter Biden facilitated a meeting with his father, Joe Biden, who was then the Vice President of the United States. Almost immediately, both Twitter and Facebook took steps to block or limit access to the story. Facebook, according to its Policy Communications Manager, began “reducing its distribution on [the] platform” pending a third-party fact check.  Twitter went beyond that, blocking all users — including the House Judiciary Committee — from sharing the article on feeds and through direct messages. Twitter even locked the New York Post’s account entirely, claiming the story included “hacked materials” and was “potentially harmful.”
  • It is worth noting that both Twitter and Facebook’s aversion to hacked materials has not always been so stringent. For example, when the President’s tax returns were illegally leaked, neither company acted to restrict access to that information. Similarly, the now-discredited Steele dossier was widely shared without fact checking or disclaimers. This apparent double standard would be appalling under normal circumstances. But the fact that selective censorship is occurring in the midst of the 2020 election cycle dramatically amplifies the power wielded by Facebook and Twitter.
  • Google recently generated its own controversy when it was revealed that the company threatened to cut off several conservative websites, including the Federalist, from their ad platform. Make no mistake, for sites that rely heavily on advertising revenue for their bottom line, being blocked from Google’s services – or “demonetized” – can be a death sentence.
  • According to Google, the offense of these websites was hosting user-submitted comment sections that included objectionable content. But Google’s own platform, YouTube, hosts user-submitted comment sections for every video uploaded. It seems that Google is far more zealous in policing conservative sites than its own YouTube platform for the same types of offensive and outrageous language.
  • It is ironic that when the subject is net neutrality, technology companies, including Facebook, Google, and Twitter, have warned about the grave threat of blocking or throttling the flow of information on the internet. Meanwhile, these same companies are actively blocking and throttling the distribution of content on their own platforms and are using protections under Section 230 to do it. Is it any surprise that voices on the right are complaining about hypocrisy or, even worse, anti-democratic election interference?
  • These recent incidents are only the latest in a long trail of censorship and suppression of conservative voices on the internet. Reasonable observers are left to wonder whether big tech firms are obstructing the flow of information to benefit one political ideology or agenda.
  • My concern is that these platforms have become powerful arbiters of what is true and what content users can access. The American public gets little insight into the decision-making process when content is moderated, and users have little recourse when they are censored or restricted. I hope we can all agree that the issues the Committee will discuss today are ripe for thorough examination and action. 
  • I have introduced legislation to clarify the intent of Section 230’s liability protections and increase the accountability of companies who engage in content moderation. The “Online Freedom and Viewpoint Diversity Act” would make important changes to “right-size” the liability shield and make clear what type of content moderation is protected. This legislation would address the challenges we have discussed while still leaving fundamentals of Section 230 in place.
  • Although some of my colleagues on the other side of the aisle have characterized this as a purely partisan exercise, there is strong bipartisan support for reviewing Section 230. In fact, both presidential candidates Trump and Biden have proposed repealing Section 230 in its entirety – a position I have not yet embraced. I hope we can focus today’s discussion on the issues that affect all Americans. Protecting a true diversity of viewpoints and free discourse is central to our way of life. I look forward to hearing from today’s witnesses about what they are doing to promote transparency, accountability, and fairness in their content moderation processes. And I thank each of them for cooperating with us in the scheduling of this testimony.

Ranking Member Maria Cantwell (D-WA) stayed largely in the mainstream of Democratic thought and policy on Section 230. She opened the aperture on technology issues and spotlighted the problems she sees, including the effect that declining advertising revenue has had on the U.S. media and the growing dominance Facebook and Google have in online advertising. This is not surprising since she released a report on this very subject the day before. Cantwell discussed at some length Russian election interference, a subject tangentially related to Section 230. Perhaps she was hinting that technology companies should be charged with finding and removing the types of misinformation foreign governments and malign actors are spreading to wreak havoc in the U.S. If so, she did not hit this point too hard. Rather, her recitation of election interference was intended to put Republicans on their back foot, for if the subject of the hearing turned to Russian disinformation and related efforts, they might have to break ranks with the White House and President Donald Trump on the threat posed by Russia. Cantwell also went off topic a bit by obliquely discussing statements made by Trump and others about the security and legality of mail-in voting. She suggested, without being specific, that there may be means of bolstering Section 230 to drive platforms to take down disinformation and harmful material more expeditiously. Cantwell also poked Wicker by noting that the print media was not being subpoenaed to testify on why it largely ignored the New York Post’s questionable Hunter Biden article.

Cantwell asserted:

  • So these issues about how we harness the information age to work for us, and not against us, is something that we deal with every day of the week, and we want to have discussion and discourse. I believe that discussion and discourse today should be broader than just 230. There are issues of privacy that our committee has addressed and issues of how to make sure there is a free and competitive news market.
  • I noticed today we’re not calling in the NAB or the Publishers Association asking them why they haven’t printed or reprinted information that you alluded to in your testimony that you wish was more broadly distributed. To have the competition in the news market is to have a diversity of voices and diversity of opinion, and in my report, just recently released, we show that true competition really does help perfect information, both for our economy, and for the health of our democracy. So I do look forward to discussing these issues today. What I do not want today’s hearing to be is a chilling effect on the very important aspects of making sure that hate speech or misinformation related to health and public safety, are allowed to remain on the internet.
  • We all know what happened in 2016, and we had reports from the FBI, our intelligence agencies, and a bipartisan Senate committee that concluded in 2016, that Russian operatives did, masquerading as Americans, use targeted advertisements, intentionally falsified news articles, self generated content and social media platform tools to interact and attempt to deceive tens of millions of social media users in the United States. Director of National Intelligence, then Republican Senator–former Senator–Dan Coats said in July 2018, “The warning lights are blinking red that the digital infrastructure that serves our country is literally under attack.”
  • So I take this issue very seriously and have had for many years, that is, making sure, as the Mueller–Special Counsel Mueller indicated, 12 Russian intelligence officers hacked the DNC, and various information detailing phishing attacks into our state election boards, online personas, and stealing documents. So, when we had a subcommittee hearing and former Bush Homeland Security Director Michael Chertoff testified, I asked him point blank, because there were some of our colleagues who were saying, “you know what? Everybody does election interference.” So I asked him if election interference was something that we did, or should be encouraging? He responded that he agreed:  “Interfering with infrastructure or elections is completely off limits and unacceptable.”
  • That is why I believe that we should be working aggressively internationally to sanction anybody that interferes in our elections. So I hope today that we will get a report from the witnesses on exactly what they have been doing to clamp down on election interference. I hope that they will tell us what kind of hate speech and misinformation that they have taken off the books. It is no secret that there are various state actors who are doing all they can to take a whack at democracy, to try to say that our way of government, that our way of life, that our way of freedom of speech and information, is somehow not as good as we have made it, being the beacon of democracy around the globe.
  • I am not going to let or tolerate people to continue to whack at our election process, our vote by mail system, or the ability of tech platforms, security companies, our law enforcement entities, and the collective community to speak against misinformation and hate speech. We have to show that the United States of America stands behind our principles and that our principles do also transfer to the responsibility of communication online. As my colleagues will note, we’ve all been through this in the past. That is why you, Mr. Chairman, and I, and Senators Rosen and Thune, sponsored the Hack Act that is to help increase the security and cyber security of our nation and create a workforce that can fight against that. That is why I joined with Van Hollen and Rubio on the Deter Act, especially in establishing sanctions against Russian election interference, and to continue to make sure that we build the infrastructure of tomorrow.
  • So, I know that some people think that these issues are out of sight and out of mind. I guarantee you, they’re not. There are actors who have been at this for a long time. They wanted to destabilize Eastern Europe, and we became the second act when they tried to destabilize our democracy here by sowing disinformation. I want to show them that we in the United States do have fair elections. We do have a fair process. We are going to be that beacon of democracy.
  • So, I hope that as we talk about 230 today and we hear from the witnesses on the progress that they have made in making sure that disinformation is not allowed online, that we will also consider ways to help build and strengthen that. That is to say, as some of those who are testifying today, what can we do on transparency, on reporting, on analysis, and yes, I think you’re going to hear a lot about algorithms today, and the kinds of oversight that we all want to make sure that we can continue to have the diversity of voices in the United States of America, both online and offline.
  • I do want to say though, Mr. Chairman, I am concerned about the vertical nature of news and information. Today I expect to ask the witnesses about the fact that I believe they create a choke point for local news. The local news media have lost 70% of their revenue over the last decade, and we have lost thousands, thousands of journalistic jobs that are important. It was even amazing to me that the sequence of events yesterday had me being interviewed by someone at a newspaper who was funded by a joint group of the Knight Foundation, and probably Facebook funds, to interview me about the fact that the news media and broadcast has fallen on such a decline because of loss of revenue as they’ve made the transition to the digital age.
  • Somehow, somehow, we have to come together to show that the diversity of voices that local news represent need to be dealt with fairly when it comes to the advertising market. And that too much control in the advertising market puts a foot on their ability to continue to move forward and grow in the digital age. Just as other forms of media have made the transition, and yes still making the transition, we want to have a very healthy and dynamic news media across the United States of America. So, I plan to ask the witnesses today about that.
  • I wish we had time to go into depth on privacy and privacy issues but Mr. Chairman, you know, and so does Senator Thune and other colleagues of the Committee on my side, how important it is that we protect American consumers on privacy issues. That we’re not done with this work, that there is much to do to bring consensus in the United States on this important issue. And I hope that as we do have time or in the follow up to these questions, that we can ask the witnesses about that today.
  • But make no mistake, gentlemen, thank you for joining us, but this is probably one of many, many, many conversations that we will have about all of these issues. But again, let’s harness the information age, as you are doing, but let’s also make sure that consumers are fairly treated and that we are making it work for all of us to guarantee our privacy, our diversity of voices, and upholding our democratic principles and the fact that we, the United States of America, stand for freedom of information and freedom of the press.

Twitter CEO Jack Dorsey’s written testimony seeks to distinguish his platform’s good practices (e.g. transparency and no kowtowing to the political powers that be) from Facebook’s bad practices. Regarding algorithms, the secret sauce of how users see what they see and why some content gets amplified, Dorsey seems to make the case that a platform should make multiple algorithms available to users and let them choose. A couple of troubling implications follow from such an approach. First, if a user is seeing content that is objectionable, well, he bears some of the blame because he chose it. Second, allowing people to pick their own algorithms seems very similar to a platform using different algorithms for different people in that the net effect will still be filter bubbles. The difference is that with choice, there will be the illusion of control. Finally, on privacy, Dorsey sidesteps the issue of whether people should be allowed to stop platforms from collecting personal data by pledging his loyalty to giving people choice and control over its collection, use, and distribution.

In terms of Section 230, here are Dorsey’s thoughts:

  • As you consider next steps, we urge your thoughtfulness and restraint when it comes to broad regulatory solutions to address content moderation issues. We must optimize for new startups and independent developers. In some circumstances, sweeping regulations can further entrench companies that have large market shares and can easily afford to scale up additional resources to comply. We are sensitive to these types of competition concerns because Twitter does not have the same breadth of interwoven products or market size as compared to our industry peers. We want to ensure that new and small companies, like we were in 2006, can still succeed today. Doing so ensures a level playing field that increases the probability of competing ideas to help solve problems going forward. We must not entrench the largest companies further.
  • I believe the best way to address our mutually-held concerns is to require the publication of moderation processes and practices, a straightforward process to appeal decisions, and best efforts around algorithmic choice. These are achievable in short order. We also encourage Congress to enact a robust​ federal privacy framework that protects consumers while fostering competition and innovation.

Facebook CEO Mark Zuckerberg framed Section 230 as allowing free speech to thrive online because platforms would otherwise avoid legal liability by declining to host any material that could result in a lawsuit. He also praised the provisions that allow for content moderation, such as “basic moderation” for “removing hate speech and harassment that impacts the safety and security of their communities.” Zuckerberg avoided the question of moderating political content where the leaders of nations post material that is patently untrue or inflammatory. He then claimed Facebook supports giving people a voice, but this is contrary to media accounts of the company doing the bidding of authoritarian regimes by taking down the posts of, and shutting down the accounts of, dissidents and critics. Moreover, Zuckerberg argued that Section 230’s liability shield permits the company to police and remove material that creates risk through “harm by trying to organize violence, undermine elections, or otherwise hurt people.” Some have argued the opposite is the case: if Facebook faced regulatory or legal jeopardy for hosting such material or not taking it down in a timely fashion, it would act much more quickly and expend more resources to do so.

Zuckerberg then detailed his company’s efforts to ensure the social media giant is providing Americans with accurate information about voting, much of which would please Democrats and displease Republicans, the latter of which have inveighed against the appending of fact checking to assertions made by Trump and others about the election.

Zuckerberg also pushed back on Cantwell’s assertions regarding the effect his platform and Google have had on journalism. He claimed Facebook is another venue by which media outlets can make money and touted the Facebook Journalism Project, in which Facebook has “invested more than $425 million in this effort, including developing news products; providing grants, training, and tools for journalists; and working with publishers and educators to increase media literacy.”

As for Zuckerberg’s position on Section 230 legislation, he argued:

  • However, the debate about Section 230 shows that people of all political persuasions are unhappy with the status quo. People want to know that companies are taking responsibility for combatting harmful content—especially illegal activity—on their platforms. They want to know that when platforms remove content, they are doing so fairly and transparently. And they want to make sure that platforms are held accountable.
  • Section 230 made it possible for every major internet service to be built and ensured important values like free expression and openness were part of how platforms operate. Changing it is a significant decision. However, I believe Congress should update the law to make sure it’s working as intended. We support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals, and I look forward to a meaningful dialogue about how we might update the law to deal with the problems we face today.
  • At Facebook, we don’t think tech companies should be making so many decisions about these important issues alone. I believe we need a more active role for governments and regulators, which is why in March last year I called for regulation on harmful content, privacy, elections, and data portability. We stand ready to work with Congress on what regulation could look like in these areas. By updating the rules for the internet, we can preserve what’s best about it—the freedom for people to express themselves and for entrepreneurs to build new things—while also protecting society from broader harms. I would encourage this Committee and other stakeholders to make sure that any changes do not have unintended consequences that stifle expression or impede innovation.

Alphabet CEO Sundar Pichai framed Google’s many products as bringing the world information for free. He voiced support for amorphous privacy legislation and highlighted Google’s $1 billion commitment to supporting some journalism outlets. He asserted Google, YouTube, and related properties exercise their content moderation without political bias. Pichai offered these sentiments on Section 230:

As you think about how to shape policy in this important area, I would urge the Committee to be very thoughtful about any changes to Section 230 and to be very aware of the consequences those changes might have on businesses and consumers. At the end of the day, we all share the same goal: free access to information for everyone and responsible protections for people and their data. We support legal frameworks that achieve these goals…

© Michael Kans, Michael Kans Blog and, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Setting The Plate For Section 230 Hearing

The top Republican and Democrat on the Senate Commerce Committee seek to frame the 28 October hearing on Section 230 in the light they favor.

Before the Senate Commerce, Science, and Transportation Committee held its hearing today on 47 U.S.C. 230 (Section 230), both Chair Roger Wicker (R-MS) and Ranking Member Maria Cantwell (D-WA) sought to provide their slant on the proceedings. Wicker continued with the Republican narrative by suggesting social media platforms may be cooperating with the Biden Campaign, and Cantwell released a report on how these platforms have adversely affected local journalism to the detriment of American democracy.

Wicker sent letters to Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey that run obliquely along the same lines as Senator Josh Hawley’s (R-MO) letter to the Federal Election Commission (FEC) claiming that the two platforms’ restriction on spreading the dubious New York Post story on Hunter Biden was an in-kind campaign contribution.

Wicker wrote to Zuckerberg and Dorsey

In the interest of fully disclosing any interactions with the candidates and their campaigns, I request that you provide the Committee with specific information regarding whether and how [Facebook/Twitter have] provided access to any data, analytics, or other information to either major political party, candidate, or affiliates thereof. This includes information related to advertising, post or page performance, engagement, or other data that might shape or influence decision-making by the candidate or campaign. In addition, please indicate whether this information is provided equitably to all candidates, and how decisions are made regarding what information is provided and to whom.

Clearly, Wicker is after any indication that the Biden Campaign has received undue or extra help or information the Trump Campaign has not. Facebook has taken millions of dollars in advertising from the two campaigns and from other parties; Twitter stopped accepting political advertising in late 2019. Consequently, it is likely there will be mountains of material to provide the committee. This inquiry may have been made in the interest of ensuring a fairly contested election. Or perhaps Wicker and his staff have some inside information about the two platforms’ relations with the Biden Campaign. Or perhaps the letter is meant as a fishing expedition in the hope that any such evidence will turn up.

Nonetheless, these letters may have the prophylactic effect of chilling any actions Facebook and Twitter might take in the last week of the election lest they be hauled before Congress again to answer for their moderation and takedown decisions regarding political material and misinformation. If it turns out the Trump Campaign has gotten advantageous treatment, it is hard to see how Wicker and other Republicans would weave that greater assistance to President Donald Trump into their perpetual campaign of decrying alleged but never proven anti-conservative bias.

But, as mentioned before, Wicker could attempt to portray any assistance provided to the Biden Campaign as an in-kind contribution as Hawley did after sharing of the dubious New York Post article was limited on the platforms even though there are clear exemptions for the media to federal laws and regulations on aid to campaigns.

Hawley claimed in a letter to the FEC that Twitter and Facebook gave the Biden Campaign an in-kind contribution by blocking the article in violation of federal campaign finance law. Hawley, however, was careful to couch his claims in language suggesting that Twitter and Facebook’s actions (which he terms suppression) were in-kind contributions instead of outright asserting that they were.

While Hawley quite accurately quotes the law on what constitutes a contribution (a “contribution” includes “anything of value . . . for the purpose of influencing any election for Federal office”), he is apparently unaware of the regulations promulgated by the FEC to address gaps and unresolved issues in the statute. FEC regulations shed further light on the issue at hand. Notably, in 11 CFR 100.71, the FEC’s regulations provide extensive exceptions to what is a contribution and provide that “[t]he term contribution does not include payments, services or other things of value described in this subpart.” One such exception is found in 11 CFR 100.73, “News story, commentary, or editorial by the media,” which makes clear:

Any cost incurred in covering or carrying a news story, commentary, or editorial by any broadcasting station (including a cable television operator, programmer or producer), Web site, newspaper, magazine, or other periodical publication, including any Internet or electronic publication, is not a contribution unless the facility is owned or controlled by any political party, political committee, or candidate, in which case the costs for a news story…

One of the essential elements for such an action to be a contribution is control or ownership. I am fairly certain the Biden Campaign neither owns nor controls Twitter or Facebook. For if it did, it has been colossally inept in allowing President Donald Trump and his partisans to widely spread misinformation and lies about mail-in voting, to name one such subject.

Moreover, the FEC and federal courts have long recognized the “press exemption” to what might otherwise be considered in-kind contributions or expenditures in violation of the law. This exemption includes websites and the internet. It would seem that Facebook and Twitter were acting in ways much more similar to how the traditional print media has acted. It is telling that Hawley and others have not pilloried the so-called liberal media for looking askance at the New York Post’s story and not taking it at face value, to the extent they have covered it at all. Therefore, it seems that any value the Biden Campaign may have derived from social media platforms using 47 USC 230 in moderating content on their platforms is not an in-kind contribution.

Cantwell released a report she had mentioned during her opening statement at the 23 September hearing aimed at trying to revive data privacy legislation. She and her staff investigated the decline and financial troubles of local media outlets, which are facing a cumulative loss in advertising revenue of up to 70% since 2000. And since advertising revenue has long been the lifeblood of print journalism, this has devastated local media, with many outlets shutting their doors or radically cutting their staff. This trend has been exacerbated by consolidation in the industry, often in concert with private equity or hedge funds looking to wring the last dollars of value from bargain basement priced newspapers. Cantwell also claimed that the overwhelming online advertising dominance of Google and Facebook has further diminished advertising revenue and other possible sources of funding through a variety of means. She intimates that much of this conduct may be illegal under U.S. law, and that the Federal Trade Commission (FTC) may well be able to use its Section 5 powers against unfair and deceptive acts and its antitrust authority to take action.

Cantwell detailed “Current and Suggested Congressional Considerations to Save Local News:”

  • Providing COVID-19 Emergency Financial Relief
    • As discussed in this report, the COVID-19 pandemic has had a devastating impact on local media outlets around the country. Congress should provide immediate support to stabilize these critical community institutions because it is very difficult to recreate a functioning local newsroom once its unique blend of knowledgeable local reporters, editorial controls, and regional subscribers is lost.
    • Congress should renew the Paycheck Protection Program (PPP), created by the Coronavirus Aid, Relief, and Economic Security (CARES) Act, to continue to support jobs at local news outlets. It should also expand the PPP to make thousands more local newspapers, radio, and television broadcasters eligible for emergency federal support.
    • Congress should also consider targeted tax incentives and grants as at least a short-term bridge to enable local news entities to survive the current economic turmoil.
  • Ensure Fair Return for Local News Content
    • Local news outlets create unmatched trusted content for local communities but, as discussed in this report, they are not being fairly compensated for their intellectual property by news aggregators, who are abusing their dominant positions in the marketplace.
    • Congress should consider requiring that news aggregation platforms enter into good faith negotiations with local news organizations and pay them fair market value for their content. Congress should also consider allowing local news organizations for a limited duration to collectively bargain for reuse of their content, provided there are strong controls in place to ensure that smaller publishers are not left behind.
  • Level the Playing Field for Local News
    • As detailed in this report, news aggregation platforms are using their market power and data aggregation practices to disadvantage local news.
    • Congress has a long history of addressing market abuses that stifle innovation and harm consumers. Rules preventing unfair, deceptive, and abusive practices can stop platforms from taking local news content without financial payment and retaliating against local news by hiding or removing their content from search engines or social media feeds. Similarly, statutes that prohibit market manipulation in other industries can serve as models to ensure online advertising markets are transparent and not contrived to benefit a dominant firm. Federal privacy protections can also serve to empower consumers to provide more support to local news organizations that provide them with more trusted and relevant information. Each of these changes should be crafted in a way to promote competition and consumer welfare and spur growth and innovation in the digital economy.

Cantwell’s report follows the House Judiciary Committee’s Antitrust, Commercial and Administrative Law Subcommittee’s “Investigation into Competition in Online Markets,” which also examined, in part, the effect of the digital dominance of Facebook and Google on the U.S. journalism industry. The Subcommittee asserted:

[The Subcommittee] received testimony and submissions showing that the dominance of some online platforms has contributed to the decline of trustworthy sources of news, which is essential to our democracy. In several submissions, news publishers raised concerns about the “significant and growing asymmetry of power” between dominant platforms and news organizations, as well as the effect of this dominance on the production and availability of trustworthy sources of news. Other publishers said that they are “increasingly beholden” to these firms, and in particular, to Google and Facebook. Google and Facebook have an outsized influence over the distribution and monetization of trustworthy sources of news online, undermining the quality and availability of high-quality sources of journalism. This concern is underscored by the COVID-19 pandemic, which has laid bare the importance of preserving a vibrant free press in both local and national markets.

The Subcommittee recommended:

To address this imbalance of bargaining power, we recommend that the Subcommittee consider legislation to provide news publishers and broadcasters with a narrowly tailored and temporary safe harbor to collectively negotiate with dominant online platforms.

The Subcommittee noted:

In April 2019, Subcommittee Chairman [David] Cicilline (D-RI) and Doug Collins (R-GA), the former- Ranking Member of the Committee on the Judiciary, introduced H.R. 2054, the “Journalism Competition and Preservation Act of 2019.” H.R. 2054 would allow coordination by news publishers under the antitrust laws if it (1) directly relates to the quality, accuracy, attribution or branding, or interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and is non-discriminatory to other news publishers; and (3) directly relates to and is reasonably necessary for these negotiations, instead of being used for other purposes.

Cantwell noted in her report that “regulators across Europe and in Australia are taking steps to ensure that local publishers can continue to monetize their content and reach consumers.” She claimed “[p]artly in response to these regulatory actions, Google and Facebook have announced plans to provide limited compensation to a small slice of the news sector…[and] [w]hether this compensation will be sufficient, or negotiated on fair terms, remains to be seen.”

In late July, the Australian Competition and Consumer Commission (ACCC) issued for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks broke down between the Australian Treasury and the companies. The ACCC explained:

The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.

This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google, “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses,” according to the agency’s press release. In its initial filing, the ACCC claims that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services were unaware that their location information was still being collected and used by Google, for it was not readily apparent that Web & App Activity also needed to be switched off. Moreover, a year ago, the ACCC released the final report in its “Digital Platforms Inquiry,” which “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”

In mid-August, Google and the ACCC exchanged public letters, fighting over the latter’s proposal to ensure that media companies are compensated for articles and content the former uses.

  • In an Open Letter to Australians, Google claimed:
    • A proposed law, the News Media Bargaining Code, would force us to provide you with a dramatically worse Google Search and YouTube, could lead to your data being handed over to big news businesses, and would put the free services you use at risk in Australia.
    • You’ve always relied on Google Search and YouTube to show you what’s most relevant and helpful to you. We could no longer guarantee that under this law. The law would force us to give an unfair advantage to one group of businesses – news media businesses – over everyone else who has a website, YouTube channel or small business. News media businesses alone would be given information that would help them artificially inflate their ranking over everyone else, even when someone else provides a better result. We’ve always treated all website owners fairly when it comes to information we share about ranking. The proposed changes are not fair and they mean that Google Search results and YouTube will be worse for you.
    • You trust us with your data and our job is to keep it safe. Under this law, Google has to tell news media businesses “how they can gain access” to data about your use of our products. There’s no way of knowing if any data handed over would be protected, or how it might be used by news media businesses.
    • We deeply believe in the importance of news to society. We partner closely with Australian news media businesses — we already pay them millions of dollars and send them billions of free clicks every year. We’ve offered to pay more to license content. But rather than encouraging these types of partnerships, the law is set up to give big media companies special treatment and to encourage them to make enormous and unreasonable demands that would put our free services at risk.

In its response, the ACCC asserted:

  • The open letter published by Google today contains misinformation about the draft news media bargaining code which the ACCC would like to address. 
  • Google will not be required to charge Australians for the use of its free services such as Google Search and YouTube, unless it chooses to do so.
  • Google will not be required to share any additional user data with Australian news businesses unless it chooses to do so.
  • The draft code will allow Australian news businesses to negotiate for fair payment for their journalists’ work that is included on Google services.
  • This will address a significant bargaining power imbalance between Australian news media businesses and Google and Facebook.

Google has since published a follow-up letter, claiming it does not oppose the draft code but rather wants a few changes. Google also dodged blame for the decline of media revenue, asserting that “the fall in newspaper revenue over recent years was mainly the result of the loss of classified ads to online classifieds businesses.” Google trumpeted its 1 October decision “to pay a number of publishers to license their content for a new product, including some in Australia, as well as helping train thousands of Australian journalists.” As announced by Google and Alphabet CEO Sundar Pichai, Google will pay some media outlets up to $1 billion over the next three years “to create and curate high-quality content for a different kind of online news experience” for its new product, Google News Showcase. Pichai claimed:

This approach is distinct from our other news products because it leans on the editorial choices individual publishers make about which stories to show readers and how to present them. It will start rolling out today to readers in Brazil and Germany, and will expand to other countries in the coming months where local frameworks support these partnerships.

This decision was not well-received everywhere, especially in the European Union (EU), which is in the process of implementing an EU measure requiring Google and Facebook to pay the media for content. The European Publishers Council (EPC) noted:

The French Competition Authority decision from April considered that Google’s practices were likely to constitute an abuse of a dominant position and brought serious and immediate damage to the press sector. It calls on Google, within three months, to conduct negotiations in good faith with publishers and press agencies on the remuneration for their protected content. Google’s appeal in July seeks to get some legal clarity on parts of the decision.

Moreover, the European Union (EU) Directive on Copyright in the Digital Single Market is being implemented in EU member states and would allow them to require compensation from platforms like Facebook and Google. The EPC claimed:

Many are quite cynical about Google’s perceived strategy. By launching their own product, they can dictate terms and conditions, undermine legislation designed to create conditions for a fair negotiation, while claiming they are helping to fund news production.

Incidentally, earlier this month, a French appeals court ruled against Google in its fight to stop France’s competition authority from requiring it to negotiate licensing fees for the use of French media content. And, earlier today, Italy’s competition authority announced an investigation “against Google for an alleged abuse of dominant position in the Italian market for display advertising.” The agency asserted:

  • In the key market for online advertising, which Google controls also thanks to its dominant position on a large part of the digital value chain, the Authority questions the undertaking’s discriminatory use of the huge amount of data collected through its various applications, preventing rivals in the online advertising markets from competing effectively. More specifically, Google appears to have engaged in an internal/external discriminatory conduct, refusing to provide its competitors with Google ID decryption keys and excluding third-party tracking pixels. At the same time, Google has allegedly used tracking elements enabling its advertising intermediation services to achieve a targeting capability that some equally efficient competitors are unable to replicate.
  • The conducts investigated by the Authority may have a significant impact on competition in the various markets of the digital advertising value chain, with wide repercussions on competitors and consumers. The absence of competition in the intermediation of digital advertising, in fact, might reduce the resources allocated to website producers and publishers, thus impoverishing the quality of content directed to end customers. Moreover, the absence of effective competition based on merits could discourage technological innovation for the development of advertising technologies and techniques less intrusive for consumers.

© Michael Kans, Michael Kans Blog and, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and with appropriate and specific direction to the original content.

Photo by Roman Kraft on Unsplash

Further Reading, Other Developments, and Coming Events (13 October)

Further Reading

  •  “False Rumors Often Start at the Top” By Shira Ovide — The New York Times. This piece traces how misinformation can arise from poorly phrased or ambiguous statements and utterances from authorities or famous people. Throw in a very liberal dose of people misinterpreting, and it’s a miracle there’s any clear communication online.
  • “With election day looming, Twitter imposes new limits on U.S. politicians — and ordinary users, too” By Elizabeth Dwoskin and Craig Timberg — The Washington Post. The social media platform will police misinformation and lies spread by American politicians with more than 100,000 followers, especially with respect to the outcome of elections that have not yet been decided. This change is part of a suite of measures to blunt the viral nature of incorrect or maliciously intended Tweets. An interesting change is one designed to add friction to retweeting by asking the user if they want to add their thoughts to a Tweet they are trying to retweet. Perhaps, such modifications point the way to blunting how quickly bad or wrong information goes viral.
  • “Why Facebook Can’t Fix Itself” By Andrew Marantz — The New Yorker. This article lays bare the central tension in the social media platform: its income is driven by content that outrages or hooks people, and any serious effort to remove lies, misinformation, hate speech, and extremist material would remove the content it needs to outrage and hook people.
  • “Feds may target Google’s Chrome browser for breakup” By Leah Nylen — Politico. It appears there may be two antitrust suits against Google targeting three of the company’s businesses: online advertising, the online search market, and its Google Chrome browser. The United States Department of Justice and state attorneys general may ask courts to break up the company. Of course, the resolution of such a massive undertaking could take years to play out.
  • Cyber Command has sought to disrupt the world’s largest botnet, hoping to reduce its potential impact on the election” By Ellen Nakashima — The Washington Post and “Microsoft seeks to disrupt Russian criminal botnet it fears could seek to sow confusion in the presidential election” By Jay Greene and Ellen Nakashima — The Washington Post. United States (U.S.) Cyber Command and Microsoft went at the same botnet from different directions ahead of the U.S. election in an attempt to batter and disorganize the Russian organization enough to foil any possible ransomware attacks on election systems.

Other Developments

  • The National Security Commission on Artificial Intelligence (NSCAI) sent its “2020 Interim Report and Third Quarter Recommendations” to Congress and the Trump Administration ahead of the March 2021 due date for its final report. Notably, the NSCAI is calling for Congress and the White House to decide which entity in the Executive Office of the President (EOP) should lead and coordinate the United States’ (U.S.) artificial intelligence (AI) efforts. Again, the NSCAI framed AI as a key part of the “great power” struggle between the U.S. and rivals like the People’s Republic of China, although it often goes unsaid that the U.S. is also theoretically competing with ostensible allies like the United Kingdom (UK) and the European Union (EU) to lead AI development and reap the national security and economic gains projected to accompany preeminence in this field. However, as with many such commissions, Congress and the Administration must navigate the jurisdictions of current government stakeholders, who are almost always reluctant to relinquish their claims to a policy field and will often work to preserve their role even at the cost of frustrating larger efforts. It is very likely Congress folds recommendations into a future National Defense Authorization Act (NDAA), quite possibly the FY 2022 bill, since the final report will be delivered in the midst of the drafting and consideration of that annual bill to set national security policy.
    • Nonetheless, the NSCAI stated “[t]his report represents our third quarterly memo as well as our second interim report mandated by Congress….[and] we present 66 recommendations flowing from several key ideas:
      • First, we must defend democracies from AI-enabled disinformation and other malign uses of AI by our adversaries.
      • Second, the government should expand and democratize basic AI research—the wellspring of our technological advantages.
      • Third, the government must build a digital ecosystem within national security departments and agencies for AI R&D.
      • Fourth, connecting technologists and operators will be the key to leveraging AI in all national security missions.
      • Fifth, we must close the tech talent deficit by strengthening STEM education, recruiting the best minds from around the world, and training the national security workforce.
      • Sixth, we must build a resilient domestic microelectronics industrial base.
      • Seventh, we will need interconnected strategies for technologies associated with AI including biotechnology and quantum computing.
      • Eighth, we cannot only focus on domestic initiatives in a global competition.
    • The NSCAI declared “[w]e must lead the development of AI technical standards and norms in international forums, and strengthen AI partnerships with allies and partners to build a digital future reflecting our values and protecting our interests.”
    • The NSCAI asserted:
      • The totality of the recommendations illustrates a key point: Laying out a vision is not enough. A winning strategy demands major muscle movements in and across departments and agencies, and significant executive and legislative action. It requires overcoming the technical, bureaucratic, and human obstacles to change, and driving very specific policies.
      • We believe the United States needs a new White House-led technology council to elevate AI-driven technology developments to the center of national decision-making, and a technology advisor to lead a new technology competitiveness strategy that integrates the complex interplay between technology, national security, and economic policies.
  • Key Republican stakeholders introduced the “Beat CHINA for 5G Act of 2020” that would require the Federal Communications Commission (FCC) to auction off a prized piece of mid-band spectrum to speed the rollout of 5G in the United States (U.S.). The bill was introduced by Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), House Energy and Commerce Committee Ranking Member Greg Walden (R-OR), Oversight and Investigations Subcommittee Ranking Member Brett Guthrie (R-KY), and Representative Bob Latta (R-OH).
    • In their press release, they claimed:
      • The Beat CHINA for 5G Act of 2020 would empower the FCC to open more critical mid-band spectrum for non-federal, commercial wireless use by requiring the FCC to begin an auction of the 3.45-3.55 GHz band by December 2021.
      • In February 2018, the National Telecommunications and Information Administration (NTIA) identified the 3.45-3.55 GHz band as a candidate for potential repurposing. Earlier this year, NTIA released a technical report indicating that spectrum sharing opportunities were possible in this band.
      • In August 2020, the White House announced that it would make 100 MHz of mid-band spectrum in the 3.45-3.55 GHz band available for non-federal, commercial wireless use. In September 2020, the FCC took a first step to start transitioning existing services to make this band available for 5G use.
      • These actions by the Trump Administration are crucial to growing our economy and enhancing our national security. This legislation is the final step to making sure there are no delays and this auction stays on track.
    • In early August, the White House and the Department of Defense (DOD) announced it would make available 100 MHz of mid-band spectrum in the 3450-3550 MHz band. (See here for more detail.)
  • In a press release, the Department of Defense (DOD) detailed its “$600 million in awards for 5G experimentation and testing at five U.S. military test sites, representing the largest full-scale 5G tests for dual-use applications in the world.” These awards went largely to prominent private sector technology and telecommunications companies vying to play leading roles in 5G. Of course, no awards were made to companies from the People’s Republic of China (PRC). Nonetheless, this announcement may provoke further claims from Members of Congress and stakeholders that the DOD’s effort is the camel’s nose under the tent of a nationalized 5G system.
    • This announcement is part of the DOD’s 5G Strategy that “provides the DOD approach to implementing the National Strategy to Secure 5G and aligns with the National Defense Authorization Act for Fiscal Year 2020 (FY2020), Section 254…[that] is also consistent with National Defense Strategy guidance to lead in key areas of great power competition and lethality to ensure 5G’s ‘impact on the battle network of the future.’”
    • In a related DOD release, it was explained:
      • The effort — Tranche 1 of the department’s larger 5G initiative — will accelerate adoption of 5G technology, enhance the effectiveness and lethality of U.S. combat forces, and further the development and use of common 5G standards to ensure interoperability with military partners and allies.
    • The DOD added:
      • Each installation will partner military Services, industry leaders, and academic experts to advance the Department’s 5G capabilities. Projects will include piloting 5G-enabled augmented/virtual reality for mission planning and training, testing 5G-enabled Smart Warehouses, and evaluating 5G technologies to enhance distributed command and control.
    • The DOD provided details on the 5G experimentation for these Tranche 1 sites:
      • Joint Base Lewis-McChord (JBLM), Washington – Augmented Reality/Virtual Reality Training 
        • The objective of this project is to rapidly field a scalable, resilient, and secure 5G network to provide a test bed for experimentation with a 5G-enabled Augmented Reality/Virtual Reality (AR/VR) capability for mission planning, distributed training, and operational use.  Industry partners at this site include:
        • GBL System Corp. (GBL): GBL’s Samsung-based 5G testbed will utilize mid-band spectrum to provide high capacity, low latency coverage at JBLM (Approximately 3 sq. mi.) and Yakima Training Center (Approximately 15 sq. mi.).
        • AT&T: AT&T will develop a system to allow use of 5G connectivity with present training devices.
        • Oceus Networks: Oceus will develop and field a Commercial Off-The-Shelf (COTS) based 5G handheld called Tough Mobile Device-5G (TMD-5G) for the field training environment.
        • Booz-Allen Hamilton (BAH): BAH will deliver an Army-owned, multivendor prototype for combat-like training using AR/VR technology in 5G-enhanced training locations based on an Open Systems Architecture (OSA).
      • Naval Base San Diego (NBSD), California – 5G Smart Warehousing (Transshipment)
        • The objective of this project is to develop a 5G-enabled Smart Warehouse focused on transshipment between shore facilities and naval units, to increase the efficiency and fidelity of naval logistic operations, including identification, recording, organization, storage, retrieval, and transportation of materiel and supplies.  Additionally, the project will create a proving ground for testing, refining, and validating emerging 5G-enabled technologies.  Industry partners at this site include:
        • AT&T: AT&T will quickly deploy (within 9 months) a network based on commercially available equipment to support 4G and 5G utilizing cellular spectrum in both the sub-6 GHz and millimeter wave bands.
        • GE Research: GE Research 5G-enabled applications will support real-time asset tracking, warehouse modeling and predictive analytics.
        • Vectrus Mission Solutions Corporation (Vectrus): Vectrus applications will provide industry-leading capabilities for inventory management, network security, robotic material moving, & environmental sensing.
        • Deloitte Consulting LLP (Deloitte): Deloitte will support a wide array of applications including Autonomous Mobile Robots, Unmanned Aircraft System (UAS) with autonomous drones, biometrics, cameras, AR/VR, and digitally tracked inventory.
      • Marine Corps Logistics Base (MCLB) Albany, Georgia – 5G Smart Warehousing (Vehicular)
        • This project will develop a 5G-enabled Smart Warehouse focused on vehicular storage and maintenance, to increase the efficiency and fidelity of MCLB Albany logistic operations, including identification, recording, organization, storage, retrieval, and inventory control of materiel and supplies.  Additionally, the project will create a proving ground for testing, refining, and validating emerging 5G-enabled technologies.  Industry partners at this site include:
        • Federated Wireless: Federated Wireless leverages open standards and an open solution to provide a testbed with both indoor and outdoor coverage, supporting a growing segment of the US 5G equipment market. 
        • GE Research: The GE approach will support real-time asset tracking, facility modeling and predictive analytics.
        • KPMG LLP: KPMG applications will create an integrated, automated, and digitized process for equipment and product movement throughout the warehouse.
        • Scientific Research Corporation (SRC): SRC’s 5G-enabled offering will demonstrate automated management and control of warehouse logistics, asset and inventory tracking, environmental management, and facility access control.
      • Nellis Air Force Base, Nevada – Distributed Command and Control
        • The objective of this effort is to develop a testbed for use of 5G technologies to aid in Air, Space, and Cyberspace lethality while enhancing command and control (C2) survivability.  Specifically, a 5G network will be employed to disaggregate and mobilize the existing C2 architectures in an agile combat employment scenario.
        • Industry partners at this site include:
        • AT&T: AT&T will provide an initially fixed then mobile 5G environment with high capacity and low latency to support the connectivity requirements associated with the mobile combined air operations centers.
      • Hill Air Force Base, Utah – Dynamic Spectrum Utilization
        • This project addresses the challenge of enabling Air Force radars to dynamically share spectrum with 5G cellular services.  The project will develop sharing/coexistence system prototypes and evaluate their effectiveness with real-world, at-scale networks in controlled environments.  The objective of this effort is to develop effective methodologies to allow the sharing or coexistence between airborne radar systems and 5G cellular telephony systems in the 3.1-3.45 GHz band.  Industry partners at this site include:
        • Nokia: The Nokia testbed includes traditional as well as open standards architectures including high-power massive multi-antenna systems.
        • General Dynamics Mission Systems, Inc. (GDMS): GDMS will develop and field a novel coexistence application that includes independent tracking of radar signals to support the radio access network in mitigation actions.
        • Booz Allen Hamilton (BAH): BAH’s approach utilizes Artificial Intelligence to provide a complete coexistence system with rapid response to interference. 
        • Key Bridge Wireless LLC: Key Bridge will demonstrate an adaptation of an existing commercial spectrum sharing approach for the 3.1-3.45 GHz band as a low risk solution to the coexistence issues.
        • Shared Spectrum Company (SSC): SSC’s approach aims to maintain continuous 5G communications via early radar detections and 5G-enabled Dynamic Spectrum Access.
        • Ericsson: Ericsson’s novel approach employs the 5G infrastructure to provide the required sensing coupled with Machine Learning and 5G-enabled spectrum aggregation.
  • Facebook announced it is suing two companies for data scraping in a suit filed in California state court. In its complaint, Facebook asserted:
    • Beginning no later than September 2019 and continuing until at least September 2020, Defendants BrandTotal Ltd. (“BrandTotal”) and Unimania, Inc. (“Unimania”) developed and distributed internet browser extensions (“malicious extensions”) designed to improperly collect data from Twitter, YouTube, LinkedIn, Amazon, Facebook, and Instagram. Defendants distributed the malicious extensions on the Google Chrome Store. Anyone who installed one of Defendants’ malicious extensions essentially self-compromised their browsers to run automated programs designed to collect data about its user from specific websites. As to Facebook and Instagram, when a user visited those sites with a self-compromised browser, Defendants used the malicious extensions to connect to Facebook computers and collect or “scrape” user profile information (including name, user ID, gender, date of birth, relationship status, and location information), advertisements and advertising metrics (including name of the advertiser, image and text of the advertisement, and user interaction and reaction metrics), and user Ad Preferences (user advertisement interest information). Defendants used the data collected by the malicious extensions to sell marketing intelligence and other services through the website. Defendants’ conduct was not authorized by Facebook.
    • Facebook brings this action to stop Defendants’ violations of Facebook’s and Instagram’s Terms and Policies. Facebook also brings this action to obtain damages and disgorgement for breach of contract and unjust enrichment.
    • Of course, it is a bit entertaining to see Facebook take issue with the data collection techniques of others given the myriad ways it tracks so many people across the internet, especially when they are not even interacting with Facebook or logged into the social media platform. See here, here, and here for more on Facebook’s practices, some of which may even be illegal in a number of countries, and, of course, some of the most egregious practices led to the record $5 billion fine levied by the Federal Trade Commission.

Coming Events

  • The European Union Agency for Cybersecurity (ENISA), Europol’s European Cybercrime Centre (EC3) and the Computer Emergency Response Team for the EU Institutions, Bodies and Agencies (CERT-EU) will hold the 4th annual IoT Security Conference series “to raise awareness on the security challenges facing the Internet of Things (IoT) ecosystem across the European Union:”
    • Artificial Intelligence – 14 October at 15:00 to 16:30 CET
    • Supply Chain for IoT – 21 October at 15:00 to 16:30 CET
  • The House Intelligence Committee will conduct a virtual hearing titled “Misinformation, Conspiracy Theories, and ‘Infodemics’: Stopping the Spread Online.”
  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11- 42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next- generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • On October 29, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • The Senate Commerce, Science, and Transportation Committee will reportedly hold a hearing on 29 October regarding 47 U.S.C. 230 with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.


Image by Gerd Altmann from Pixabay

Section 230 Hearings

Republicans continue to spotlight Section 230 and supposed bias against conservatives on social media platforms.

On 1 October, against a backdrop of coordinated, increasing Republican focus on 47 U.S.C. 230 (aka Section 230), the Senate Commerce, Science, and Transportation Committee voted unanimously to subpoena three technology CEOs:

  • Jack Dorsey, Chief Executive Officer of Twitter;
  • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
  • Mark Zuckerberg, Chief Executive Officer of Facebook.

Ahead of the markup, it appeared that Democrats on the committee would oppose the effort. The committee’s top Democrat claimed:

Taking the extraordinary step of issuing subpoenas is an attempt to chill the efforts of these companies to remove lies, harassment, and intimidation from their platforms. I will not participate in an attempt to use the committee’s serious subpoena power for a partisan effort 40 days before an election.

However, the chair and ranking member worked out an agreement to expand the scope of the subpoenas beyond Section 230 to include privacy and “media domination.” With this broader language, Democrats voted yes, making the vote unanimous.

This hearing and the markup the Senate Judiciary Committee held the same day are part of the larger Republican narrative casting technology companies as biased against conservative viewpoints. This was also on display during today’s House Judiciary Committee hearing on possible anticompetitive behavior by large technology companies, when Ranking Member Jim Jordan’s (R-OH) statement had little to say about antitrust law or anticompetitive behavior and focused solely on Section 230. Republicans seem intent on shining a light on what they call a conservative bias among technology companies, alleging that viewpoints from the right are taken down, censored, and edited at a much higher rate than those on the left. These claims persist even though studies and data from Facebook have shown that conservative content is often the most popular on online platforms.

In his opening statement, Chair Roger Wicker (R-MS) said Dorsey, Zuckerberg, and Pichai declined to attend a hearing, necessitating subpoenas. Wicker stated

This Congress, the Commerce Committee and other committees in this body have examined the growing and unprecedented power and influence that Facebook, Google, and Twitter have in the United States. We have questioned how they are protecting and securing the data of millions of Americans. We have explored how they are combatting disinformation, fraud, and other online scams. We have examined whether they are providing a safe and secure internet experience for children and teens. We have discussed how they are removing content from their sites that encourages extremism and mass violence. We have examined their use of secret algorithms that may manipulate users and drive compulsive usage of the internet among our youth. And most recently, we have reviewed how they are moderating content across their platforms and applying their terms of service and community standards to their users.

Wicker added that “[w]ith over 4.5 billion internet users today, we recognize the challenge of addressing many of the issues I mentioned and policing obscene and other indecent material online…[and] Section 230 of the Communications Decency Act, however, was enacted almost 25 years ago to address this very challenge.” Wicker asserted

  • Over time, this law has undeniably allowed the modern internet to flourish. It has helped usher in more speech and more expression, and limited the proliferation of truly reprehensible content. However, following repeated and consistent reports of political bias and the suppression of certain viewpoints, I fear that Section 230’s sweeping liability protections for Big Tech are stifling a true diversity of political discourse on the internet. According to a 2018 Pew study, seven out of 10 Americans agree.
  • On the eve of a momentous and highly-charged election, it is imperative that this committee of jurisdiction and the American people receive a full accounting from the heads of these companies about their content moderation practices. 

Wicker claimed “[t]his is not a partisan issue…[because] [b]oth candidates for President today agree.” Wicker asserted:

In May, the President of the United States rightly questioned whether Section 230 has outlived its usefulness. The Democrat[ic] nominee for President has done the same, calling for Big Tech’s liability shield to be “revoked immediately.” One Democrat[ic] member of this committee stated in July that “there is no reason for these platforms to have blanket immunity, a shield against any accountability that is not enjoyed by any other industry in the same way.” This member also acknowledged that “there is a broad consensus that Section 230 as it currently exists no longer affords sufficient protection to the public.” And just last week, another Democrat[ic] member of this committee joined a letter to Facebook demanding answers regarding the company’s inconsistent enforcement of its content moderation policies.

Ranking Member Maria Cantwell (D-WA) remarked:

I actually can’t wait to ask Mr. Zuckerberg further questions. I’m so proud that when we had a hearing before with Mr. Zuckerberg, I asked him an infamous question that’s now part of a movie: “What was their interference in the last elections?” At which point, the woman who is a whistleblower inside the organization says to the camera, “He’s lying.” So, can’t wait to have Mr. Zuckerberg here again.

Cantwell stated:

  • I think the issues that we are discussing of how we function in an information age are of extreme importance. I think the issue of privacy and also media domination by the platforms when they put their foot on the throats of local news media is also an issue. So I appreciate [Wicker’s] offer today of adding to the subpoena language both privacy and media as a discussion point we can bring up in the subpoenas.
  • What I don’t want to see is a chilling effect on individuals who are in a process of trying to crack down on hate speech or misinformation about COVID during a pandemic. Part of this discussion will end up being about the fact that some of these social platforms have tried to move forward in a positive way and take down information that is incorrect.
  • I welcome the debate about 230. I think it should be a long and thoughtful process, not sure that a long and thoughtful process will happen before the election, but I understand my colleagues’ desires here today. So, happy to move forward on these subpoenas with the additions that [Wicker] so graciously added.

At the 1 October markup of Section 230 legislation, the Senate Judiciary Committee opted to hold over the “Online Content Policy Modernization Act” (S.4632) to try to reconcile the fifteen amendments submitted for consideration. The Committee could soon meet again to formally markup and report out this legislation. Even if the Senate passes Section 230 legislation, it is not clear there will be sufficient agreement with Democrats in the House to get a final bill to the President before the end of this Congress. The primary reason is that Democrats are focused on hate speech on online platforms aimed at women, minorities, and other groups, some of which is coming from the far right. In their public remarks, Republicans have not called this feature of platforms a problem, and they seem more focused on alleged bias and actions against conservative viewpoints.

Chair Lindsey Graham (R-SC) submitted an amendment revising the bill’s reforms to Section 230 that incorporates some of the amendments below but also includes new language. For example, the amendment includes a definition of “good faith,” a term not currently defined in Section 230. This term would be construed as a platform taking down or restricting content only according to its publicly available terms of service, not as a pretext, and equally as to all similarly situated content. Moreover, good faith would require alerting the user and giving him or her an opportunity to respond, subject to certain exceptions. The amendment also makes clear that certain existing means of suing are still available to users (e.g., suing for breach of contract).

Senator Mike Lee (R-UT) offered a host of amendments:

  • EHF20913 would remove “user[s]” from the reduced liability shield that online platforms would receive under the bill. Consequently, users would still not be legally liable for the content posted by another user.
  • EHF20914 would revise the language regarding the type of content platforms could take down with legal protection to make clear it would not just be “unlawful” content but rather content “in violation of a duly enacted law of the United States,” possibly meaning federal laws and not state laws. Or, more likely, the intent would be to foreclose the possibility that a platform would say it is acting in concert with a foreign law and still assert immunity.
  • EHF20920 would add language making clear that taking down material that violates terms of service or use according to an objectively reasonable belief would be shielded from liability.
  • OLL20928 would expand legal protection to platforms for removing or restricting spam.
  • OLL20929 would bar the Federal Communications Commission (FCC) from a rulemaking on Section 230.
  • OLL20930 adds language making clear if part of the revised Section 230 is found unconstitutional, the rest of the law would still be applicable.
  • OLL20938 revises the definition of an “information content provider,” the Section 230 term of art for the party responsible for creating or developing content, to expand when platforms may be deemed responsible for the creation or development of information and consequently subject to suit.

Senator Josh Hawley (R-MO) offered an amendment that would create a new right of action for people to sue large platforms for taking down their content if not done in “good faith.” The amendment limits this right to “edge providers,” platforms with more than 30 million users in the U.S., 300 million users worldwide, and revenues of more than $1.5 billion. This would likely exclude all platforms except for Twitter, Facebook, Instagram, TikTok, Snapchat, and a few others.

Senator John Kennedy (R-LA) offered an amendment that would remove all Section 230 legal immunity from platforms that collect personal data and then use an “automated function” to deliver targeted or tailored content to a user unless the user “knowingly and intentionally elect[s]” to receive such content.

Also this week, Senators Joe Manchin (D-WV) and John Cornyn (R-TX) introduced a bill that would change Section 230 ostensibly “to stop the illicit sale of opioids and other drugs online,” per their press release, through a new requirement that online platforms report content indicating such activity is occurring. However, this bill would sweep much wider than controlled substances. The customary explanatory preamble of legislation in Congress gives the game away: “[t]o require reporting of suspicious transmissions in order to assist in criminal investigations and counterintelligence activities relating to international terrorism, and for other purposes.”

Under the “See Something, Say Something Online Act of 2020” (S.4758), online platforms would need to report “known suspicious transmissions” of “major crimes,” a term defined to include “crimes of violence,” “domestic or international terrorism,” and “serious drug offense[s].” Online platforms would need to report all such transmissions they should reasonably know about in the form of a “suspicious transmission activity report” (STAR) to the Department of Justice (DOJ) within 30 days unless they have evidence of an active sale or solicitation of drugs or terrorism. However, these STARs would be exempt from Freedom of Information Act (FOIA) requests. Platforms must establish a mechanism by which people can report suspicious activity as well. Failing to report such activity would result in the removal of the Section 230 liability shield, and a platform could then be sued in a civil or criminal action because a failure to report would make the platform itself the publisher of the content, opening it to legal jeopardy.

Consequently, platforms would need to create a system to vet material posted online for anything objectionable that could then be reported. It is safe to assume we would see overreporting, which raises the question of how the DOJ or state and local law enforcement agencies would choose to manage those possible crimes. Do the DOJ or other law enforcement agencies even have the capacity to manage what could be a considerable number of reports, triaging those serious enough to require immediate action? And would such a statute create a greater incentive to move to encrypted platforms, and for the development of such platforms?

It is interesting the Manchin/Cornyn bill seems to steer clear of child pornography and other exploitative sexual material. In contrast, the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398) would change Section 230 by narrowing the liability shield and potentially making online platforms liable to criminal and civil actions for having child sexual materials on their platforms. Perhaps Manchin and Cornyn are interested in trying to add the bill to the EARN IT Act during Senate consideration or the “Online Content Policy Modernization Act” during the Senate Judiciary Committee markup. Or it may be that Manchin and Cornyn are trying to have an approach to fighting the opioid epidemic they can show to voters in their states. In any event, it is unclear what their intentions are at this point. However, it bears note that the provision requiring the reporting of domestic terrorism may appeal to many Democratic stakeholders, for they have repeatedly expressed concerns about the online activity of white supremacists and the effect of this content offline.

© Michael Kans, Michael Kans Blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans and Michael Kans Blog, with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Senate Commerce Hearing On Privacy

Senate stakeholders appear no closer to resolving the two key impasses in privacy legislation: preemption and a private right of action.

A week after the “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) was introduced, the Senate Commerce, Science, and Transportation Committee held a hearing titled “Revisiting the Need for Federal Data Privacy Legislation” with four former Federal Trade Commission (FTC) Commissioners and California’s Attorney General. Generally speaking, Members used the hearing to elicit testimony on the aspects of a privacy bill they would like to see. The chair asked the witnesses about the need for preemption and the benefits of one national privacy standard, while the ranking member asked about the need for people to be able to sue as a means of supplementing the limited capacity of the FTC and state attorneys general to police violations of a new law.

The SAFE DATA Act (see here for more analysis) was introduced last week by Committee Chair Roger Wicker (R-MS), Senate Majority Whip and Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), Transportation and Safety Subcommittee Chair Deb Fischer (R-NE), and Senator Marsha Blackburn (R-TN). Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (see here for analysis), in November 2019, shortly after the committee’s Ranking Member, Senator Maria Cantwell (D-WA), and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act“ (COPRA) (S.2968) (see here for more analysis).

Chair Roger Wicker (R-MS) stated “[d]uring this Congress, protecting consumer data privacy has been a primary focus of this Committee…[and] [w]e held one of the first hearings of my chairmanship to examine how Congress should address this issue.” He said “[a]t that time, we heard that individuals needed rigorous privacy protections to ensure that businesses do not misuse their data…[and] [w]e heard that individuals need to be able to access, control, and delete the data that companies have collected on them.” Wicker stated “[w]e heard that businesses need a consistent set of rules applied reasonably and fairly to allow for continued innovation and growth in the digital economy…[a]nd we heard that the FTC needs enhanced authority and resources in order to oversee and enforce privacy protections.”

Wicker stated “[i]n the nearly two years since, members of this Committee have done a great deal of work developing legislation to address data privacy.” He said “[w]hile we worked, the world of data privacy did not stand still…[and] [t]he state of California implemented its California Consumer Privacy Act (CCPA) and began enforcement this past summer.” Wicker contended “[l]ong-held concerns remain that the CCPA is difficult to understand and comply with and could become worse if the law is further expanded and amended through an upcoming ballot measure this fall.” He claimed “[t]he European Union has continued to enforce the General Data Protection Regulation (GDPR)…[and] [t]he EU’s main focus appears to be going after the biggest American companies rather than providing clear guidance for all businesses with European citizens as customers.”

Wicker noted

The picture in Europe is even more complex following the recent court ruling invalidating the EU-U.S. Privacy Shield framework, which governed how U.S. companies treated the data of EU citizens. Though the issues in that case were more related to national security than consumer privacy, the result was yet more uncertainty about the future of trans-Atlantic data flows. I look forward to holding a hearing before the end of the year on the now-invalidated Privacy Shield.

Wicker asserted “[t]he biggest new development that has impacted data privacy – as it has impacted so many facets of our life – is the COVID-19 pandemic, which has resulted in millions of Americans working from home.” He said “[t]he increased use of video conferencing, food delivery apps, and other online services increases the potential for privacy violations…[and] [t]he need to collect a great deal of data for contact tracing and to track the spread of the disease likewise raises privacy concerns if done improperly.”

Wicker declared that “[f]or all of these reasons and more, the need for a uniform, national privacy law is greater than ever…[and] [l]ast week I introduced the SAFE DATA Act.” He argued

The SAFE DATA Act would provide Americans with more choice and control over their data. It would require businesses to be more transparent and hold them to account for their data practices. It would strengthen the FTC’s ability to be an effective enforcer of new data privacy rules. And it would establish a nationwide standard so that businesses know how to comply no matter where their customers live, and so that consumers know their data is safe wherever the company that holds their data is located.

Wicker stated that “[t]he SAFE DATA Act is the result of nearly two years of discussions with advocacy groups, state and local governments, nonprofits, academics, and businesses of every size and from every sector of the economy – my thanks to all of those.” He claimed “[t]he diversity of voices was essential in crafting a law that would work consistently and fairly for all Americans.” Wicker contended “we have a chance to pass a strong national privacy law that achieves the goals of privacy advocates with real consensus among members of both parties and a broad array of industry members.”

Ranking Member Maria Cantwell (D-WA) stated “[p]rotecting Americans’ privacy rights is critical, and that has become even sharper in the focus of the COVID-19 crisis, where so much of our lives have moved online.” She noted “[t]he American people deserve strong privacy protections for their personal data, and Congress must work to act in establishing these protections.” Cantwell said “[l]ast year, along with Senators [Brian] Schatz (D-HI), [Amy] Klobuchar (D-MN), and [Ed] Markey (D-MA), I introduced the “Consumer Online Privacy Rights Act“ (COPRA) (S.2968).” She claimed “[t]he bill is pretty straightforward…[and] provides foundational privacy rights to consumers, creates rules to prevent abuse of consumer data, and holds companies accountable with real enforcement measures.” Cantwell said “[u]nfortunately, other legislation throughout the legislative process, I think has taken different approaches…[and] [t]hese bills allow companies to maintain the status quo, burying important disclosure information in long contracts, hiding where consumer data is sold, and changing the use of consumer data without their consent.” She concluded that “obviously, I believe these loopholes are unacceptable.”

Cantwell argued

Most strikingly, these bills would actually weaken consumer rights around the country by preempting stronger state laws. Attorney General Becerra is with us today and I appreciate him being able to join us, because this would have an impact on a broad preemption, I should say, would have an impact on 40 million Californians who are protected by your privacy law and the privacy protections in your state. So we need to resolve this issue. But we can’t do so at the expense of states who have already taken action to protect the privacy of their citizens.

Cantwell stated that “[f]inally, we also know that individuals must have the right to their day in court, when privacy is violated, even with the resources and expertise and enforcers like the FTC–many of you, I know, know these rules well, and Attorneys General–we will never be able to fully police the thousands and thousands of companies collecting consumer data if you are the only cop on the beat.” She said “I’d like to go further, but there are many issues that we’re going to address here. I want to say that the legislation also needs to cover the complex issues of, you know, health and safety standards and important issues.” Cantwell stated “[t]he Supreme Court discussion that we’re now having, I think will launch us into a very broad discussion of privacy rights and where they exist within the Constitution.” She explained “[j]ust this recent court ruling that put at risk the little known but vitally important provision of the FTC Act, Section 13(b), which allows the FTC to go to court to obtain refunds and other redress for consumers–the 10 billion dollars for example in the Volkswagen case–without this provision, the core mission of the FTC would be crippled.”

Cantwell asserted “I think all of these issues, and the important issues of privacy rights, should and will have a fair discussion, if we can have time to discuss them in this process…[and] I believe the issue of how the government interferes in our own privacy rights, whether the government oversteps our privacy rights, is a major issue to be discussed by this body in the next month.” She added “I don’t believe in some of the tactics that government has used to basically invade the privacy rights of individuals.”

Cantwell stated that “next week the minority will be introducing a report that we’ve been working on about the value of local journalism…[and] I believe the famous economist who said that markets need perfect information.” She argued “[w]e’re talking about the fact that if markets are distorted by information, then that really cripples our economy…[and] I think local journalism in a COVID crisis is proving that it’s valued information with the correct information on our local communities, and I think that this is something we need to take into consideration as we consider privacy laws and we consider these issues moving forward.”

Former FTC Commissioner and Microsoft’s Corporate Vice President, Chief Privacy Officer, and Deputy General Counsel for Global Privacy and Regulatory Affairs Julie Brill explained that “Microsoft believes that comprehensive federal privacy legislation should support four key principles: consumer empowerment, transparency, corporate responsibility, and strong enforcement”:

  • Consumer Empowerment. Empower consumers with the tools they need to control their personal information, including the ability to make informed choices about the data they provide to companies, to understand what data companies know about them, to obtain a copy of their data, to make sure the data is accurate and up to date, and to delete their data. Americans care deeply about having meaningful control over their data. In just the past nine months, from January 1, 2020 to September 18, 2020, Microsoft received over 14 and a half million unique global visitors to its privacy dashboard, where they were able to exercise their ability to control their data. This continued engagement with the control tools we provide included over 4 and a half million visitors from the United States, representing the greatest level of engagement from any single country.
  • Transparency. Require companies to be transparent about their data collection and use practices, by providing people with concise and understandable information about what personal information is collected from them, and how that information is used and shared.
  • Corporate Responsibility. Place direct requirements on companies to ensure that they collect and use consumers’ data in ways that are responsible, and demonstrate that they are worthy stewards of that data.
  • Strong Enforcement. Provide for strong enforcement through regulators, and ensure they have sufficient resources to enforce the legal requirements that organizations must uphold, but also to be well-grounded in the data collection and analysis technologies that are used in the modern digital economy. These are the key elements that are required to build a robust and lasting U.S. privacy law.

George Washington University Law School Professor, King’s College Visiting Professor, and United Kingdom Competition and Markets Authority Non-Executive Director and former FTC Chair William E. Kovacic said:

As Congress defines the substantive commands of a new omnibus law, I suggest a close review of the FTC’s experience in implementing the Telemarketing Sales Rule. To my mind, this experience offers several insights into the design of privacy protections:

  • In addition to unfair or deceptive acts and practices, the definition of forbidden behavior should encompass abusive conduct, as the FTC has developed that concept in the elaboration of the Telemarketing Sales Rule (TSR). I single out 2003 TSR amendments, which established the National Do Not Call Registry, popularly known as the Do Not Call Rule (DNC Rule). In applying the concept of abusive conduct, the DNC Rule used a definition of harm that reached beyond quantifiable economic costs of the challenged practice (i.e., the time lost and inconvenience associated with responding to unwanted telephone calls to the home). The DNC Rule’s theory of harm focused on the fact that, to many citizens, telemarketing calls were annoying, irritating intrusions into the privacy of the home. A new privacy regime could build on this experience and allow privacy regulators, by rulemaking and by law enforcement, to address comparable harms and to create standards that map onto common expectations for data protection and security.
  • The coverage of the omnibus statute should be comprehensive. Privacy authorities should have power to apply the law to all commercial actors (i.e., with no exclusions for specific economic sectors) and to not-for-profit institutions such as charitable bodies and universities.
  • The omnibus law should clarify that its restrictions on the accumulation and use of data about individuals apply to their status as consumers and employees. Since the late 1990s, the FTC at times has engaged in debatable interpretations of its authority under Section 5 of the Federal Trade Commission Act to assure foreign jurisdictions that it has authority to enforce promises regarding the collection and transfer by firms of information about their employees.

Kovacic stated “[w]ith this general framework in mind, my testimony proposes that an omnibus privacy law should enhance the institutional arrangements for administering a new substantive privacy framework.” This statement:

  • Sets out criteria to assess the performance of the entities implementing U.S. privacy policy, and to determine how to allocate tasks to institutions responsible for policy development and law enforcement.
  • Suggests approaches to increase the coherence and effectiveness of the US privacy system and to make the United States a more effective participant in the development of international privacy policy.
  • Considers whether the FTC, with an enhanced mandate, should serve as the national privacy regulator, or whether the FTC’s privacy operations should be spun off to provide the core of a new privacy institution.

Kovacic explained

This statement concludes that the best solution is to take steps that would enhance the FTC’s role by (a) eliminating gaps in its jurisdiction, (b) expanding its capacity to promote cooperation among agencies with privacy portfolios and to encourage convergence upon superior policy norms, and (c) providing resources necessary to fulfill these duties. The proposal for an enlarged FTC role considers two dimensions of privacy regulation. The first is what might be called the “consumer-facing” elements of a privacy regime. My testimony deals mainly with the relationship between consumers and enterprises (for-profit firms and not-for-profit institutions, such as universities) that provide them with goods and services. My testimony does not address the legal mechanisms that protect privacy where the actors are government institutions. Thus, I do not examine the appropriate framework for devising and implementing policies that govern data collection and record-keeping responsibilities of federal agencies, such as bodies that conduct surveillance for national security purposes.

21st Century Privacy Coalition Co-Chair and Former FTC Chair Jon Leibowitz asserted:

  • Congress does not need to reinvent the wheel. Many of the elements I would propose are consistent with recommendations made by my former agency in its 2012 Privacy Report, drafted after years of work and engagement with stakeholders of all kinds. Technology will continue to change, but the basic principles enshrined in the Report remain the most effective way to give consumers the protections they deserve.
  • My view, and that of the Report, is that national privacy legislation must give consumers statutory rights to control how their personal information is used and shared, and provide increased visibility into companies’ practices when it comes to managing consumer data. Such an approach should provide consumers with easy-to-understand privacy choices based upon the nature of the information itself—its sensitivity, the risk of consumer harm if such information is the subject of an unauthorized disclosure—and the context in which it is collected. For example, consumers expect sensitive information—including health and financial data, precise geolocation, Social Security numbers, and children’s information—to receive heightened protection to ensure confidentiality.
  • Therefore, a muscular privacy law should require affirmative express consent for the use and sharing of consumers’ sensitive personally identifiable information, and opt-out rights for non-sensitive information. But consumers do not expect to consistently provide affirmative consent to ensure that companies fulfill their online orders or protect them from fraud; thus, inferred consent for certain types of operational uses of information by companies makes sense. Consumers should also have rights of access and deletion where appropriate, and deserve civil rights protections thoughtfully built for the Internet age.
  • Another key tenet of the FTC Report is that privacy should not be about who collects an individual’s personal information, but rather should be about what information is collected and how it is protected and used. That is why federal privacy legislation should be technology- and industry-neutral. Companies that collect, use, or share the same type of covered personal information should not be subject to different privacy requirements based on how they classify themselves in the marketplace.
  • Rigorous standards should be backed up with tough enforcement. To that end, Congress should provide the FTC with the ability to impose civil penalties on violators for first-time offenses, something all of the current Commissioners—and I believe all the former Commissioners testifying here today—support. Otherwise, malefactors will continue to get two bites at the apple of the unsuspecting consumer. And there is no question in my mind that the FTC should have the primary authority to administer the national privacy law. The FTC has the unparalleled institutional knowledge and experience gained from bringing more than 500 cases to protect the privacy and security of consumer information, including those against large companies like Google, Twitter, Facebook, Uber, Dish Network, and others. Congress should not stop there.
  • The way to achieve enhanced enforcement is by giving the FTC, an agency that already punches above its weight, the resources and authority to carry out its mandate effectively. As of 2019, there were fewer employees (“FTEs”) at the agency than there were in 1980, and the American population has grown by more than 100 million people since then. The number of FTEs has actually decreased since I left the agency in 2013 until this year.
  • Moreover, the FTC clearly has a role to play in developing rules to address details that Congress may not want to tackle in the legislation itself as well as new developments in technology that could overwhelm (or circumvent) enforcement. For that reason, you should give the agency some APA rulemaking authority to effectively implement your law. Having said that, Congress should not overwhelm the FTC with mandated rulemaking after rulemaking, which would only bog the agency down instead of permitting it to focus on enforcing the new law.

California Attorney General Xavier Becerra argued:

  • In the data privacy space, the optimal federal legal framework recognizes that privacy protections must keep pace with innovation, the hallmark of our data-driven economy. State law is the backbone of consumer privacy in the United States. Federal law serves as the glue that ties our communities together. To keep pace, we must all work from the same baseline playbook, but be nimble enough to adapt to real-world circumstances on the field where we meet them. I urge this committee to proceed in your work in a manner that respects—and does not preempt—more rigorous state laws, including those we have in California.
  • Like any law, the CCPA is not perfect, but it is an excellent first step. Consumers deserve more privacy and easier tools. For example, in the regulations implementing the CCPA, the California Department of Justice tried to address the frustration of consumers who must proceed website-by-website, browser-by-browser in order to opt out of the sale of their personal information. One provision of our regulations intended to facilitate the submission of a request to opt-out of sale by requiring businesses to comply when a consumer has enabled a global privacy control at the device or browser level, which should be less time-consuming and burdensome. I urge the technology community to develop consumer-friendly controls to make exercise of the right to opt out of the sale of information meaningful and frictionless. Making technology work for consumers is just as important as the benefits businesses receive in innovating.
  • There are also ways in which CCPA could go further and require refinement of its compliance measures. For example, the CCPA currently only requires disclosure of “categories of sources” from which personal information is collected and “categories of third parties” to whom personal information is sold. More specific disclosures, including the names of businesses that were the source or recipient of the information, should be required so that consumers can know the extent to which their information has been shared, bartered, and sold. If I receive junk mail from a company, I should be able to find out how it got my address and to whom it shared the information so I can stop the downstream purchase of my personal data. For now, businesses are not legally required to share that granularity of information. Consumers should also have the ability to correct the personal information collected about them, so as to prevent the spreading of misinformation.
  • On a broader level, if businesses want to use consumers’ data, they should have a duty to protect and secure it, and wherever feasible, minimize data collection. Businesses should no longer approach consumer data with the mindset, “collect now, monetize later.” There should be a duty imposed to use a consumer’s personal information in accordance with the purposes for which the consumer allowed its collection, and in the consumer’s interest, especially with the collection and storage of sensitive information, like precise geolocation. Although CCPA requires transparent notice at collection, moving beyond a notice-and-consent framework to contemplate use limitations would make our privacy rights more robust and balanced.
  • We need clear lines on what is illegal data use from the context of civil rights protections. Indirect inferences based on personal information should not be used against us in healthcare decisions, insurance coverage or employment determinations. We need greater transparency on how algorithms impact people’s fundamental rights of healthcare, housing and employment, and how they may be perpetuating systemic racism and bias. Predatory online practices, such as increased cross-site tracking after a user browses healthcare websites, must be addressed.
  • Finally, new laws should include a private right of action to complement and fortify the work of state enforcers. While my office is working hard to protect consumer privacy rights in California, and our sister states do the same in their jurisdictions, we cannot do this work alone. While we endeavor to hold companies accountable for violations of privacy laws, trying to defend the privacy rights of 40 million people in California alone is a massive undertaking. Violators know this. They know our scope and reach are limited to remedying larger and more consequential breaches of privacy. Consumers need the authority to pursue remedies themselves for violations of their rights. Private rights of action provide a critical adjunct to government enforcement, and enable consumers to assert their rights and seek appropriate remedies. Consumer privacy must be real, it deserves its day in court.


Image by KaraSuva from Pixabay

Another Federal Privacy Bill

Senate Commerce Republicans revise and release privacy bill that does not budge on main issues setting them apart from their Democratic colleagues.

Last week, in advance of tomorrow’s hearing on privacy legislation, the chair and key Republicans released a revised version of draft legislation released last year to mark their position on what United States (U.S.) federal privacy regulation should be. Notably, last year’s draft and the updated version would still preempt state laws like the “California Consumer Privacy Act” (CCPA) (AB 375), and people in the U.S. would not be given the right to sue entities that violate the privacy law. These two issues continue to be the main battle lines between Democratic and Republican bills to establish a U.S. privacy law. Given the rapidly dwindling days left in the 116th Congress and the possibility of a Democratic White House and Senate next year, this may be both a last-gasp effort to get a bill out of the Senate and a way to lay down a marker for next year.

The “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) was introduced by Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Senate Majority Whip and Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), Transportation and Safety Subcommittee Chair Deb Fischer (R-NE), and Senator Marsha Blackburn (R-TN). However, a notable Republican stakeholder is not a cosponsor: Consumer Protection Subcommittee Chair Jerry Moran (R-KS), who introduced his own bill, the “Consumer Data Privacy and Security Act of 2020” (S.3456) (See here for analysis).

As mentioned, Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (See here for analysis) in November 2019 shortly after the Ranking Member on the committee, Senator Maria Cantwell (D-WA) and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act“ (COPRA) (S.2968) (See here for more analysis). Here’s how I summarized the differences at the time: in the main, CDPA shares the same framework with COPRA with some key, significant differences, including:

  • COPRA expands the FTC’s jurisdiction in policing privacy harms whereas CDPA would not
  • COPRA places a duty of loyalty on covered entities to people whose covered data they process or transfer; CDPA does not have any such duty
  • CDPA does not allow people to sue if covered entities violate the new federal privacy and security regime; COPRA would allow such suits to move forward
  • CDPA preempts state privacy and data security laws; COPRA would establish a federal floor that states like California would be able to legislate on top of
  • CDPA would take effect two years after enactment, and COPRA would take effect six months after enactment.
  • The bar against a person waiving her privacy rights under COPRA is much broader than CDPA
  • COPRA would empower the FTC to punish discriminatory data processing and transfers; CDPA would require the FTC to refer these offenses to the appropriate federal and state agencies
  • CDPA revives a concept from the Obama Administration’s 2015 data privacy bill by allowing organizations and entities to create standards or codes of conduct and allowing those entities in compliance with these standards or codes to be deemed in compliance with CDPA subject to FTC oversight; COPRA does not have any such provision
  • COPRA would require covered entities to conduct algorithmic decision-making impact assessments to determine if these processes are fair, accurate, and unbiased; no such requirement is found in CDPA

As a threshold matter, the SAFE DATA Act is the latest in a line of enhanced notice and consent bills founded on the logic that if people were informed and able to make choices about how and when their data are used, then the U.S. would have an ideal data and privacy ecosystem. This view, perhaps coincidentally, dovetails with Republican views on other issues where people should merely be given information and the power to choose, with any bad outcomes being the responsibility of those who chose poorly. This view runs counter to those who see privacy and data security as akin to environmental or pollution problems, that is, beyond the ability of any one person to manage or realistically change.

Turning to the bill before us, we see that while covered entities may not outright deny services and products to people who choose to exercise the rights granted under the bill vis-à-vis their covered data, a covered entity may charge different prices. This structure would predictably mean that only those who can afford it, or who are passionately committed to their privacy, would be able to pay for more privacy. And yet, the rights established by the bill for people to exercise some control over their private information cannot be waived, forestalling the possibility that some covered entities would make such a waiver a term of service, as many companies do with a person’s right to sue.

Covered entities must publish privacy policies before or at the point of data collection, including:

  • The identity of the entity in charge of processing and using the covered data
  • The categories of covered data collected and the processing purposes of each category
  • Whether transfers of covered data occur, the categories of those receiving such data, and the purposes for which transfers occur
  • The entity’s data retention and data security policies generally; and
  • How individuals may exercise their rights.

Any material change means new privacy policies must be provided to people and consent must again be obtained before collection and processing may resume.

There is, however, language not seen in other privacy bills: “[w]here the ownership of an individual’s device is transferred directly from one individual to another individual, a covered entity may satisfy its obligation to disclose a privacy policy prior to or at the point of collection of covered data by making the privacy policy available under (a)(2)” (i.e., by posting it on the entity’s website). So, if I give an old phone to a friend, a covered entity may merrily continue collecting and processing data because I consented, and my friend’s consent is immaterial. Admittedly, this would seem to be a subset of all devices used in the U.S., but it does not seem a stretch to require covered entities to obtain consent when they determine a different person has taken over a device. After all, they will almost certainly be able to discern that the device is being used by someone new and the identity of that new user.

Section 103 of the SAFE DATA Act establishes a U.S. resident’s rights to access, correct, delete, and port covered data. People would be able to access their covered data and correct “material” inaccuracies or incomplete information at least twice a year at no cost provided the covered entity can verify their identity. Included with the right to access would be provision of the categories of third parties to whom covered data has been transferred and a list of the categories of purposes. There is a long list of reasons why covered entities would not need to comply, including but not limited to:

  • If the covered entity must “retain any covered data for the sole purpose of fulfilling the request;”
  • If it would “be impossible or demonstrably impracticable to comply with;”
  • If a request would “require the covered entity to combine, relink, or otherwise reidentify covered data that has been deidentified;”
  • If it would “result in the release of trade secrets, or other proprietary or confidential data or business practices;”
  • If it would “interfere with law enforcement, judicial proceedings, investigations, or reasonable efforts to guard against, detect, or investigate malicious or unlawful activity, or enforce contracts;”
  • If it would “require disproportionate effort, taking into consideration available technology, or would not be reasonably feasible on technical grounds;”
  • If it would “compromise the privacy, security, or other rights of the covered data of another individual;”
  • If it would “be excessive or abusive to another individual;” or
  • If it would “violate Federal or State law or the rights and freedoms of another individual, including under the Constitution of the United States.”

This extensive list will give companies not interested in complying plenty of reasons to proffer for why they will not provide access or corrections. Nonetheless, the FTC would need to draft and implement regulations “establishing requirements for covered entities with respect to the verification of requests to exercise rights” to access and correct. Perhaps the agency will be able to address some foreseeable problems with the statute as written.

Explicit consent is needed before a covered entity may transfer or process the “sensitive covered data” of a person. The first gloss on this right is that a person’s consent is not needed to collect, process, and transfer the “covered data” of a person. Elsewhere in the section, it is clear that one has a limited opt out right: “a covered entity shall provide an individual with the ability to opt out of the collection, processing, or transfer of such individual’s covered data before such collection, processing, or transfer occurs.”

Nonetheless, a bit of a detour back into the definitions section of the bill is in order to understand which types of data lie on which side of the consent line. “Covered data” are “information that identifies or is linked or reasonably linkable to an individual or a device that is linked or reasonably linkable to an individual” except for publicly available data, employment data, aggregated data, and de-identified data. Parenthetically, I would note the latter two exceptions would seem to be incentives for companies to hold personal information in an aggregated or de-identified state as much as possible so as to avoid triggering the requirements of the SAFE DATA Act.

“Sensitive covered data” would be any of the following (and, my apologies, the list is long):

  • A unique, government-issued identifier, such as a Social Security number, passport number, or driver’s license number, that is not required to be displayed to the public.
  • Any covered data that describes or reveals the diagnosis or treatment of the past, present, or future physical health, mental health, or disability of an individual.
  • A financial account number, debit card number, credit card number, or any required security or access code, password, or credentials allowing access to any such account.
  • Covered data that is biometric information.
  • A persistent identifier.
  • Precise geolocation information (defined elsewhere as anything within 1750 feet).
  • The contents of an individual’s private communications, such as emails, texts, direct messages, or mail, or the identity of the parties subject to such communications, unless the covered entity is the intended recipient of the communication. (Meaning metadata is fair game, and metadata can be incredibly valuable. Just ask the National Security Agency.)
  • Account log-in credentials such as a user name or email address, in combination with a password or security question and answer that would permit access to an online account.
  • Covered data revealing an individual’s racial or ethnic origin, or religion, in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (Of course, this sort of qualifying language always makes me wonder: according to whose definition of “reasonable expectation”?)
  • Covered data revealing the sexual orientation or sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (See the previous clause)
  • Covered data about the online activities of an individual that addresses or reveals a category of covered data described in another subparagraph of this paragraph. (I suppose this is intended as a backstop against covered entities trying to backdoor their way into using sensitive covered data by claiming it is merely covered data derived from online activities.)
  • Covered data that is calendar information, address book information, phone or text logs, photos, or videos maintained for private use on an individual’s device.
  • Any covered data collected or processed by a covered entity for the purpose of identifying covered data described in another paragraph of this paragraph. (again, this seems aimed at plugging a possible loophole in that ordinary covered data can probably be processed or combined with other covered data to arrive at some categories of “sensitive covered data.”)
  • Any other category of covered data designated by the Commission pursuant to a rulemaking under section 553 of title 5, United States Code (meaning the FTC can use normal rulemaking authority, and not the shackles of the Magnuson-Moss rulemaking procedures, to expand this definition as needed).
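As an aside on the “precise geolocation information” item above: a common engineering response to a precision threshold like this is to coarsen coordinates until they no longer locate a person to within the stated distance. The sketch below is purely illustrative; the bill prescribes no method, and the grid-snapping approach and unit conversions are my own assumptions:

```python
import math

FEET_PER_METER = 3.28084
METERS_PER_DEG_LAT = 111_320.0  # approximate meters per degree of latitude

def min_grid_step_degrees(threshold_feet: float = 1750.0) -> float:
    """Latitude grid step (in degrees) whose cell height equals the
    precision threshold; snapping to a grid at least this coarse means
    a reported coordinate no longer pinpoints someone within that distance."""
    threshold_m = threshold_feet / FEET_PER_METER
    return threshold_m / METERS_PER_DEG_LAT

def coarsen_latitude(lat: float, step: float) -> float:
    """Snap a latitude to the center of its grid cell."""
    return (math.floor(lat / step) + 0.5) * step

step = min_grid_step_degrees()            # roughly 0.0048 degrees
coarse = coarsen_latitude(38.8977, 0.01)  # snap to a roughly 1.1 km cell
```

Longitude would need the same treatment scaled by the cosine of the latitude, since meridians converge toward the poles.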

So, we have a subset of covered data that would be subject to consent requirements, including notice with a “clear description of the processing purpose for which the sensitive covered data will be processed;” that “clearly identif[ies] any processing purpose that is necessary to fulfill a request made by the individual;” that “include[s] a prominent heading that would enable a reasonable individual to easily identify the processing purpose for which consent is sought;” and that “clearly explain[s] the individual’s right to provide or withhold consent.”

Finally, the FTC may, but is not required to, “establish requirements for covered entities regarding clear and conspicuous procedures for allowing individuals to provide or withdraw affirmative express consent for the collection of sensitive covered data.” If the agency chooses to do so, it may use the normal notice and comment procedures available to virtually every other U.S. agency.

Covered entities must minimize collection, processing, and retention of covered data to “what is reasonably necessary, proportionate, and limited” except if permitted elsewhere in the SAFE DATA Act or another federal statute. Interestingly, the FTC would not be tasked with conducting a rulemaking but would instead need to issue guidelines with best practices on how covered entities would undertake such minimization.

Service providers must follow the direction of the covered entity with whom they are working and delete or deidentify covered data once they have finished processing it. Third parties may process covered data only for purposes consistent with the reasonable expectations of the individual to whom the data belong, yet they do not need to obtain consent to process covered data or sensitive covered data. Covered entities, for their part, must perform due diligence to ensure that service providers and third parties will comply with the requirements particular to those two classes of entities, but there is no obligation beyond due diligence and no suggestion of liability for the misdeeds and violations of service providers and third parties.

Large data holders would need to conduct periodic privacy impact analyses with an eye toward helping these entities improve their privacy policies. This class of covered entities comprises those that have processed or transferred the covered data of 8 million or more people in a given year or the sensitive covered data of 300,000 or more people.

The SAFE DATA Act would generally allow covered entities to collect, process, and transfer the covered data of people without their consent so long as these activities are reasonably necessary, proportionate and limited to the following purposes:

  • To initiate or complete a transaction or to fulfill an order or provide a service specifically requested by an individual, including associated routine administrative activities such as billing, shipping, financial reporting, and accounting.
  • To perform internal system maintenance, diagnostics, product or service management, inventory management, and network management.
  • To prevent, detect, or respond to a security incident or trespassing, provide a secure environment, or maintain the safety and security of a product, service, or individual.
  • To protect against malicious, deceptive, fraudulent, or illegal activity.
  • To comply with a legal obligation or the establishment, exercise, analysis, or defense of legal claims or rights, or as required or specifically authorized by law.
  • To comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by an Executive agency.
  • To cooperate with an Executive agency or a law enforcement official acting under the authority of an Executive or State agency concerning conduct or activity that the Executive agency or law enforcement official reasonably and in good faith believes may violate Federal, State, or local law, or pose a threat to public safety or national security.
  • To address risks to the safety of an individual or group of individuals, or to ensure customer safety, including by authenticating individuals in order to provide access to large venues open to the public.
  • To effectuate a product recall pursuant to Federal or State law.

People would not be able to opt out of the collection, processing, and transfer of covered data for these purposes. As mentioned earlier, U.S. residents would receive a limited right to opt out, and it is in Section 108 that one learns what a person cannot opt out of. I suppose it should go without saying that covered entities will interpret these terms as broadly as they can in order to forestall people from opting out. The performance of “internal system maintenance, diagnostics, product or service management, inventory management, and network management” seems like a potentially elastic category that could give cover to some covered entities.

Speaking of exceptions, small businesses would not need to heed the rights of individuals regarding their covered data, would not need to minimize their collection, processing, and transfer of covered data, and would not need to have data privacy and security officers. These are defined as entities with gross annual revenues below $50 million that have processed the covered data of fewer than 1 million people, have fewer than 500 employees, and earn less than 50% of their revenue from transferring covered data. On its face, this seems like a very generous definition of what shall count as a small business.
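To see how generous the carve-out is, the four criteria can be collapsed into a single predicate. This is a hypothetical sketch based on the summary above; the function and parameter names are mine, not the bill’s, and I assume all four conditions must hold:

```python
def is_exempt_small_business(gross_annual_revenue: float,
                             individuals_processed: int,
                             employees: int,
                             revenue_share_from_transfers: float) -> bool:
    """Return True if an entity meets all four exemption criteria
    summarized above (hypothetical field names)."""
    return (gross_annual_revenue < 50_000_000
            and individuals_processed < 1_000_000
            and employees < 500
            and revenue_share_from_transfers < 0.50)

# A firm with $49 million in revenue, 900,000 individuals' covered data,
# 499 employees, and 40% of revenue from data transfers would be exempt:
print(is_exempt_small_business(49_000_000, 900_000, 499, 0.40))  # True
```

Under this reading, a firm just below every threshold still escapes the bill’s core obligations, which is the point the paragraph above makes.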

The FTC would not be able to police processing and transferring of covered data that violates discrimination laws. Instead, the agency would need to transfer these matters to the agencies of jurisdiction. The FTC would be required to use its 6(b) authority to “examin[e] the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws” and then publish a report on its findings along with guidance on how covered entities can avoid violating discrimination laws.

Moreover, the National Institute of Standards and Technology (NIST) must “develop and publish a definition of ‘digital content forgery’ and accompanying explanatory materials.” One year afterwards, the FTC must “publish a report regarding the impact of digital content forgeries on individuals and competition.”

Data brokers would need to register with the FTC, which would then publish a registry of data brokers on its website.

There would be additional duties placed on covered entities. For example, these entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of covered data.” However, financial services companies subject to and in compliance with Gramm-Leach-Bliley regulations would be deemed to be in compliance with these data security obligations. The same would be true of entities subject to and in compliance with the “Health Insurance Portability and Accountability Act” and “Health Information Technology for Economic and Clinical Health Act.” Additionally, the FTC may “issue regulations to identify processes for receiving and assessing information regarding vulnerabilities to covered data that are reported to the covered entity.”

The SAFE DATA Act has language new to federal privacy bills on “opaque algorithms.” Specifically, covered internet platforms would not be able to use opaque algorithms unless notice is provided to users and an input-transparent version of the algorithm is available to users. The term “covered internet platform” is broad and encompasses “any public-facing website, internet application, or mobile application, including a social network site, video sharing service, search engine, or content aggregation service.” An “opaque algorithm” is “an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.”
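To make the opaque/input-transparent distinction concrete, here is a minimal, entirely hypothetical sketch; the bill prescribes no implementation, and the data model is invented for illustration. The opaque version orders content using inferred, user-specific signals the user never expressly supplied for ranking, while the input-transparent version relies only on data the user provided for that purpose, such as recency:

```python
from typing import Dict, List

def opaque_rank(items: List[Dict], inferred_profile: Dict[str, float]) -> List[Dict]:
    # "Opaque": ordering depends on user-specific data not expressly
    # provided by the user for ranking (e.g., inferred topic interests).
    return sorted(items,
                  key=lambda i: inferred_profile.get(i["topic"], 0.0),
                  reverse=True)

def input_transparent_rank(items: List[Dict]) -> List[Dict]:
    # "Input-transparent": ordering depends only on data the user
    # expressly provided, here simple reverse-chronological order.
    return sorted(items, key=lambda i: i["timestamp"], reverse=True)

items = [{"topic": "sports", "timestamp": 1},
         {"topic": "politics", "timestamp": 2}]

print([i["topic"] for i in opaque_rank(items, {"sports": 0.9})])  # ['sports', 'politics']
print([i["topic"] for i in input_transparent_rank(items)])        # ['politics', 'sports']
```

The bill’s requirement, as summarized above, is essentially that platforms running something like the first function must disclose it and offer users something like the second.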

The bill makes it an unfair and deceptive practice for “large online operator[s]” “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.”

A covered entity must have

  • 1 or more qualified employees or contractors as data privacy officers; and
  • 1 or more qualified employees or contractors…as data security officers.

Moreover, “[a] covered entity shall maintain internal controls and reporting structures to ensure that appropriate senior management officials of the covered entity are involved in assessing risks and making decisions that implicate compliance with this Act.”

There are also provisions protecting whistleblowers inside covered entities that “voluntarily provide[] [“original information”] to the [FTC]…relating to non-compliance with, or any violation or alleged violation of, this Act or any regulation promulgated under this Act.”

Like virtually all the other bills, the FTC would be able to levy civil fines of more than $42,000 per violation, and state attorneys general would also be able to enforce the new privacy regime. However, the FTC would be able to intervene and take over the action if it chose, and if two or more state attorneys general are bringing cases regarding the same violations, then the cases would be consolidated and heard in the federal court in the District of Columbia. The FTC would also get jurisdiction over common carriers and non-profits for purposes of enforcing the SAFE DATA Act.

And then there is new language in the SAFE DATA Act that seems aimed at addressing a pair of cases before the Supreme Court on the extent of the FTC’s power to seek and obtain certain monetary damages and equitable relief. The FTC has appealed an adverse ruling from the U.S. Court of Appeals for the Seventh Circuit while the other case is coming from the U.S. Court of Appeals for the Ninth Circuit.

Like the forerunner bill released last November, the FTC would be empowered to “approve voluntary consensus standards or certification programs that covered entities may use to comply with 1 or more provisions in this Act.” These provisions came from an Obama Administration privacy bill allowing for the development and usage of voluntary consensus-based standards for covered entities to comply with in lieu of the provisions of that bill.

The SAFE DATA Act would not impinge on existing federal privacy laws but would preempt all privacy laws at the state level. Ironically, the bill would not preempt state data breach notification laws. One would think that if uniformity across the U.S. were a driving motivation, doing so would be desirable.


Image by Gerd Altmann from Pixabay

Uncertainty As Deadlines Approach On TikTok and WeChat EOs

It is still not clear how matters will play out with a proposed Oracle/TikTok deal and the ban on WeChat (and possibly TikTok if an acceptable deal cannot be made.)

Today, the Trump Administration issued orders barring TikTok and WeChat pursuant to two executive orders, the “Executive Order on Addressing the Threat Posed by TikTok” and the “Executive Order on Addressing the Threat Posed by WeChat,” which bar any transactions with the companies that made, distribute, and operate TikTok and WeChat respectively, the former being much more popular in the United States (U.S.) than the latter. Working in the background is a potential deal between U.S. company Oracle and ByteDance that may address U.S. concerns about TikTok. On this front, there have been multiple stories from the Trump Administration about stakeholders’ positions on whether Oracle’s proposed role as a “trusted technology partner” will satisfy the national security concerns articulated in the executive order banning the app and in the U.S. government’s order to ByteDance to divest a key part of its platform. Moreover, there is growing pressure from Republicans in Congress to reject the Oracle/TikTok arrangement as it stands.

In his public remarks this week, President Donald Trump seemed underwhelmed about the proposed Oracle/TikTok deal. He said that “[c]onceptually, I can tell you I don’t like [ByteDance maintaining a stake].” Trump stated “[i]f that’s the case, I’m not going to be happy with that.” He added any acceptable deal “has to be 100 percent as far as national security is concerned, and no, I’m not prepared to sign off on anything…[and] I have to see the deal.” On the other hand, Secretary of the Treasury and chair of Committee on Foreign Investment in the United States (CFIUS) Steven Mnuchin seemed to be taking a different view. He stated “I will just say from our standpoint, we’ll need to make sure that the code is, one, secure, Americans’ data is secure, that the phones are secure and we’ll be looking to have discussions with Oracle over the next few days with our technical teams.” And to this end, the New York Times is reporting that ByteDance has accepted some unspecified changes to the deal in order to address national security concerns, and Reuters is claiming ByteDance has agreed to an initial public offering within a year.

As noted, the U.S. Department of Commerce (Commerce) issued orders effectuating the executive orders, which are set to take effect this weekend. In a press release, Commerce explained:

As of September 20, 2020, the following transactions are prohibited:

  1. Any provision of service to distribute or maintain the WeChat or TikTok mobile applications, constituent code, or application updates through an online mobile application store in the U.S.;
  2. Any provision of services through the WeChat mobile application for the purpose of transferring funds or processing payments within the U.S.

As of September 20, 2020, for WeChat, and as of November 12, 2020, for TikTok, the following transactions are prohibited:

  1. Any provision of internet hosting services enabling the functioning or optimization of the mobile application in the U.S.;
  2. Any provision of content delivery network services enabling the functioning or optimization of the mobile application in the U.S.;
  3. Any provision of directly contracted or arranged internet transit or peering services enabling the function or optimization of the mobile application within the U.S.;
  4. Any utilization of the mobile application’s constituent code, functions, or services in the functioning of software or services developed and/or accessible within the U.S.

Commerce added:

Any other prohibitive transaction relating to WeChat or TikTok may be identified at a future date. Should the U.S. Government determine that WeChat’s or TikTok’s illicit behavior is being replicated by another app somehow outside the scope of these executive orders, the President has the authority to consider whether additional orders may be appropriate to address such activities. The President has provided until November 12 for the national security concerns posed by TikTok to be resolved. If they are, the prohibitions in this order may be lifted.

Commerce has submitted notices to be published next week in the Federal Register identifying the transactions that will be illegal regarding TikTok and WeChat:

  • Pursuant to Executive Order 13942, the Secretary of Commerce is publishing the list of prohibited transactions by any person, or with respect to any property, subject to the jurisdiction of the United States, with ByteDance Ltd. (a.k.a. Zìjié Tiàodòng), Beijing, China, or its subsidiaries, including TikTok Inc., in which any such company has any interest, to address the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873, May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain), and particularly to address the threat identified in Executive Order 13942 posed by mobile application TikTok.
  • Pursuant to Executive Order 13943, the Secretary of Commerce is publishing this Identification of Prohibited Transactions related to WeChat by any person, or with respect to any property, subject to the jurisdiction of the United States, with Tencent Holdings Ltd. (a.k.a. Téngxùn Kònggŭ Yŏuxiàn Gōngsī), Shenzhen, China, or any subsidiary of that entity, to address the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873, May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain), and particularly to address the threat identified in Executive Order 13943 posed by mobile application WeChat.

While the TikTok order could be rescinded if a deal with Oracle is approved by the U.S. government, it seems unlikely that the WeChat order will be undone, at least in the short term. Moreover, these orders will undoubtedly be challenged further in court. Last month, TikTok filed suit in federal court in Northern California, asking for an injunction to stop enforcement of the EO and a declaration that it is illegal. It is possible the company, along with WeChat’s parent Tencent, will ask a federal court to stop the Trump Administration from proceeding.

Moreover, there are questions about enforcement, for the Administration cannot reasonably expect people in the U.S. to stop using and delete TikTok and WeChat. There may also be a case to be made on First Amendment grounds that the orders violate rights of free speech and association.

As mentioned, a number of Republicans have come out against the Oracle/TikTok deal. At the beginning of the week, Senator Josh Hawley (R-MO) wrote Mnuchin “calling on CFIUS to reject Oracle’s proposed partnership with ByteDance to obtain control of TikTok’s U.S. operations…[because]…the proposed partnership allows for continued Chinese Communist Party (CCP) control of TikTok, putting American data at risk and violating President Trump’s executive order.” Hawley added:

CFIUS should promptly reject any Oracle-ByteDance collaboration and send the ball back to ByteDance’s court so that the company can come up with a more acceptable solution. ByteDance can still pursue a full sale of TikTok, its code, and its algorithm to a U.S. company, so that the app can be rebuilt from the ground up to remove any trace of CCP influence.

Acting Senate Intelligence Committee Chair Marco Rubio (R-FL), Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), and Senators Thom Tillis (R-NC), Rick Scott (R-FL), Dan Sullivan (R-AK), and John Cornyn (R-TX) sent a letter to the President “outlining significant concerns regarding reports that Oracle Corp. confirmed a deal with ByteDance to become a ‘trusted technology provider’ for TikTok’s U.S. operations,” including that the “arrangement could violate the requirements set forth in the August 6, 2020 Executive Order on Addressing the Threat Posed by TikTok and would do little to satisfy the range of concerns expressed in that order.”

Senator Ted Cruz (R-TX) also wrote Mnuchin arguing:

The Chinese Communist Party and its expansionist actions represent a threat to the United States, its interests, and its allies. This Administration has correctly recognized this threat and has taken substantial counter-measures in response to protect our national security. I urge you to do the same when reviewing the newly submitted plan of a transaction between the Chinese company ByteDance and Oracle.

So far, Democrats in Congress, and the Biden campaign, have remained silent, apparently willing to let Republicans criticize the proposed deal from the right. The White House may ultimately prove susceptible to criticism and seek a modified deal to allay these concerns. However, these Republican Senators seem to be laying out a case for a much more dramatic transaction, but one that would likely run afoul of new regulations issued by the People’s Republic of China on export controls. Late last month, two PRC agencies changed the PRC’s export control rules for the first time since 2008, likely to gain leverage over TikTok’s sale to a U.S. entity. Ostensibly, the changes are “to regulate technology exports, promote scientific and technological progress and economic and technological cooperation, and maintain national economic security,” but the inclusion of “personalised information recommendation service technology based on data analysis” and “artificial intelligence interactive interfaces” likely points to ByteDance’s app, TikTok. In fact, a researcher with the PRC Ministry of Commerce was quoted as asserting “[t]he time to publish the new update of the export control list has been expedited due to the TikTok sale.”

© Michael Kans, Michael Kans Blog and, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and with appropriate and specific direction to the original content.


Dueling COVID-19 Privacy Bills Released

Democratic stakeholders answer a Republican proposal on how to regulate privacy issues raised by COVID-19 contact tracing. The proposals have little chance of enactment and are mostly about positioning.  


Late last week, a group of Democratic stakeholders on privacy and data security issues released the “Public Health Emergency Privacy Act” (S.3749), a bill that serves as a counterpoint to the “COVID-19 Consumer Data Protection Act” (S.3663) introduced a few weeks ago to address the privacy issues raised by contact tracing of COVID-19. However, the Democratic bill contains a number of provisions that many Republicans consider non-starters, such as a private right of action for individuals and no preemption of state laws. It is not likely this bill will advance in the Senate even though it may possibly be moved in the House. S. 3749 was introduced by Senators Richard Blumenthal (D-CT) and Mark Warner (D-VA) and Representatives Anna Eshoo (D-CA), Jan Schakowsky (D-IL), and Suzan DelBene (D-WA).

In a way, the “Public Health Emergency Privacy Act” makes the case that “Health Insurance Portability and Accountability Act” (HIPAA)/“Health Information Technology for Economic and Clinical Health Act” (HITECH Act) regulations are inadequate to protect the privacy and data of people in the United States for the reason that these regulations apply only to healthcare providers, their business associates, and other named entities in the healthcare system. Entities collecting healthcare information, even very sensitive information, are not subject to HIPAA/HITECH regulations and would likely be regulated, to the extent they are, by the Federal Trade Commission (FTC) or state agencies and attorneys general.

The “Public Health Emergency Privacy Act” would cover virtually all entities, including government agencies, except for public health authorities (e.g. a state department of health or the Centers for Disease Control and Prevention), healthcare providers, service providers, and people acting in a household capacity. This remarkably broad definition likely reflects plans announced by governments around the world to eschew Google and Apple’s contact tracing app and develop their own. It would also touch some efforts apart from contact tracing apps. Moreover, this is the first bill introduced recently that proposes to treat public and private entities the same way with respect to how and when they may collect personal data.

The types of data protected under the act are “emergency health data,” which are defined as “data linked or reasonably linkable to an individual or device, including data inferred or derived about the individual or device from other collected data provided such data is still linked or reasonably linkable to the individual or device, that concerns the public COVID–19 health emergency.” The bill then lists a sweeping and comprehensive set of examples of emergency health data. This term includes “information that reveals the past, present, or future physical or behavioral health or condition of, or provision of healthcare to, an individual” related to testing for COVID-19, along with related genetic and biometric information. Geolocation and proximity information would also be covered by this definition. Finally, emergency health data encompasses “any other data collected from a personal device,” which seems to cover all data collection from a person’s phone, the primary means of contact tracing proposed thus far. However, it would appear that data on people collected at the household or higher level may not be subject to this definition and hence much of the bill’s protections.

The authority provided to covered entities is limited. First, collection, use, and disclosure of emergency health data are restricted to good faith public health purposes. And yet, this term is not defined in the act, and the FTC may need to define it during the expedited rulemaking the bill requires. However, it seems a fairly safe assumption the agency and courts would not construe using emergency health data for advertising as a good faith public health purpose. Next, covered entities must allow people to correct inaccurate information, and the entity itself has a duty to take reasonable efforts on its own to correct this information. Covered entities must implement reasonable safeguards to prevent discrimination on the basis of these data, which is a new obligation placed on covered entities in a privacy or data security bill. This provision may have been included to bar government agencies from collecting and using covered data for discriminatory purposes. However, critics of more expansive bills may see this as the camel’s nose under the tent for future privacy and data security bills. Finally, the only government agencies that can legally be provided emergency health data are public health authorities, and then only “for good faith public health purposes and in direct response to exigent circumstances.” Again, the limits of good faith public health purposes remain unclear, and such disclosure can only occur in exigent circumstances, which likely means when a person has contracted or been exposed to COVID-19.

Covered entities and service providers must implement appropriate measures to protect the security and confidentiality of emergency health data, but the third member of the usual triumvirate, availability, is not identified as requiring protection.

The “Public Health Emergency Privacy Act” outright bars a number of uses for emergency health data:

  • Commercial advertising, recommendations for e-commerce, or machine learning for these purposes
  • Any discriminatory practices related to employment, finance, credit, insurance, housing, or education opportunities; and
  • Discriminating with respect to goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation

It bears noting that covered data cannot be collected or disclosed for these purposes either.

The act contains very strong consent language. Covered entities may not collect, use, or disclose emergency health data unless a person has provided express affirmative consent, which requires knowing, informed choice that cannot be obtained through the use of deceptive practices nor inferred through a person’s inaction. However, there are significant exceptions to this consent requirement that would allow covered entities to collect, use, or disclose these data, including

  • protecting against malicious, deceptive, fraudulent, or illegal activity;
  • detecting, responding to, or preventing information security incidents or threats; or
  • complying with a legal obligation.

But these purposes are valid only when necessary and solely for the fulfillment of the named purpose.

In a related vein, covered entities must allow people to revoke their consent, and the revocation must be effective as soon as practicable but no later than 15 days afterward. Moreover, the person’s emergency health data must be destroyed or rendered not linkable within 30 days of revocation of consent.

Covered entities must provide clear and conspicuous notice before or at the point of data collection that explains how and why the emergency health data is collected, used, and disclosed. Moreover, it must also disclose the categories of recipients to whom a covered entity discloses covered data. This notice must also disclose the covered entity’s data retention policies and data security policies and practices but only with respect to emergency health data. The notice must also inform consumers on how they may exercise rights under the act and how to file a complaint with the FTC.

There would also be a public reporting requirement for those covered entities collecting, using, or disclosing the emergency health data of 100,000 or more people. Every three months, these entities would need to report on aggregate figures on the number of people from whom data was collected, used, and disclosed. These reports must also detail the categories of emergency health data collected, used and disclosed and the categories of third parties with whom such data are shared.

The bill requires covered entities to destroy or render not linkable emergency health data 60 days after the Secretary of Health and Human Services declares the end of the COVID-19 public health emergency, or a state does so, or 60 days after collection.

The FTC would be given the daunting task of beginning a rulemaking 7 days after enactment “to ensure a covered organization that has collected, used, or disclosed emergency health data before the date of enactment of this Act is in compliance with this Act, to the degree practicable” that must be completed within 45 days. There is also a provision requiring the Department of Health and Human Services (HHS) to issue guidance so that HIPAA/HITECH Act regulated entities do not need to meet duplicative requirements, and, moreover, the bill exempts these entities when operating under the HIPAA/HITECH Act regulations.

The FTC, state attorneys general, and individuals would be able to enforce this new act through a variety of means. The FTC would treat violations as if they were violations of a regulation barring a deceptive or unfair practice, meaning the agency could levy fines in the first instance of more than $43,000 per violation. The FTC could seek a range of other relief, including injunctions, restitution, disgorgement, and remediation. Moreover, the FTC could go to federal court without having to consult with the Department of Justice, which is a departure from current law and almost all the privacy bills introduced in this Congress. The FTC’s jurisdiction would be broadened to include common carriers and non-profits for purposes of this bill, too. The FTC would also receive notice-and-comment rulemaking authority that is very broad, as the agency would be free to promulgate any regulations it sees as necessary to effectuate this act.

State attorneys general would be able to seek all the same relief the FTC can so long as the latter is not already bringing an action. Moreover, any other state officials empowered by state statute to bring similar actions may do so for violations of this act.

People would be allowed to sue in either federal or state court to vindicate violations. The bill states that any violation is to be considered an injury in fact to forestall any court from finding that a violation does not injure the person, meaning her suit cannot proceed. The act would allow people to recover between $100 and $1,000 for any negligent violation and between $500 and $5,000 for any reckless, willful, or intentional violation, with no cap on total damages. People may also ask for and receive reasonable attorney’s fees and costs and any equitable or declaratory relief a court sees fit to grant. Moreover, the bill would disallow all pre-lawsuit arbitration agreements or waivers of rights. Much of the right to sue granted to people by the “Public Health Emergency Privacy Act” will be opposed by many Republicans and industry stakeholders.

The enforcement section raises a few questions given that entities covered by the bill include government agencies. Presumably, the FTC cannot enforce this act against government agencies for the very good reason that it does not have jurisdiction over them. However, does the private right of action waive the federal and state governments’ sovereign immunity? This is not clear and may need clarification if the bill is acted upon in either chamber of Congress.

This bill would not preempt state laws, which, if enacted, could subject covered entities to meeting more than one regime for collecting, using, and disclosing emergency health data.

Apart from contact tracing, the “Public Health Emergency Privacy Act” also bars the use of emergency health data to abridge a person’s right to vote and it requires HHS, the United States Commission on Civil Rights, and the FTC to “prepare and submit to Congress reports that examines the civil rights impact of the collection, use, and disclosure of health information in response to the COVID–19 public health emergency.”

Given the continued impasse over privacy legislation, it is little wonder that the bill unveiled a few weeks ago by four key Senate Republicans takes a similar approach that nevertheless differs in key respects. Of course, there is no private right of action, and it expressly preempts state laws to the contrary.

Generally speaking, the structure of the “COVID–19 Consumer Data Protection Act of 2020” (S.3663) tracks with the bills that have been released thus far by the four sponsors: Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS) (See here for analysis of the “Consumer Data Privacy Act of 2019”) and Senators John Thune (R-SD), Jerry Moran (R-KS) (See here for analysis of the “Consumer Data Privacy and Security Act of 2020” (S.3456)), and Marsha Blackburn (R-TN) (See here for analysis of the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116)). In short, people would be provided with notice about what information the app collects, how it is processed, and with whom and under what circumstances this information will be shared. Then a person would be free to make an informed choice about whether or not she wants to consent and allow the app or technology to operate on her smartphone. The FTC and state attorneys general would enforce the new protections.

The scope of the information and entities covered is narrower than the Democratic bill. “Covered data” is “precise geolocation data, proximity data, a persistent identifier, and personal health information” but not aggregated data, business contact information, de-identified data, employee screening data, and publicly available information. Those entities covered by the bill are those already subject to FTC jurisdiction along with common carriers and non-profits, but government agencies are not, again unlike the bill put forth by Democrats. Entities must abide by this bill to the extent they “collect[], process[], or transfer[] such covered data, or determine[] the means and purposes for the collection, processing, or transfer of covered data.”

Another key definition is how an “individual” is determined, for it excludes any person acting in her role as an employee and almost all work-related capacities, making clear employers will not need to comply with respect to those working for them.

“Personal health information” is “information relating to an individual that is

  • genetic information of the individual; or
  • information relating to the diagnosis or treatment of past, present, or future physical, mental health, or disability of the individual; and
  • identifies, or is reasonably linkable to, the individual.”

And yet, this term excludes educational information subject to the “Family Educational Rights and Privacy Act of 1974” and information subject to regulations promulgated pursuant to the HIPAA/HITECH Acts.

The “COVID–19 Consumer Data Protection Act of 2020” bars the collection, processing, or transferring of covered data for a covered purpose unless prior notice is provided, a person has provided affirmative consent, and the covered entity has agreed not to collect, process, or transfer such covered data for any other purpose than those detailed in the notice. However, leaving aside the bill’s enumerated allowable purposes for which covered data may be collected with consent, the bill provides a number of exemptions from this general bar. For example, collection, processing, or transfers necessary for a covered entity to comply with another law are permissible apparently in the absence of a person’s consent. Moreover, a covered entity need not obtain consent for operational or administrative tasks not disclosed in the notice provided to people.

The act does spell out “covered purposes” for which covered entities may collect, process, or transfer covered data with consent after notice has been given:

  • Collecting, processing, or transferring the covered data of an individual to track the spread, signs, or symptoms of COVID–19.
  • Collecting, processing, or transferring the covered data of an individual to measure compliance with social distancing guidelines or other requirements related to COVID–19 that are imposed on individuals under a Federal, State, or local government order.
  • Collecting, processing, or transferring the covered data of an individual to conduct contact tracing for COVID–19 cases.

Covered entities would be required to publish a privacy policy detailing for which of the above covered purposes a person’s covered data would be collected, processed, or transferred. This policy must also detail the categories of entities that receive covered data, its data retention and data security policies.

There would be reporting requirements that would reach more covered entities than the Democratic bill’s. Accordingly, any covered entity collecting, processing, or transferring covered data for one of the enumerated covered purposes must issue a public report 30 days after enactment and then every 60 days thereafter.

Among other provisions in the bill, people would be able to revoke consent, a request that covered entities must honor within 14 days. Covered entities must also ensure covered data are accurate, but this requirement falls a bit short of granting people the right to correct inaccurate data; instead, they would merely be able to report inaccuracies. There is no recourse if a covered entity chooses not to heed these reports. Covered entities would need to “delete or de-identify all covered data collected, processed, or transferred for a [covered purpose] when it is no longer being used for such purpose and is no longer necessary to comply with a Federal, State, or local legal obligation, or the establishment, exercise, or defense of a legal claim.” Even though there is the commandment to delete or de-identify, the timing as to when that happens seems somewhat open-ended, as some covered entities could cite legal obligations in order to keep the data on hand.

Covered entities must also minimize covered data by limiting collection, processing, and transferring to “what is reasonably necessary, proportionate, and limited to carry out [a covered purpose.]” The FTC must draft and release guidelines “recommending best practices for covered entities to minimize the collection, processing, and transfer of covered data.” However, these guidelines would most likely be advisory in nature and would not carry the force of law or regulation, leaving covered entities to disregard some of the document if they choose. Covered entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of such data,” a mandate broader than the duty imposed by the Democratic bill.

The FTC and state attorneys general would enforce the new regime. The FTC would be able to seek civil fines for first violations in the same way as the Democrats’ privacy bill. However, unlike the other bill, the Republican bill would nullify any Federal Communications Commission (FCC) jurisdiction to the extent it conflicted with the FTC’s in enforcement of the new act. Presumably, this would address jurisdictional issues raised by placing common carriers under FTC jurisdiction when they are usually overseen by the FCC. Even though state laws are preempted, state attorneys general would be able to bring actions to enforce this act at the state level. And, as noted earlier, there is no private right of action.


Senate Commerce Republicans Vow To Introduce Privacy Bill To Govern COVID-19 Apps and Tech

Key Republican stakeholders on privacy legislation float a COVID-19 privacy bill that seems unlikely to garner the necessary Democratic buy-in to advance.

Late last week, key Republicans on the Senate Commerce, Science, and Transportation Committee announced they would introduce the “COVID-19 Consumer Data Protection Act,” which would provide new privacy and data security protections for the use of COVID-19 contact tracing apps and similar technologies. To date, text of the legislation has not been released, and so any analysis of the bill is derived from a short summary issued by the committee and reports from media outlets that have apparently been provided a copy of the bill.

Based on this information, to no great surprise, the basic structure of the bill tracks privacy and data protection legislation previously introduced by the co-sponsors of the new bill: Chair Roger Wicker (R-MS) (See here for analysis of the “Consumer Data Privacy Act of 2019”) and Senators John Thune (R-SD), Jerry Moran (R-KS) (See here for analysis of the “Consumer Data Privacy and Security Act of 2020” (S.3456)), and Marsha Blackburn (R-TN) (See here for analysis of the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116)). In short, people would be provided with notice about what information the app collects, how it is processed, and with whom and under what circumstances this information will be shared. Then a person would be free to make an informed choice about whether or not she wants to consent and allow the app or technology to operate on her smartphone. The Federal Trade Commission (FTC) and state attorneys general would enforce the new protections, and as there was no mention of a private right of action, and given these Members’ opposition to such provisions, it is likely the bill does not provide such redress. Moreover, according to media reports, the bill would preempt state laws contrary to its provisions, which would be another likely non-starter among Democrats.

Wicker, Thune, Moran, and Blackburn claimed their bill “would provide all Americans with more transparency, choice, and control over the collection and use of their personal health, geolocation, and proximity data…[and] would also hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic” as they asserted in their press release.

Wicker, Thune, Moran, and Blackburn provided this summary of the “COVID-19 Consumer Data Protection Act:”

  • Require companies under the jurisdiction of the Federal Trade Commission to obtain affirmative express consent from individuals to collect, process, or transfer their personal health, geolocation, or proximity information for the purposes of tracking the spread of COVID-19.
  • Direct companies to disclose to consumers at the point of collection how their data will be handled, to whom it will be transferred, and how long it will be retained.
  • Establish clear definitions about what constitutes aggregate and de-identified data to ensure companies adopt certain technical and legal safeguards to protect consumer data from being re-identified.
  • Require companies to allow individuals to opt out of the collection, processing, or transfer of their personal health, geolocation, or proximity information.
  • Direct companies to provide transparency reports to the public describing their data collection activities related to COVID-19.
  • Establish data minimization and data security requirements for any personally identifiable information collected by a covered entity.
  • Require companies to delete or de-identify all personally identifiable information when it is no longer being used for the COVID-19 public health emergency.
  • Authorize state attorneys general to enforce the Act.

If such legislation were to pass, it would add to the patchwork of privacy and data security laws already enacted that are geared toward certain sectors or populations (e.g. the “Health Insurance Portability and Accountability Act” (HIPAA) protects some healthcare information and the “Children’s Online Privacy Protection Act” (COPPA) broadly protects children online).
