UK and EU Defer Decision On Data Flows

Whether there will be an adequacy decision allowing the free flow of personal data under the GDPR from the EU to the recently departed UK has been punted. And, its recent status as a member of the EU notwithstanding, the UK might not get an adequacy decision.

In reaching agreement on many aspects of the United Kingdom’s (UK) exit from the European Union (EU), negotiators did not reach agreement on whether the EU would permit the personal data of EU persons to continue flowing to the UK under the easiest means possible. Instead, the EU and UK agreed to let the status quo continue until an adequacy decision is made or six months lapse. Data flows between the UK and EU were valued at more than £100 billion in 2017, according to British estimates, with the majority of this trade flowing from the UK to the EU.

Under the General Data Protection Regulation (GDPR), the personal data of EU persons can be transferred to other nations for most purposes once the European Commission (EC) has found that the other nation provides protections essentially equivalent to those granted in the EU. Of course, this has been an ongoing issue with data flows to the United States (U.S.), as two agreements (Safe Harbor and Privacy Shield) and their EC adequacy decisions were ruled illegal, in large part because, according to the EU’s highest court, U.S. law does not provide EU persons with the same rights they have in the EU. Most recently, this occurred in 2020 when the Court of Justice of the European Union (CJEU) struck down the adequacy decision underpinning the EU-United States Privacy Shield (aka Schrems II). It bears note that transfers of personal data may occur through other means under the GDPR that may prove more resource intensive: standard data protection clauses (SCC), binding corporate rules (BCR), and others.

Nevertheless, an adequacy decision is seen as the most desirable means of transfer, and the question of whether the UK’s laws are sufficient has lingered over the Brexit discussions, with some claiming that the nation’s membership in the Five Eyes surveillance alliance with the U.S. and others may disqualify the UK. Given the range of thorny issues the UK and EU punted (e.g. how to handle the border between Northern Ireland and Ireland), it is not surprising that the question of the GDPR and data flows was also punted.

The UK-EU Trade and Cooperation Agreement (TCA) explained the terms of the data flow agreement and, as noted, in the short term, the status quo will continue with data flows to the UK being treated as if it were still part of the EU. This state will persist until the EC reaches an adequacy decision or for four months, with another two months of the status quo being possible in the absence of an adequacy decision so long as neither the UK nor EU object. Moreover, these provisions are operative only so long as the UK has its GDPR-compliant data protection law (i.e. UK Data Protection Act 2018) in place and does not exercise specified “designated powers.” The UK has also deemed EU and European Economic Area (EEA) and European Free Trade Association (EFTA) nations to be adequate for purposes of data transfers from the UK on a transitional basis.

Specifically, the TCA provides:

For the duration of the specified period, transmission of personal data from the Union to the United Kingdom shall not be considered as transfer to a third country under Union law, provided that the data protection legislation of the United Kingdom on 31 December 2020, as it is saved and incorporated into United Kingdom law by the European Union (Withdrawal) Act 2018 and as modified by the Data Protection, Privacy and Electronic Communications (Amendments etc) (EU Exit) Regulations 2019 (“the applicable data protection regime”), applies and provided that the United Kingdom does not exercise the designated powers without the agreement of the Union within the Partnership Council.

The UK also agreed to notify the EU if it “enters into a new instrument which can be relied on to transfer personal data to a third country under Article 46(2)(a) of the UK GDPR or section 75(1)(a) of the UK Data Protection Act 2018 during the specified period.” However, if the EU were to object, it appears from the terms of the TCA, all the EU could do is force the UK “to discuss the relevant object.” And yet, should the UK sign a treaty allowing personal data to flow to a nation the EU deems inadequate, this could obviously adversely affect the UK’s prospects of getting an adequacy decision.

Not surprisingly, the agreement also pertains to the continued flow of personal data as part of criminal investigations and law enforcement matters but not national security matters. Moreover, these matters fall outside the scope of the GDPR and would be largely unaffected by an adequacy decision or the lack of one. In a British government summary, it is stated that the TCA

provide[s] for law enforcement and judicial cooperation between the UK, the EU and its Member States in relation to the prevention, investigation, detection and prosecution of criminal offences and the prevention of and fight against money laundering and financing of terrorism.

The text of the TCA makes clear national security matters vis-à-vis data flows and information sharing are not covered:

This Part only applies to law enforcement and judicial cooperation in criminal matters taking place exclusively between the United Kingdom, on the one side, and the Union and the Member States, on the other side. It does not apply to situations arising between the Member States, or between Member States and Union institutions, bodies, offices and agencies, nor does it apply to the activities of authorities with responsibilities for safeguarding national security when acting in that field.

The TCA also affirms:

  • The cooperation provided for in this Part is based on the Parties’ long-standing commitment to ensuring a high level of protection of personal data.
  • To reflect that high level of protection, the Parties shall ensure that personal data processed under this Part is subject to effective safeguards in the Parties’ respective data protection regimes…

The United Kingdom’s data protection authority (DPA), the Information Commissioner’s Office (ICO), issued an explanation of how British law enforcement entities should act in light of the TCA. The ICO explained to British entities how the TCA affects law enforcement-related data transfers to the UK:

  • We are now a ‘third country’ for EU data protection purposes. If you receive personal data from a law enforcement partner in the EU, this means the sender will need to comply with the transfer provisions under their national data protection law (which are likely to be similar to those in Part 3 of the DPA 2018).
  • This means the EU sender needs to make sure other appropriate safeguards are in place – probably through a contract or other binding legal instrument, or by making their own assessment of appropriate safeguards. The sender can take into account the protection provided by the DPA 2018 itself when making this assessment.
  • If you receive personal data from other types of organisations in the EU or EEA who are subject to the GDPR, the sender will need to comply with the transfer provisions of the UK GDPR. You may want to consider putting standard contractual clauses (SCCs) in place to ensure adequate safeguards in these cases. We have produced an interactive tool to help you use the SCCs.

The ICO explained for transfers from the UK to the EU (but not the EEA):

  • There is a transitional adequacy decision in place to cover transfers to EU member states and Gibraltar. This will not extend to EEA countries outside the EU, where you should continue to consider other safeguards.
  • This means you can continue to send personal data from the UK to your law enforcement partners in the EU, as long as you can show the transfer is necessary for law enforcement purposes. You can also transfer personal data to non-law enforcement bodies in the EU if you can meet some additional conditions, but you will need to notify the ICO.

Turning back to an adequacy decision and commercial transfers of personal data from the EU to the UK, in what may well be a preview of a world in which there is no adequacy decision between the UK and EU, the European Data Protection Board (EDPB) issued an “information note” in mid-December that spells out how the GDPR would be applied:

  • In the absence of an adequacy decision applicable to the UK as per Article 45 GDPR, such transfers will require appropriate safeguards (e.g., standard data protection clauses, binding corporate rules, codes of conduct…), as well as enforceable data subject rights and effective legal remedies for data subjects, in accordance with Article 46 GDPR.
  • Subject to specific conditions, it may still be possible to transfer personal data to the UK based on a derogation listed in Article 49 GDPR. However, Article 49 GDPR has an exceptional nature and the derogations it contains must be interpreted restrictively and mainly relate to processing activities that are occasional and non-repetitive.
  • Moreover, where personal data are transferred to the UK on the basis of Article 46 GDPR safeguards, supplementary measures might be necessary to bring the level of protection of the data transferred up to the EU standard of essential equivalence, in accordance with the Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data.

Regarding commercial data transfers, the ICO issued a statement urging British entities to start setting up “alternative transfer mechanisms” to ensure data continues to flow from the EU to UK:

  • The Government has announced that the Treaty agreed with the EU will allow personal data to flow freely from the EU (and EEA) to the UK, until adequacy decisions have been adopted, for no more than six months.
  • This will enable businesses and public bodies across all sectors to continue to freely receive data from the EU (and EEA), including law enforcement agencies.
  • As a sensible precaution, before and during this period, the ICO recommends that businesses work with EU and EEA organisations who transfer personal data to them, to put in place alternative transfer mechanisms, to safeguard against any interruption to the free flow of EU to UK personal data.

However, even though these more restrictive means of transferring personal data to the UK exist, there will likely be legal challenges. It bears note that in light of Schrems II, EU DPAs are likely to apply a much higher level of scrutiny to SCCs, and challenges to the legality of using SCCs to transfer personal data to the U.S. have already been commenced. It also seems certain the legality of using SCCs to transfer data to the UK would be challenged as well.

However, returning to the preliminary issue of whether the EC will give the UK an adequacy decision, there may be a number of obstacles to a finding that the UK’s data protection and surveillance laws are indeed adequate under EU law[1]. Firstly, the UK’s surveillance practices in light of a recent set of CJEU rulings may prove difficult for the EC to stomach. In 2020, the CJEU handed down a pair of rulings on the extent to which EU nations may engage in bulk, indiscriminate collection of two types of data related to electronic communications. The CJEU found that while EU member nations may conduct these activities to combat crime or national security threats during periods limited by necessity and subject to oversight, nations may not generally require the providers of electronic communications to store and provide indiscriminate location data and traffic data in response to an actual national security danger or a prospective one. The CJEU combined three cases, which came from the UK, France, and Belgium, into two rulings to elucidate the reach of the Privacy and Electronic Communications Directive in relation to foundational EU laws.

The UK is, of course, one of the U.S.’s staunchest allies and partners when it comes to government surveillance of electronic communications. On this point, the CJEU summarized the beginning of the case out of the UK:

  • At the beginning of 2015, the existence of practices for the acquisition and use of bulk communications data by the various security and intelligence agencies of the United Kingdom, namely GCHQ, MI5 and MI6, was made public, including in a report by the Intelligence and Security Committee of Parliament (United Kingdom). On 5 June 2015, Privacy International, a non-governmental organisation, brought an action before the Investigatory Powers Tribunal (United Kingdom) against the Secretary of State for Foreign and Commonwealth Affairs, the Secretary of State for the Home Department and those security and intelligence agencies, challenging the lawfulness of those practices.

Secondly, the government of Prime Minister Boris Johnson may aspire to change data laws in ways the EU does not. In media accounts, unnamed EC officials were critical of the UK’s 2020 “National Data Strategy,” particularly references to “legal barriers (real and perceived)” to accessing data that “must be addressed.”

Thirdly, it may become a matter of politics. The EU has incentives to make the UK’s exit from the EU difficult to dissuade other nations from following the same path. Moreover, having previously been the second largest economy in the EU as measured by GDP, the UK may prove a formidable economic competitor, lending more weight to the view that the EU may not want to help the UK’s businesses compete with the EU’s.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by succo from Pixabay


[1] European Union Parliament, “The EU-UK relationship beyond Brexit: options for Police Cooperation and Judicial Cooperation in Criminal Matters,” Page 8: Although the UK legal framework is currently broadly in line with the EU legal framework and the UK is a signatory to the European Convention on Human Rights (ECHR), there are substantial questions over whether the Data Protection Act fully incorporates the data protection elements required by the Charter of Fundamental Rights, concerning the use of the national security exemption from the GDPR used by the UK, the retention of data and bulk powers granted to its security services, and over its onward transfer of this data to third country security partners such as the ‘Five Eyes’ partners (Britain, the USA, Australia, New Zealand and Canada).

Privacy Shield Hearing

The focus was on how the U.S. and EU can reach agreement on an arrangement that will not be struck down by the EU’s highest court.

Last week, the Senate Commerce, Science, and Transportation Committee held a hearing on the now invalidated European Union (EU)-United States (U.S.) Privacy Shield, a mechanism that allowed companies to transfer the personal data of EU residents to the U.S. The EU’s highest court struck down the adequacy decision that underpinned the system on the basis of U.S. surveillance activities and a lack of redress that violated EU law. This is the second time in the decade the EU’s top court has invalidated a transfer arrangement, the first being the Safe Harbor system. Given the estimated billions, or even trillions, of dollars in value realized from data flows between the EU and U.S., there is keen interest on both sides of the Atlantic in finding a legal path forward. However, absent significant curtailment of U.S. surveillance and/or a significant expansion of the means by which EU nationals could have violations of their rights rectified, it would appear a third agreement may not withstand the inevitable legal challenges. Moreover, there are questions as to the legality of other transfer tools in light of the Court of Justice of the European Union’s decision in the case known as Schrems II, and the legality of some Standard Contractual Clauses (SCC) and Binding Corporate Rules (BCR) may soon be found in violation, too.

Consequently, a legislative fix, or some portion thereof, could be attached to federal privacy legislation. Hence, the striking down of Privacy Shield may provide additional impetus to Congress and the next Administration to reach a deal on privacy. Moreover, the lapsed reauthorization of some Foreign Intelligence Surveillance Act authorities may be another legislative opportunity for the U.S. to craft an approach amenable to the EU in order to either obtain an adequacy decision or a successor agreement to the Privacy Shield.

Chair Roger Wicker (R-MS) approached the issue from the perspective of international trade and the economic benefit accruing to businesses on both sides of the Atlantic. His opening remarks pertained less to the privacy and surveillance aspects of the CJEU’s ruling. Wicker appears to be making the case that the EU seems to misunderstand that redress rights in the U.S. are more than adequate, and the U.S.’ surveillance regime is similar to those of some EU nations. One wonders if the CJEU is inclined to agree with this position. Nonetheless, Wicker expressed hope that the EU and U.S. can reach “a durable and lasting data transfer framework…that provides meaningful data protections to consumers, sustains the free flow of information across the Atlantic, and encourages continued economic and strategic partnership with our European allies – a tall order but an essential order.” He worried about the effect of the CJEU’s ruling on SCCs. Wicker made the case that the EU and U.S. share democratic values and hinted that the ongoing talks in the committee to reach a federal data privacy law might include augmented redress rights that might satisfy the CJEU.

Ranking Member Maria Cantwell (D-WA) spoke very broadly about a range of issues related to data transfers and privacy. She stressed the importance of data flows in the context of larger trade relations. Cantwell also stressed the shared values between the U.S. and the EU and her hope that the two entities work “together on these very important national concerns, trade and technology, so that we can continue to improve economic opportunities and avoid moves towards protectionism.” She also called for federal privacy legislation but hinted that states should still be able to regulate privacy, suggesting her commitment to having a federal law be a floor for state laws. Cantwell also asserted that bulk surveillance, the likes of which the National Security Agency has engaged in, may simply not be legal under EU law.

Deputy Assistant Secretary of Commerce for Services James Sullivan blurred the issues presented by Schrems II much like Cantwell did. The CJEU’s decision that focused on U.S. surveillance practices and the lack of meaningful recourse in the U.S. if an EU resident’s rights were violated was merged into a call for like-minded nations to unite against authoritarian nations. Sullivan distinguished between U.S. surveillance and the surveillance conducted by the People’s Republic of China (without naming the nation) and other regimes as if this should satisfy the EU as to the legality and propriety of U.S. treatment of EU personal data. Sullivan stated:

  • The Schrems II decision has created enormous uncertainties for U.S. companies and the transatlantic economy at a particularly precarious time. Immediately upon issuance of the ruling, the 5,400 Privacy Shield participants and their business partners in the EU could no longer rely on the Framework as a lawful basis for transferring personal data from Europe to the United States. Because neither the Court nor European data protection authorities provided for any enforcement grace period, Privacy Shield companies were left with three choices: (1) risk facing potentially huge fines (of up to 4 percent of total global turnover in the preceding year) for violating GDPR, (2) withdraw from the European market, or (3) switch right away to another more expensive data transfer mechanism.
  • Unfortunately, because of the Court’s ruling in the Privacy Shield context that U.S. laws relating to government access to data do not confer adequate protections for EU personal data, the use of other mechanisms like SCCs and BCRs to transfer EU personal data to the United States is now in question as well.
  • The objective of any potential agreement between the United States and the European Commission to address Schrems II is to restore the continuity of transatlantic data flows and the Framework’s privacy protections by negotiating targeted enhancements to Privacy Shield that address the Court’s concerns in Schrems II. Any such enhancements must respect the U.S. Government’s security responsibilities to our citizens and allies.
  • To be clear, we expect that any enhancements to the Privacy Shield Framework would also cover transfers under all other EU-approved data transfer mechanisms like SCCs and BCRs as well.
  • The Schrems II decision has underscored the need for a broader discussion among likeminded democracies on the issue of government access to data. Especially as a result of the extensive U.S. surveillance reforms since 2015, the United States affords privacy protections relating to national security data access that are equivalent to or greater than those provided by many other democracies in Europe and elsewhere.
  • To minimize future disruptions to data transfers, we have engaged with the European Union and other democratic nations in a multilateral discussion to develop principles based on common practices for addressing how best to reconcile law enforcement and national security needs for data with protection of individual rights.
  • It is our view that democracies should come together to articulate shared principles regarding government access to personal data—to help make clear the distinction between democratic societies that respect civil liberties and the rule of law and authoritarian governments that engage in the unbridled collection of personal data to surveil, manipulate, and control their citizens and other individuals without regard to personal privacy and human rights. Such principles would allow us to work with like-minded partners in preserving and promoting a free and open Internet enabled by the seamless flow of data.

Federal Trade Commission (FTC) Commissioner Noah Joshua Phillips stressed he was speaking in a personal capacity and not for the FTC. He extolled the virtues of the “free and open” internet model in the U.S. with the double implication that it is superior both to the models of nations like the PRC and Russia and to the EU model. Phillips seemed to be advocating for talking the EU into accepting that the U.S.’s privacy regime and civil liberties protections are stronger than any other nation’s. He also made the case, like other witnesses, that the U.S. data privacy and protection regime is more similar to the EU’s than to those of the PRC, Russia, and others. Phillips also sought to blur the issues and recast Privacy Shield in the context of the global struggle between democracies and authoritarian regimes. Phillips asserted:

  • First, we need to find a path forward after Schrems II, to permit transfers between the U.S. and EU. I want to recognize the efforts of U.S. and EU negotiators to find a replacement for Privacy Shield. While no doubt challenging, I have confidence in the good faith and commitment of public servants like Jim Sullivan, with whom I have the honor of appearing today, and our partners across the Atlantic. I have every hope and expectation that protecting cross-border data flows will be a priority for the incoming Administration, and I ask for your help in ensuring it is.
  • Second, we must actively engage with nations evaluating their approach to digital governance, something we at the FTC have done, to share and promote the benefits of a free and open Internet. There is an active conversation ongoing internationally, and at every opportunity—whether in public forums or via private assistance—we must ensure our voice and view is heard.
  • Third, we should be vocal in our defense of American values and policies. While we as Americans always look to improve our laws—and I commend the members of this committee on their important work on privacy legislation and other critical matters—we do not need to apologize to the world. When it comes to civil liberties or the enforcement of privacy laws, we are second to none. Indeed, in my view, the overall U.S. privacy framework—especially with the additional protections built into Privacy Shield—should certainly qualify as adequate under EU standards.
  • Fourth, as European leaders call to strengthen ties with the U.S., we should prioritize making our regimes compatible for the free flow of data. This extends to the data governance regimes of like-minded countries outside of Europe as well. Different nations will have different rules, but relatively minor differences need not impede mutually-beneficial commerce. We need not and should not purport to aim for a single, identical system of data governance. And we should remind our allies, and remind ourselves, that far more unites liberal democracies than divides us.
  • Fifth and finally, if we must draw lines, those lines should be drawn between allies with shared values—the U.S., Europe, Japan, Australia, and others—and those, like China and Russia, that offer a starkly different vision. I am certainly encouraged when I hear recognition of this distinction from Europe. European Data Protection Supervisor Wojciech Wiewiórowski recently noted that the U.S. is much closer to Europe than is China and that he has a preference for data being processed by countries that share values with Europe. Some here in the U.S. are even proposing agreements to solidify the relationships among technologically advanced democracies, an idea worth exploring in more detail.

Washington University Professor of Law Neil Richards stressed that the Schrems II decision spells out how the U.S. would achieve adequacy: reforming surveillance and providing meaningful redress for alleged privacy violations. Consequently, FISA would need to be rewritten and narrowed and a means for EU residents to seek relief beyond the current Ombudsman system is needed, possibly a statutory right to sue. Moreover, he asserted strong data protection and privacy laws are needed and some of the bills introduced in this Congress could fit the bill. Richards asserted:

In sum, the Schrems litigation is a creature of distrust, and while it has created problems for American law and commerce, it has also created a great opportunity. That opportunity lies before this Committee –the chance to regain American leadership in global privacy and data protection by passing a comprehensive law that provides appropriate safeguards, enforceable rights, and effective legal remedies for consumers. I believe that the way forward can not only safeguard the ability to share personal data across the Atlantic, but it can do so in a way that builds trust between the United States and our European trading partners and between American companies and their American and European customers. I believe that there is a way forward, but it requires us to recognize that strong, clear, trust-building rules are not hostile to business interest, that we need to push past the failed system of “notice and choice,” that we need to preserve effective consumer remedies and state-level regulatory innovation, and seriously consider a duty of loyalty. In that direction, I believe, lies not just consumer protection, but international cooperation and economic prosperity.

Georgia Tech University Professor Peter Swire explained that the current circumstances make the next Congress the best possibility in memory to enact privacy legislation because of the need for a Privacy Shield replacement, passage of the new California Privacy Rights Act (Proposition 24), and the Biden Administration’s likely support for such legislation. Swire made the following points:

  1. The European Data Protection Board in November issued draft guidance with an extremely strict interpretation of how to implement the Schrems II case.
  2. The decision in Schrems II is based on EU constitutional law. There are varying current interpretations in Europe of what is required by Schrems II, but constitutional requirements may restrict the range of options available to EU and U.S. policymakers.
  3. Strict EU rules about data transfers, such as the draft EDPB guidance, would appear to result in strict data localization, creating numerous major issues for EU- and U.S.-based businesses, as well as affecting many online activities of EU individuals.
  4. Along with concerns about lack of individual redress, the CJEU found that the EU Commission had not established that U.S. surveillance was “proportionate” in its scope and operation. Appendix 2 to this testimony seeks to contribute to an informed judgment on proportionality, by cataloguing developments in U.S. surveillance safeguards since the Commission’s issuance of its Privacy Shield decision in 2016.
  5. Negotiating an EU/U.S. adequacy agreement is important in the short term.
  6. A short-run agreement would assist in creating a better overall long-run agreement or agreements.
  7. As the U.S. considers its own possible legal reforms in the aftermath of Schrems II, it is prudent and a normal part of negotiations to seek to understand where the other party – the EU – may have flexibility to reform its own laws.
  8. Issues related to Schrems II have largely been bipartisan in the U.S., with substantial continuity across the Obama and Trump administrations, and expected as well for a Biden administration.
  9. Passing comprehensive privacy legislation would help considerably in EU/U.S. negotiations.
  10. This Congress may have a unique opportunity to enact comprehensive commercial privacy legislation for the United States.



EDPB Publishes Schrems II Recommendations; EU Parliament Issues Draft SCC Revisions

The EU takes steps to respond to the CJEU’s striking down of the EU-US Privacy Shield by augmenting SCCs and other transfer mechanisms.

The European Data Protection Board (EDPB) published recommendations for entities exporting and importing the personal data of European Union (EU) residents in light of the court decision striking down the adequacy decision that allowed transfers to the United States (U.S.). The EDPB noted that alternate mechanisms like standard contractual clauses (SCC) may still be used for transfers to nations without adequate protections of EU rights, provided that supplemental measures are used. It should be noted that the EDPB said supplemental measures will be needed for any transfers to nations that do not guarantee the same level of rights as the EU, which would include transfers under Binding Corporate Rules (BCR). While the EDPB’s recommendations will undoubtedly prove persuasive with the Supervisory Authorities (SA), each SA will ultimately assess whether the mechanisms and supplementary measures used by entities comport with the General Data Protection Regulation (GDPR) and the EU’s Charter of Fundamental Rights.

In a summary of its decision in Data Protection Commissioner v. Facebook Ireland and Maximillian Schrems, Case C-311/18 (Schrems II), the Court of Justice of the European Union (CJEU) explained:

The GDPR provides that the transfer of such data to a third country may, in principle, take place only if the third country in question ensures an adequate level of data protection. According to the GDPR, the Commission may find that a third country ensures, by reason of its domestic law or its international commitments, an adequate level of protection. In the absence of an adequacy decision, such transfer may take place only if the personal data exporter established in the EU has provided appropriate safeguards, which may arise, in particular, from standard data protection clauses adopted by the Commission, and if data subjects have enforceable rights and effective legal remedies. Furthermore, the GDPR details the conditions under which such a transfer may take place in the absence of an adequacy decision or appropriate safeguards.

Ultimately, the CJEU found the U.S. lacks the requisite safeguards needed under EU law, and so the general means of transferring the data of EU citizens from the EU to the U.S. was essentially struck down. This marked the second time in the last five years such an agreement had been found to violate EU law. However, the CJEU left open the question of whether SCCs may permit the continued exporting of EU personal data into the U.S. for companies like Facebook, Google, and many, many others. Consequently, there has been no small amount of interpreting and questioning of whether this may be a way for the trans-Atlantic data flow worth billions, perhaps even trillions, of dollars to continue. And yet, the CJEU seemed clear that additional measures would likely be necessary. Indeed, the CJEU asserted “[c]ontrollers and processors should be encouraged to provide additional safeguards via contractual commitments that supplement standard protection clauses” and “[i]n so far as those standard data protection clauses cannot, having regard to their very nature, provide guarantees beyond a contractual obligation to ensure compliance with the level of protection required under EU law, they may require, depending on the prevailing position in a particular third country, the adoption of supplementary measures by the controller in order to ensure compliance with that level of protection.”

In “Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data,” the EDPB explained the genesis and rationale for the document:

  • The GDPR or the [CJEU] do not define or specify the “additional safeguards”, “additional measures” or “supplementary measures” to the safeguards of the transfer tools listed under Article 46.2 of the GDPR that controllers and processors may adopt to ensure compliance with the level of protection required under EU law in a particular third country.
  • The EDPB has decided, on its own initiative, to examine this question and to provide controllers and processors, acting as exporters, with recommendations on the process they may follow to identify and adopt supplementary measures. These recommendations aim at providing a methodology for the exporters to determine whether and which additional measures would need to be put in place for their transfers. It is the primary responsibility of exporters to ensure that the data transferred is afforded in the third country a level of protection essentially equivalent to that guaranteed within the EU. With these recommendations, the EDPB seeks to encourage consistent application of the GDPR and the Court’s ruling, pursuant to the EDPB’s mandate.

Broadly speaking, whether SCCs and supplemental measures will pass muster under the GDPR will be determined on a case-by-case basis. The EDPB did not offer much in the way of bright line rules. Indeed, it will be up to SAs to determine if transfers to nations like the U.S. are possible under the GDPR, meaning these recommendations may shed more light on this central question without deciding it. One wonders, as a practical matter, if the SAs will have the capacity, resources, and will to police SCCs to ensure the GDPR and Charter are being met.

Nonetheless, the EDPB stressed the principle of accountability, under which controllers that export personal data must ensure that, whatever mechanism and supplemental measures govern a data transfer, the data receive the same protection they would in the EU. The EDPB made the point that EU protections travel with the data, and should EU personal data make its way to a country where appropriate protection is not possible, the transfer violates the GDPR. Moreover, these recommendations pertain to both public and private transfers of EU data to private sector entities outside the EU.

These recommendations work like a decision tree, with exporters asking themselves a series of questions to determine whether they must use supplemental measures. This may prove a resource-intensive process, for exporters will need to map all transfers (i.e., know exactly where the data are going). The exporter must also understand the laws and practices of the third nation in order to put in place appropriate measures, where that is possible, to meet the EU’s data protection standards.
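The decision-tree character of the process can be condensed into a rough sketch. This is an illustrative model only, not the EDPB's text: the class, its field names, and the ordering of the checks are invented simplifications of the step-wise assessment described above.

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    """One mapped data flow to a third country (hypothetical model)."""
    destination: str
    has_adequacy_decision: bool       # covered by an Article 45 adequacy decision
    uses_article_46_tool: bool        # e.g., SCCs or BCRs are in place
    third_country_law_adequate: bool  # local law/practice respects EU-equivalent rights
    supplementary_measures_effective: bool

def may_transfer(t: Transfer) -> bool:
    """Sketch of the exporter's step-wise assessment for a single transfer."""
    if t.has_adequacy_decision:
        return True                   # free flow; no further analysis needed
    if not t.uses_article_46_tool:
        return False                  # no transfer tool at all
    if t.third_country_law_adequate:
        return True                   # the tool alone suffices
    # Otherwise the tool must be paired with effective supplementary measures;
    # if none exist, the transfer must be suspended or not started.
    return t.supplementary_measures_effective
```

In this sketch, a transfer to a country with problematic surveillance law and no effective supplementary measures comes out as impermissible, which mirrors the EDPB's bottom line.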

Reading between the lines leads one to conclude that data exporters may not send personal data to the U.S., for its federal surveillance regime is not “necessary and proportionate,” at least in the EU’s view. The U.S. lacks judicial redress should a U.S. national, let alone a foreign national, object to the sweeping surveillance. The U.S. also has neither a national data protection law nor a dedicated data protection authority. These hints seem also to convey the EDPB’s view of the legal reforms needed in the U.S. before an adequacy decision would pass muster with the CJEU.

The EDPB said it was still evaluating how Schrems II affects the use of BCR and ad hoc contractual clauses, two of the other alternate means of transferring EU personal data in the absence of an adequacy agreement.

Nevertheless, in an annex, the EDPB provided examples of supplementary measures that may be used, depending on the circumstances, such as “flawlessly implemented” encryption and pseudonymization. However, the EDPB discusses these in the context of different scenarios and calls for more conditions than just those two. Moreover, the EDPB categorically rules out two scenarios as inadequate: “Transfer to cloud services providers or other processors which require access to data in the clear” and “Remote access to data for business purposes.”
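To make the pseudonymization idea concrete, here is a minimal sketch (not taken from the EDPB's annex; the field names and record are invented). The core of the technique is that direct identifiers are replaced with keyed tokens before export, and the key never leaves the exporter's control in the EU:

```python
import hashlib
import hmac
import secrets

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed token (HMAC-SHA256).
    Without the key, the token cannot be reversed to the identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The secret key stays with the EU exporter; only tokens leave the EU.
key = secrets.token_bytes(32)

record = {"email": "alice@example.eu", "purchases": 3}
exported = {
    "subject": pseudonymize(record["email"], key),  # token, not the email
    "purchases": record["purchases"],
}
# `exported` contains no direct identifier; re-identification requires
# the key, which remains in the exporter's hands inside the EU.
```

Whether such a measure actually suffices depends on the EDPB's scenario-specific conditions, e.g., that the remaining data cannot be attributed to an identified or identifiable person even when combined with other information.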

The EDPB also issued an update to guidance published after Maximilian Schrems’ first lawsuit resulted in the striking down of the Safe Harbor transfer agreement. The forerunner to the EDPB, the Article 29 Working Party, had drafted and released the European Essential Guarantees, and so, in light of Schrems II, the EDPB updated and published “Recommendations 02/2020 on the European Essential Guarantees for surveillance measures” “to provide elements to examine, whether surveillance measures allowing access to personal data by public authorities in a third country, being national security agencies or law enforcement authorities, can be regarded as a justifiable interference or not” with fundamental EU rights and protections. As the EDPB explains, these recommendations are intended to help data controllers and exporters determine whether other nations have protections and processes in place equivalent to those of the EU vis-à-vis their surveillance programs. The EDPB stressed that these are the essential guarantees, and other features and processes may be needed for a determination of lawfulness under EU law.

The EDPB formulated the four European Essential Guarantees:

A. Processing should be based on clear, precise and accessible rules

B. Necessity and proportionality with regard to the legitimate objectives pursued need to be demonstrated

C. An independent oversight mechanism should exist

D. Effective remedies need to be available to the individual
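The four guarantees lend themselves to a simple checklist. The sketch below is a hypothetical illustration, not an EDPB artifact: the dictionary keys are invented labels, and the guarantee texts paraphrase the list above.

```python
# Invented keys; descriptions paraphrase the four European Essential Guarantees.
GUARANTEES = {
    "clear_rules": "Processing based on clear, precise and accessible rules",
    "necessity_proportionality": "Necessity and proportionality with regard to "
                                 "the legitimate objectives pursued demonstrated",
    "independent_oversight": "An independent oversight mechanism exists",
    "effective_remedies": "Effective remedies available to the individual",
}

def failed_guarantees(regime):
    """Return the descriptions of the guarantees a third-country
    surveillance regime fails to meet, given a dict of findings."""
    return [text for key, text in GUARANTEES.items() if not regime.get(key)]
```

A regime that fails any guarantee would, on the EDPB's logic, not support a finding of essential equivalence without further safeguards.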

The European Commission (EC) has also released for comment a draft revision of the SCCs for transfers of personal data to countries outside the EU. The EC is accepting comments and input until 10 December. It may be no accident that the EDPB and EC acted more or less in unison to address the practical and statutory changes needed to effectuate the CJEU’s striking down of the EU-US Privacy Shield. Whatever the case, the EC released draft legislative language and, in an Annex, actual contract language for use by controllers and processors in the form of modules designed for a variety of common circumstances (e.g., transfers from a controller to another controller or from a controller to a processor). However, like the EDPB, the EC stressed that the SCCs form a floor, and controllers, processors, and other parties are free to add language so long as it does not contradict the SCCs or prejudice the rights they protect.

In the implementing decision, the EC asserted:

the standard contractual clauses needed to be updated in light of new requirements in Regulation (EU) 2016/679. Moreover, since the adoption of these decisions, important developments have taken place in the digital economy, with the widespread use of new and more complex processing operations often involving multiple data importers and exporters, long and complex processing chains as well as evolving business relationships. This calls for a modernisation of the standard contractual clauses to better reflect those realities, by covering additional processing and transfer situations and to use a more flexible approach, for example with respect to the number of parties able to join the contract.

The EC continued:

The standard contractual clauses set out in the Annex to this Decision may be used by a controller or a processor in order to provide appropriate safeguards within the meaning of Article 46(1) of Regulation (EU) 2016/679 for the transfer of personal data to a processor or a controller established in a third country. This also includes the transfer of personal data by a controller or processor not established in the Union, to the extent that the processing is subject to Regulation (EU) 2016/679 pursuant to Article 3(2) thereof, because it relates to the offering of goods or services to data subjects in the Union or the monitoring of their behaviour as far as their behaviour takes place within the Union.

The EC explained the design and intent of the SCC language in the Annex:

  • The standard contractual clauses set out in the Annex to this Decision combine general clauses with a modular approach to cater for various transfer scenarios and the complexity of modern processing chains. In addition to the general clauses, controllers and processors should select the module applicable to their situation, which makes it possible to tailor their obligations under the standard contractual clauses to their corresponding role and responsibilities in relation to the data processing at issue. It should be possible for more than two parties to adhere to the standard contractual clauses. Moreover, additional controllers and processors should be allowed to accede to the standard contractual clauses as data exporters or importers throughout the life cycle of the contract of which those clauses form a part.
  • These Clauses set out appropriate safeguards, including enforceable data subject rights and effective legal remedies, pursuant to Article 46(1), and Article 46 (2)(c) of Regulation (EU) 2016/679 and, with respect to data transfers from controllers to processors and/or processors to processors, standard contractual clauses pursuant to Article 28(7) of Regulation (EU) 2016/679, provided they are not modified, except to add or update information in the Annexes. This does not prevent the Parties from including the standard contractual clauses laid down in this Clauses in a wider contract, and to add other clauses or additional safeguards provided that they do not contradict, directly or indirectly, the standard contractual clauses or prejudice the fundamental rights or freedoms of data subjects. These Clauses are without prejudice to obligations to which the data exporter is subject by virtue of the Regulation (EU) 2016/679
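The modular approach described above can be pictured as a lookup on the parties' roles. The module labels below are shorthand for the draft's four transfer scenarios (controller-to-controller, controller-to-processor, processor-to-processor, and processor-to-controller); the function itself is purely illustrative and not part of the EC's text.

```python
# Shorthand labels for the four module scenarios in the draft SCCs.
SCC_MODULES = {
    ("controller", "controller"): "Module One",
    ("controller", "processor"):  "Module Two",
    ("processor",  "processor"):  "Module Three",
    ("processor",  "controller"): "Module Four",
}

def select_module(exporter_role: str, importer_role: str) -> str:
    """Pick the SCC module matching the exporter's and importer's roles."""
    return SCC_MODULES[(exporter_role, importer_role)]
```

In practice, the parties would select the applicable module, attach the general clauses, and complete the Annexes, with additional parties able to accede over the life of the contract.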

In October, the Trump Administration released a crib sheet it hopes U.S. multinationals can use to argue to SAs that SCCs, BCRs, and U.S. law satisfy the CJEU ruling that struck down the EU-U.S. Privacy Shield. The Trump Administration is basically arguing: sure, we spy, but most EU citizens’ data is not surveilled, and EU governments themselves often share in the proceeds of the surveillance we conduct. Moreover, there are plenty of safeguards and means of redress in the U.S. system because, you know, we say so. It is unlikely this analysis will prove persuasive in the EU, especially since these broad arguments address neither the criticisms the EU has leveled at U.S. surveillance and privacy rights under Privacy Shield nor the basis for the CJEU’s ruling.

Earlier this month, the European Data Protection Supervisor (EDPS) published a strategy detailing how EU agencies and bodies should comply with the CJEU ruling that struck down the EU-US Privacy Shield and threw into question the compliance of SCCs with EU law and the GDPR. The EDPS has already started working with EU institutions, bodies, offices and agencies (EUIs) on the process of determining whether their transfers of the personal data of people in the EU to the U.S. meet the CJEU’s judgement.


Photo by Anthony Beck from Pexels

Schrems II Guidance

The agency that oversees the data protection of EU agencies has laid out its view on how they should comply with the GDPR after the EU-US Privacy Shield.

The European Data Protection Supervisor (EDPS) has published a strategy detailing how European Union (EU) agencies and bodies should comply with the Court of Justice of the European Union’s (CJEU) ruling that struck down the EU-United States (U.S.) Privacy Shield (aka Schrems II) and threw into question the compliance of Standard Contractual Clauses (SCC) with EU law and the General Data Protection Regulation (GDPR). The EDPS has already started working with EU institutions, bodies, offices and agencies (EUIs) on the process of determining whether their transfers of the personal data of people in the EU to the U.S. meet the CJEU’s judgement.

The EDPS makes clear most of the transfers by EUIs to the U.S. are on account of using U.S. information and communications technology (ICT) products and services, meaning U.S. multinationals like Microsoft, Google, and others. The EDPS has proposed a strategy that would first identify risks and then move to address them. It bears stressing that this strategy applies only to EUIs and not private sector controllers, but it is likely the European Data Protection Board (EDPB) and EU DPAs will take notice of the EDPS’ strategy on how to comply with Schrems II. However, the EDPS acknowledges that it is obliged to follow the EDPB’s lead and vows to change its strategy upon issuance of EDPB guidance on Schrems II and SCC. And yet, the EDPS explained that EUIs will need to report back on how they are implementing the steps in the strategy, particularly on those ongoing transfers to countries like the U.S. that have inadequate data protection laws, those transfers that have been suspended, and any transfers being conducted per derogations in the GDPR. On the basis of this feedback, the EDPS will “establish long-term compliance” in 2021.

It seems a bit backwards for the EDPS to task the EUIs with determining which transfers under SCC may proceed under the GDPR when it might be a more efficient process for the EDPS to take on this job directly and rule on the ICT services and providers, permitting all EUIs to understand which comply with EU law and which do not. However, the EDPS is exploring the possibility of determining the sufficiency of data protection in other nations, most likely, first and foremost the U.S., and then working with EU stakeholders to coordinate compliance with the CJEU’s ruling and the GDPR.

The EDPS claimed the CJEU “clarified the roles and responsibilities of controllers, recipients of data outside of the European Economic Area (EEA) (data importers) and supervisory authorities…[and] ruled the following:

  • The Court invalidated the Privacy Shield adequacy Decision and confirmed that the SCCs were valid providing that they include effective mechanisms to ensure compliance in practice with the “essentially equivalent” level of protection guaranteed within the EU by the General Data Protection Regulation (GDPR). Transfers of personal data pursuant to the SCCs are suspended or prohibited in the event of a breach of such clauses, or in case it is impossible to honour them.
  • The SCCs for transfers may then require, depending on the prevailing position of a particular third country, the adoption of supplementary measures by the controller in order to ensure compliance with the level of protection guaranteed within the EU.
  • In order to continue these data transfers, the Court stresses that before transferring personal data to a third country, it is the data exporters’ and data importers’ responsibility to assess whether the legislation of the third country of destination enables the data importer to comply with the guarantees provided through the transfer tools in place. If this is not the case, it is also the exporter and the importer’s duty to assess whether they can implement supplementary measures to ensure an essentially equivalent level of protection as provided by EU law. Should data exporters, after taking into account the circumstances of the transfer and possible supplementary measures, conclude that appropriate safeguards cannot be ensured, they are required to suspend or terminate the transfer of personal data. In case the exporter intends nevertheless to continue the transfer of personal data, they must notify their competent SA.
  • The competent supervisory authority is required to suspend or prohibit a transfer of personal data to a third country pursuant to the SCCs if, when considering the circumstances of that transfer, those clauses are not or cannot be complied with in the third country of destination and the protection of the data transferred under EU law cannot be ensured by other means.

EDPS explained:

The EDPS’ report on the 2017 survey entitled, Measuring compliance with data protection rules in EU institutions, provides evidence that there has been a significant rise in the number of transfers related to the core business of EUIs in recent years. This number is even higher now, due to the increased use of ICT services and social media. The EDPS’ own-initiative investigation into the use of Microsoft products and services by EUIs and subsequent recommendations in that regard confirms the importance to ensure a level of protection that is essentially equivalent as the one guaranteed within the EU, as provided by relevant data protection laws, to be interpreted in accordance with the EU Charter. In this context, the EDPS has already flagged a number of linked issues concerning sub-processors, data location, international transfers and the risk of unlawful disclosure of data – issues that the EUIs were unable to control and ensure proper safeguards to protect data that left the EU/EEA. The issues we raised in our investigation report are consistent with the concerns expressed in the Court’s Judgment, which we are assessing in relation to any processor agreed to by EUIs.

Regarding data flows to the U.S. quite possibly in violation of the GDPR and Schrems II, the EDPS stated:

  • Moreover, a majority of data flows to processors most probably happen because EUIs use service providers that are either based in the U.S. or that use sub-processors based in the U.S., in particular for ICT services, which fall under the scope of U.S. surveillance laws. Such companies have primarily relied on the Privacy Shield adequacy Decision to transfer personal data to the U.S. and the use of SCCs as a secondary measure.
  • Therefore, the present Strategy emphasizes the priority to address transfers of data by EUIs or on their behalf in the context of controller-to-processor contracts and/or processor-to-sub-processor contracts, in particular towards the United States.

The EDPS is calling for “a twofold approach as the most appropriate:

(1) Identify urgent compliance and/or enforcement actions through a risk based approach for transfers towards the U.S. presenting high risks for data subjects and in parallel

(2) Provide guidance and pursue mid-term case-by-case EDPS compliance and/or enforcement actions for all transfers towards the U.S. or other third countries.


Image by Pete Linforth from Pixabay

Further Reading, Other Developments, and Coming Events (5 November)

Further Reading

  • “Confusion and conflict stir online as Trump claims victory, questions states’ efforts to count ballots” By Craig Timberg, Tony Romm, Isaac Stanley-Becker and Drew Harwell — Washington Post. When the post-mortem on the 2020 Election is written, it is likely to be the case that foreign disinformation was not the primary threat. Rather, it may be domestic interference given the misinformation, disinformation, and lies circulating online despite the best efforts of social media platforms to label, take down, and block such material. However, if this article is accurate, much of it is coming from the right wing, including the President.
  • “Polls close on Election Day with no apparent cyber interference” By Kevin Collier and Ken Dilanian — NBC News. Despite crowing from officials like United States (U.S.) Department of Homeland Security Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs and U.S. Cyber Command head General Paul Nakasone, it is not altogether clear that U.S. efforts, especially publicized offensive operations, are the reason there were no significant cyber attacks on Election Day. However, officials are cautioning the country is not out of the woods as vote counting is ongoing and opportunities for interference and mischief remain.
  • “Russian hackers targeted California, Indiana Democratic parties” By Raphael Satter, Christopher Bing, Joel Schectman — Reuters. Apparently, Microsoft helped foil Russian efforts to hack two state Democratic parties and think tanks, some of which are allied with the Democratic party. However, it appears none of the attempts, which occurred earlier this year, were successful. The article suggests, but does not claim, that increased cyber awareness and defenses foiled most of the attempts by the hacking group Fancy Bear.
  • “LexisNexis to Pay $5 Million Class Action Settlement for Selling DMV Data” By Joseph Cox — Vice. Data broker LexisNexis is settling a suit alleging it violated the Drivers’ Privacy Protection Act (DPPA) by obtaining Department of Motor Vehicles (DMV) records on people for a purpose not authorized under the law. Vice has written a number of articles on the practices of DMVs selling people’s data, which has caught the attention of at least two Democratic Members of Congress who have said they will introduce legislation to tighten the circumstances under which these data may be shared or sold.
  • “Spy agency ducks questions about ‘back doors’ in tech products” By Joseph Menn — Reuters. Senator Ron Wyden (D-OR) is demanding that the National Security Agency (NSA) reveal the guidelines put in place after former NSA contractor Edward Snowden revealed the agency’s practice of placing backdoors in United States (U.S.) technology that it could use in the future. This practice allowed the NSA to sidestep warrant requirements, but it also may have weakened technology that was later exploited by other governments, as the People’s Republic of China (PRC) allegedly did to Juniper in 2015. After Snowden divulged the NSA’s practice, reforms were supposedly put in place but never shared with Congress.

Other Developments

  • Australia’s Joint Committee on Intelligence and Security issued a new report into Australia’s mandatory data retention regime that makes 22 recommendations to “increase transparency around the use of the mandatory data retention and increase the threshold for when data can be accessed…[and] reduce the currently very broad access to telecommunications data under the Telecommunications Act.” The committee stated “[t]he report’s 22 recommendations include:
    • access to data kept under the mandatory data retention regime will only be available under specific circumstances
    • the Department of Home Affairs develop guidelines for data collection including an ability for enforcement agencies and Home Affairs to produce reports to oversight agencies or Parliament when requested
    • the repeal of section 280(1)(b) of the Telecommunications Act which allows for access where ‘disclosure or use is required or authorised by or under law.’ It is the broad language in this subsection that has allowed the access that concerned the committee
    • The committee explained:
      • The Parliamentary Joint Committee on Intelligence and Security (the Committee) is required by Part 5-1A of the Telecommunications (Interception and Access) Act 1979 (TIA Act) to undertake a review of the mandatory data retention regime (MDRR).
      • The mandatory data retention regime is a legislative framework which requires carriers, carriage service providers and internet service providers to retain a defined set of telecommunications data for two years, ensuring that such data remains available for law enforcement and national security investigations.
  • Senators Ron Wyden (D-OR) and Sherrod Brown (D-OH) wrote a letter “to trade associations urging them to take immediate action to ensure their members are not complicit in China’s state-directed human rights abuses, including by relocating production from the Xinjiang Uyghur Autonomous Region.” They stated:
    • We write to express our concerns over reports that the industries and companies that the U.S. Chamber of Commerce represents have supply chains that have been implicated in the state-sanctioned forced labor of Uyghurs and other Muslim groups in the Xinjiang Uyghur Autonomous Region of China (XUAR) and in sites where Uyghurs have been relocated. The decision to operate or contract with production facilities overseas must be accompanied by high standards of supply chain accountability and transparency to ensure that no company’s products are made with forced labor. We urge your members to take immediate action to ensure goods manufactured for them are not complicit in China’s state-directed human rights abuses, including by relocating production from the XUAR. In addition, we ask your members to take critical, comprehensive steps to achieve the supply chain integrity and transparency American consumers and workers deserve. It is past time for American multinational companies to be part of the solution, not part of the problem, on efforts to eradicate forced labor and end human rights abuses against workers in China.
  • The Federal Trade Commission (FTC) finalized a settlement alleging violations of the now struck down European Union-United States Privacy Shield. In its press release, the agency explained it had “alleged that NTT Global Data Centers Americas, Inc. (NTT), formerly known as RagingWire Data Centers, Inc., claimed in its online privacy policy and marketing materials that the company participated in the Privacy Shield framework and complied with the program’s requirements.” The FTC noted “the company’s certification lapsed in January 2018 and it failed to comply with certain Privacy Shield requirements while it was a participant in the framework.” The FTC stated:
    • Under the settlement, the company, among other things, is prohibited not just from misrepresenting its compliance with or participation in the Privacy Shield framework, but also any other privacy or data security program sponsored by the government or any self-regulatory or standard-setting organization. The company also must continue to apply the Privacy Shield requirements or equivalent protections to personal information it collected while participating in the framework or return or delete the information.
    • Although the European Court of Justice invalidated the Privacy Shield framework in July 2020, that decision does not affect the validity of the FTC’s decision and order relating to NTT’s misrepresentations about its participation in and compliance with the framework. The framework allowed participants to transfer data legally from the European Union to the United States.
  • The Commission nationale de l’informatique et des libertés (CNIL) issued a press release, explaining that France’s “Council of State acknowledges the existence of a risk of data transfer from the Health Data Hub to the United States and requests additional safeguards.” CNIL stated it “will advise the public authorities on appropriate measures and will ensure, for research authorization related to the health crisis, that there is a real need to use the platform.” This announcement follows from the Court of Justice of the European Union (CJEU) striking down the adequacy decision underpinning the European Union-United States Privacy Shield (aka Schrems II). CNIL summarized the “essentials:”
    • Fearing that some data might be transferred to the United States, some claimants lodged an appeal with the Council of State requesting the suspension of the “Health Data Hub”, the new platform designed to ultimately host all the health data of people who receive medical care in France.
    • The Court considers that a risk cannot be excluded with regard to the transfer of health data hosted on the Health Data Hub platform to the US intelligence.
    • Because of the usefulness of the Health Data Hub in managing the health crisis, it refuses to suspend the operation of the platform.
    • However, it requires the Health Data Hub to strengthen its contract with Microsoft on a number of points and to seek additional safeguards to better protect the data it hosts.
    • It is the responsibility of the CNIL to ensure, for authorization of research projects on the Health Data Hub in the context of the health crisis, that the use of the platform is technically necessary, and to advise public authorities on the appropriate safeguards.
    • These measures will have to be taken while awaiting a lasting solution that will eliminate any risk of access to personal data by the American authorities, as announced by the French Secretary of State for the Digital Agenda.
  • The United Kingdom’s (UK) National Cyber Security Centre (NCSC) has published its annual review that “looks back at some of the key developments and highlights from the NCSC’s work between 1 September 2019 and 31 August 2020.” In the foreword, new NCSC Chief Executive Officer Lindy Cameron provided an overview:
    • Expertise from across the NCSC has been surged to assist the UK’s response to the pandemic. More than 200 of the 723 incidents the NCSC handled this year related to coronavirus and we have deployed experts to support the health sector, including NHS Trusts, through cyber incidents they have faced. We scanned more than one million NHS IP addresses for vulnerabilities and our cyber expertise underpinned the creation of the UK’s coronavirus tracing app.
    • An innovative approach to removing online threats was created through the ‘Suspicious Email Reporting Service’ – leading to more than 2.3 million reports of malicious emails being flagged by the British public. Many of the 22,000 malicious URLs taken down as a result related to coronavirus scams, such as pretending to sell PPE equipment to hide a cyber attack. The NCSC has often been described as world-leading, and that has been evident over the last 12 months. Our innovative ‘Exercise in a Box’ tool, which supports businesses and individuals to test their cyber defences against realistic scenarios, was used in 125 countries in the last year.
    • Recognising the change in working cultures due to the pandemic, our team even devised a specific exercise on remote working, which has helped organisations to understand where current working practices may be presenting alternative cyber risks. Proving that cyber really is a team sport, none of this would be possible without strong partnerships internationally and domestically. We worked closely with law enforcement – particularly the National Crime Agency – and across government, industry, academia and, of course, the UK public.
    • The NCSC is also looking firmly ahead to the future of cyber security, as our teams work to understand both the risks and opportunities to the UK presented by emerging technologies. A prominent area of work this year was the NCSC’s reviews of high-risk vendors such as Huawei – and in particular the swift and thorough review of US sanctions against Huawei. The NCSC gave advice on the impact these changes would have in the UK, publishing a summary of the advice given to government as well as timely guidance for operators and the public.
  • Australia’s Department of Industry, Science, Energy and Resources has put out for comment a discussion paper titled “An AI Action Plan for all Australians” to “shape Australia’s vision for artificial intelligence (AI).” The department said it “is now consulting on the development of a whole-of-government AI Action Plan…[that] will help us maximise the benefits of AI for all Australians and manage the potential challenges.” The agency said the Action Plan “will help to:
    • ensure the development and use of AI in Australia is responsible
    • coordinate government policy and national capability under a clear, common vision for AI in Australia
    • explore the actions needed for our AI future
    • The department explained:
      • Building on Australia’s AI Ethics Framework, the Australian Government is developing an AI Action Plan. It is a key component of the government’s vision to be a leading digital economy by 2030. It builds on almost $800 million invested in the 2020-21 Budget to enable businesses to take advantage of digital technologies to grow their businesses and create jobs. It is an opportunity to leverage AI as part of the Australian Government’s economic recovery plan. We must work together to ensure all Australians can benefit from advances in AI.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.
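The band math in the 5.9 GHz agenda item is easy to check: the 5.850-5.925 GHz band is 75 megahertz wide, and the proposal splits it into a 45 megahertz unlicensed segment and a 30 megahertz ITS segment. A minimal Python sketch verifying the split follows; the segment labels and helper functions are illustrative, not from the FCC item itself.

```python
# Figures come from the FCC's tentative agenda item (ET Docket No. 19-138);
# the helpers below just check that the two segments tile the band.

BAND_START_MHZ = 5850.0
BAND_END_MHZ = 5925.0

# (label, start MHz, end MHz) — labels are illustrative
segments = [
    ("unlicensed operations", 5850.0, 5895.0),  # 45 MHz
    ("ITS (C-V2X) service", 5895.0, 5925.0),    # 30 MHz
]

def widths_mhz(segs):
    """Return the width in MHz of each named segment."""
    return {name: end - start for name, start, end in segs}

def tiles_band(segs, band_start, band_end):
    """True if the segments are contiguous and exactly cover the band."""
    segs = sorted(segs, key=lambda s: s[1])
    if segs[0][1] != band_start or segs[-1][2] != band_end:
        return False
    return all(a[2] == b[1] for a, b in zip(segs, segs[1:]))

if __name__ == "__main__":
    print(widths_mhz(segments))  # {'unlicensed operations': 45.0, 'ITS (C-V2X) service': 30.0}
    print(tiles_band(segments, BAND_START_MHZ, BAND_END_MHZ))  # True
```

The 45/30 split is what DeFazio and Graves object to in the letter discussed below: more than half of the 75 megahertz currently reserved for transportation safety would move to unlicensed use.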

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (3 November)

Further Reading

  • “How Facebook and Twitter plan to handle election day disinformation” By Sam Dean — The Los Angeles Times; “How Big Tech is planning for election night” By Heather Kelly — The Washington Post; and “What to Expect From Facebook, Twitter and YouTube on Election Day” By Mike Isaac, Kate Conger and Daisuke Wakabayashi — The New York Times. Social media platforms have prepared for today and will use a variety of measures to combat lies, disinformation, and misinformation from sources foreign and domestic. Incidentally, my read is that these tech companies are more worried about problematic domestic content than foreign content, as measured by resource allocation.
  • “QAnon received earlier boost from Russian accounts on Twitter, archives show” By Joseph Menn — Reuters. Researchers have delved into Twitter’s archive of taken down or suspended Russian accounts and are now claiming that the Russian Federation was pushing and amplifying the QAnon conspiracy in the United States (U.S.) as early as November 2017. This revised view of Russian involvement differs from conclusions reached in August that Russia played a small but significant role in the proliferation of the conspiracy.
  • “Facial recognition used to identify Lafayette Square protester accused of assault” By Justin Jouvenal and Spencer S. Hsu — The Washington Post. When police were firing pepper balls and rolling gas canisters towards protestors in Lafayette Park in Washington, D.C. on 1 June 2020, a protestor was filmed assaulting two officers. A little-known and not publicly revealed facial recognition technology platform available to many federal, state, and local law enforcement agencies in the Capital area resulted in a match with the footage. Now, a man stands accused of crimes during a Black Lives Matter march, and civil liberties and privacy advocates are objecting to the use of the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) on a number of grounds, including the demonstrated weakness of these systems in accurately identifying people of color, the fact that it has been used but not disclosed to a number of defendants, and the potential chilling effect it will have on people going to protests. Law enforcement officials claim there are strict privacy and process safeguards and that an identification alone cannot be used as the basis of an arrest.
  • “CISA’s political independence from Trump will be an Election Day asset” By Joseph Marks — The Washington Post. The United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has established its bona fides with nearly all stakeholders inside and outside the U.S. government since its creation in 2018. Many credit its Director, Christopher Krebs, and many also praise the agency for conveying information and tips about cyber and other threats without incurring the ire of a White House and President antithetical to any hint of Russian hacking or interference in U.S. elections. However, today and the next few days may prove the agency’s mettle, as it will be in uncharted territory, trying to tamp down fear about unfounded rumors and looking to discredit misinformation and disinformation in close to real time.
  • “Trump allies, largely unconstrained by Facebook’s rules against repeated falsehoods, cement pre-election dominance” By Isaac Stanley-Becker and Elizabeth Dwoskin — The Washington Post. Yet another article showing that if Facebook has a bias, it is against liberal viewpoints and content, for conservative material is allowed even when it breaks the rules of the platform. Current and former employees and an analysis support this finding. The Trump family has been among those who have benefitted from the kid gloves used by the company regarding posts that would likely have drawn consequences had they come from other users. Smaller conservative outlets and less prominent conservative figures, however, have faced consequences for content that violates Facebook’s standards.

Other Developments

  • The European Union’s (EU) Executive Vice-President Margrethe Vestager made a speech titled “Building trust in technology,” in which she previewed one long-awaited draft EU law on technology and another to address antitrust and anti-competitive practices of large technology companies. Vestager stated “in just a few weeks, we plan to publish two draft laws that will help to create a more trustworthy digital world.” Both drafts are expected on 2 December and represent key pieces of the new EU leadership’s Digital Strategy, the bloc’s initiative to update EU laws to account for changes in technology since the beginning of the century. The Digital Services Act will address and reform the legal treatment of both online commerce and online content. The draft Digital Markets Act would give the European Commission (EC) more tools to combat the same competition and market dominance issues the United States (U.S.), Australia, and other nations are starting to tackle vis-à-vis companies like Apple, Amazon, Facebook, and Google.
    • Regarding the Digital Services Act, Vestager asserted:
      • One part of those rules will be the Digital Services Act, which will update the E-Commerce Directive, and require digital services to take more responsibility for dealing with illegal content and dangerous products. 
      • Those services will have to put clear and simple procedures in place, to deal with notifications about illegal material on their platforms. They’ll have to make it harder for dodgy traders to use the platform, by checking sellers’ IDs before letting them on the platform. And they’ll also have to provide simple ways for users to complain, if they think their material should not have been removed – protecting the right of legitimate companies to do business, and the right of individuals to freedom of expression. 
      • Those new responsibilities will help to keep Europeans just as safe online as they are in the physical world. They’ll protect legitimate businesses, which follow the rules, from being undercut by others who sell cheap, dangerous products. And by applying the same standards, all over Europe, they’ll make sure every European can rely on the same protection – and that digital businesses of all sizes can easily operate throughout Europe, without having to meet the costs of complying with different rules in different EU countries. 
      • The new rules will also require digital services – especially the biggest platforms – to be open about the way they shape the digital world that we see. They’ll have to report on what they’ve done to take down illegal material. They’ll have to tell us how they decide what information and products to recommend to us, and which ones to hide – and give us the ability to influence those decisions, instead of simply having them made for us. And they’ll have to tell us who’s paying for the ads that we see, and why we’ve been targeted by a certain ad.  
      • But to really give people trust in the digital world, having the right rules in place isn’t enough. People also need to know that those rules really work – that even the biggest companies will actually do what they’re supposed to. And to make sure that happens, there’s no substitute for effective enforcement.  
      • And effective enforcement is a vital part of the draft laws that we’ll propose in December. For instance, the Digital Services Act will improve the way national authorities cooperate, to make sure the rules are properly enforced, throughout the EU. Our proposal won’t change the fundamental principle, that digital services should be regulated by their home country. But it will set up a permanent system of cooperation that will help those regulators work more effectively, to protect consumers all across Europe. And it will give the EU power to step in, when we need to, to enforce the rules against some very large platforms.
    • Vestager also discussed the competition law the EC hopes to enact:
      • So, to keep our markets fair and open to competition, it’s vital that we have the right toolkit in place. And that’s what the second set of rules we’re proposing – what we call the Digital Markets Act – is for. 
      • That proposal will have two pillars. The first of those pillars will be a clear list of dos and don’ts for big digital gatekeepers, based on our experience with the sorts of behaviour that can stop markets working well. 
      • For instance, the decisions that gatekeepers take, about how to rank different companies in search results, can make or break businesses in dozens of markets that depend on the platform. And if platforms also compete in those markets themselves, they can use their position as player and referee to help their own services succeed, at the expense of their rivals. For instance, gatekeepers might manipulate the way that they rank different businesses, to show their own services more visibly than their rivals’. So, the proposal that we’ll put forward in a few weeks’ time will aim to ban this particular type of unfair self-preferencing. 
      • We also know that these companies can collect a lot of data about companies that rely on their platform – data which they can then use, to compete against those very same companies in other markets. That can seriously damage fairness in these markets – which is why our proposal aims to ban big gatekeepers from misusing their business users’ data in that way. 
      • These clear dos and don’ts will allow us to act much faster and more effectively, to tackle behaviour that we know can stop markets working well. But we also need to be ready for new situations, where digitisation creates deep, structural failures in the way our markets work.  
      • Once a digital company gets to a certain size, with the big network of users and the huge collections of data that brings, it can be very hard for anyone else to compete – even if they develop a much better service. So, we face a constant risk that big companies will succeed in pushing markets to a tipping point, sending them on a rapid, unstoppable slide towards monopoly – and creating yet another powerful gatekeeper. 
      • One way to deal with risks like this would be to stop those emerging gatekeepers from locking users into their platform. That could mean, for instance, that those gatekeepers would have to make it easier for users to switch platform, or to use more than one service. That would keep the market open for competition, by making it easier for innovative rivals to compete. But right now, we don’t have the power to take this sort of action when we see these risks arising. 
      • It can also be difficult to deal with large companies which use the power of their platforms again and again, to take over one related market after another. We can deal with that issue with a series of cases – but the risk is that we’ll always find ourselves playing catch-up, while platforms move from market to market, using the same strategies to drive out their rivals. 
      • The risk, though, is that we’ll have a fragmented system, with different rules in different EU countries. That would make it hard to tackle huge platforms that operate throughout Europe, and to deal with other problems that you find in digital markets in many EU countries. And it would mean that businesses and consumers across Europe can’t all rely on the same protection. 
      • That’s why the second pillar of the Digital Markets Act would put a harmonised market investigation framework in place across the single market, giving us the power to tackle market failures like this in digital markets, and stop new ones from emerging. That would give us a harmonised set of rules that would allow us to investigate certain structural problems in digital markets. And if necessary, we could take action to make these markets contestable and competitive.
  • Last week, California Governor Gavin Newsom (D) vetoed one of the bills sent to him to amend the “California Consumer Privacy Act” (CCPA) (AB 375). In mid-October, he signed two bills amending the CCPA, one of which will take effect only if the “California Privacy Rights Act” (CPRA) (Ballot Initiative 24) is not enacted by voters in the November election. Moreover, if the CPRA is enacted via ballot, both signed bills would likely become dead law, as the CCPA and its amendments would become moot once the CPRA takes effect in 2023.
    • Newsom vetoed AB 1138, which would have amended the recently effective “Parent’s Accountability and Child Protection Act” to bar those under the age of 13 from opening a social media account unless the platform obtained explicit consent from their parents. Moreover, “[t]he bill would deem a business to have actual knowledge of a consumer’s age if it willfully disregards the consumer’s age.” In his veto message, Newsom explained that while he agrees with the spirit of the legislation, it would create unnecessary confusion and overlap with federal law without any increase in protection for children. He signaled an openness to working with the legislature on this issue, however.
    • Newsom signed AB 1281 that would extend the carveout for employers to comply with the CCPA from 1 January 2021 to 1 January 2022. The CCPA “exempts from its provisions certain information collected by a business about a natural person in the course of the natural person acting as a job applicant, employee, owner, director, officer, medical staff member, or contractor, as specified…[and also] exempts from specified provisions personal information reflecting a written or verbal communication or a transaction between the business and the consumer, if the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from that company, partnership, sole proprietorship, nonprofit, or government agency.” AB 1281 “shall become operative only” if the CPRA is not approved by voters.
    • Newsom also signed AB 713 that would:
      • except from the CCPA information that was deidentified in accordance with specified federal law, or was derived from medical information, protected health information, individually identifiable health information, or identifiable private information, consistent with specified federal policy, as provided.
      • except from the CCPA a business associate of a covered entity, as defined, that is governed by federal privacy, security, and data breach notification rules if the business associate maintains, uses, and discloses patient information in accordance with specified requirements.
      • except information that is collected for, used in, or disclosed in research, as defined.
      • additionally prohibit a business or other person from reidentifying information that was deidentified, unless a specified exception is met.
      • beginning January 1, 2021, require a contract for the sale or license of deidentified information to include specified provisions relating to the prohibition of reidentification, as provided.
  • The Government Accountability Office (GAO) published a report requested by the chair of a House committee on a Federal Communications Commission (FCC) program that subsidizes broadband providers serving high-cost areas, typically rural or hard-to-reach places. Even though the FCC provided nearly $5 billion in 2019 through the Universal Service Fund’s (USF) high-cost program, the agency lacks the data to determine whether the goals of the program are being met. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) had asked for the assessment, which could well form the basis for future changes to how the FCC funds broadband and sets conditions on the use of those funds.
    • The GAO noted:
      • According to stakeholders GAO interviewed, FCC faces three key challenges to accomplish its high-cost program performance goals: (1) accuracy of FCC’s broadband deployment data, (2) broadband availability on tribal lands, and (3) maintaining existing fixed-voice infrastructure and attaining universal mobile service. For example, although FCC adopted a more precise method of collecting and verifying broadband availability data, stakeholders expressed concern the revised data would remain inaccurate if carriers continue to overstate broadband coverage for marketing and competitive reasons. Overstating coverage impairs FCC’s efforts to promote universal voice and broadband since an area can become ineligible for high-cost support if a carrier reports that service already exists in that area. FCC has also taken actions to address the lack of broadband availability on tribal lands, such as making some spectrum available to tribes for wireless broadband in rural areas. However, tribal stakeholders told GAO that some tribes are unable to secure funding to deploy the infrastructure necessary to make use of spectrum for wireless broadband purposes.
    • The GAO concluded:
      • The effective use of USF resources is critical given the importance of ensuring that Americans have access to voice and broadband services. Overall, FCC’s high-cost program performance goals and measures are often not conducive to providing FCC with high-quality performance information. The performance goals lack measurable or quantifiable bases upon which numeric targets can be set. Further, while FCC’s performance measures met most of the key attributes of effective measures, the measures often lacked linkage, clarity, and objectivity. Without such clarity and specific desired outcomes, FCC lacks performance information that could help FCC make better-informed decisions about how to allocate the program’s resources to meet ongoing and emerging challenges to ensuring universal access to voice and broadband services. Furthermore, the absence of public reporting of this information leaves stakeholders, including Congress, uncertain about the program’s effectiveness to deploy these services.
    • The GAO recommended that:
      • The Chairman of FCC should revise the high-cost performance goals so that they are measurable and quantifiable.
      • The Chairman of FCC should ensure high-cost performance measures align with key attributes of successful performance measures, including ensuring that measures clearly link with performance goals and have specified targets.
      • The Chairman of FCC should ensure the high-cost performance measure for the goal of minimizing the universal service contribution burden on consumers and businesses takes into account user-fee leading practices, such as equity and sustainability considerations.
      • The Chairman of FCC should publicly and periodically report on the progress it has made for its high-cost program’s performance goals, for example, by including relevant performance information in its Annual Broadband Deployment Report or the USF Monitoring Report.
  • An Irish court has ruled that the Data Protection Commission (DPC) must cover the legal fees of Maximillian Schrems for litigating and winning his case in which the Court of Justice of the European Union (CJEU) struck down the adequacy decision underpinning the European Union-United States Privacy Shield (aka Schrems II). It has been estimated that Schrems’ legal costs for his complaint against Facebook could total between €2 million and €5 million, the low end of which would represent 10% of the DPC’s budget this year. In addition, the DPC has itself spent almost €3 million litigating the case.
  • House Transportation and Infrastructure Chair Peter DeFazio (D-OR) and Ranking Member Sam Graves (R-MO) asked that the Government Accountability Office (GAO) review the implications of the Federal Communications Commission’s (FCC) 2019 proposal to allow half of the Safety Band (i.e. 5.9 GHz spectrum) to be used for wireless communications even though the United States (U.S.) Department of Transportation weighed in against this decision. DeFazio and Graves are asking for a study of “the safety implications of sharing more than half of the 5.9 GHz spectrum band” that is currently “reserved exclusively for transportation safety purposes.” This portion of the spectrum was set aside in 1999 for connected vehicle technologies, according to DeFazio and Graves, and given the rise of automobile technology that requires spectrum, they think the FCC’s proposed proceeding “may significantly affect the efficacy of current and future applications of vehicle safety technologies.” DeFazio and Graves asked the GAO to review the following:
    • What is the current status of 5.9 GHz wireless transportation technologies, both domestically and internationally?
    • What are the views of industry and other stakeholders on the potential uses of these technologies, including scenarios where the 5.9 GHz spectrum is shared among different applications?
    • In a scenario in which the 5.9 GHz spectrum band is shared among different applications, what effect would this have on the efficacy, including safety, of current wireless transportation technology deployments?
    • What options exist for automakers and highway management agencies to ensure the safe deployment of connected vehicle technologies in a scenario in which the 5.9 GHz spectrum band is shared among different applications?
    • How, if at all, have DOT and FCC assessed current and future needs for 5.9 GHz spectrum in transportation applications? 
    • How, if at all, have DOT and FCC coordinated to develop a federal spectrum policy for connected vehicles?
  • The Harvard Law School’s Cyberlaw Clinic and the Electronic Frontier Foundation (EFF) have published “A Researcher’s Guide to Some Legal Risks of Security Research,” which is timely given the case before the Supreme Court of the United States that will determine the scope of the “Computer Fraud and Abuse Act” (CFAA). The Cyberlaw Clinic and EFF stated:
    • Just last month, over 75 prominent security researchers signed a letter urging the Supreme Court not to interpret the CFAA, the federal anti-hacking/computer crime statute, in a way that would criminalize swaths of valuable security research.
    • In the report, the Cyberlaw Clinic and EFF explained:
      • This guide overviews broad areas of potential legal risk related to security research, and the types of security research likely implicated. We hope it will serve as a useful starting point for concerned researchers and others. While the guide covers what we see as the main areas of legal risk for security researchers, it is not exhaustive.
    • In Van Buren v. United States, the Court will consider the question of “[w]hether a person who is authorized to access information on a computer for certain purposes violates Section 1030(a)(2) of the Computer Fraud and Abuse Act if he accesses the same information for an improper purpose.” In this case, the defendant was a police officer who took money as part of a sting operation to illegally use his access to Georgia’s database of license plates to obtain information about a person. The Eleventh Circuit Court of Appeals rejected his appeal of his conviction under the CFAA per a previous ruling in that circuit that “a defendant violates the CFAA not only when he obtains information that he has no ‘rightful[]’ authorization whatsoever to acquire, but also when he obtains information ‘for a nonbusiness purpose.’”
    • As the Cyberlaw Clinic and EFF noted in the report:
      • Currently, courts are divided about whether the statute’s prohibition on “exceed[ing] authorized access” applies to people who have authorization to access data (for some purpose), but then access it for a (different) purpose that violates contractual terms of service or a computer use policy. Some courts have found that the CFAA covers activities that do not circumvent any technical access barriers, from making fake profiles that violate Facebook’s terms of service to running a search on a government database without permission. Other courts, disagreeing with the preceding approach, have held that a verbal or contractual prohibition alone cannot render access punishable under the CFAA.
  • The United Kingdom’s Institution of Engineering and Technology (IET) and the National Cyber Security Centre (NCSC) have published “Code of Practice: Cyber Security and Safety” to help engineers develop technological and digital products with both safety and cybersecurity in mind. The IET and NCSC explained:
    • The aim of this Code is to help organizations accountable for safety-related systems manage cyber security vulnerabilities that could lead to hazards. It does this by setting out principles that when applied will improve the interaction between the disciplines of system safety and cyber security, which have historically been addressed as distinct activities.
    • The objectives for organizations that follow this Code are: (a) to manage the risk to safety from any insecurity of digital technology; (b) to be able to provide assurance that this risk is acceptable; and (c) where there is a genuine conflict between the controls used to achieve safety and security objectives, to ensure that these are properly resolved by the appropriate authority.
    • It should be noted that the focus of this Code is on the safety and security of digital technology, but it is recognized that addressing safety and cyber security risk is not just a technological issue: it involves people, process, physical and technological aspects. It should also be noted that whilst digital technology is central to the focus, other technologies can be used in pursuit of a cyber attack. For example, the study of analogue emissions (electromagnetic, audio, etc.) may give away information being digitally processed and thus analogue interfaces could be used to provide an attack surface.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Pexels from Pixabay

Further Reading, Other Developments, and Coming Events (22 October)

Further Reading

  • “A deepfake porn Telegram bot is being used to abuse thousands of women” By Matt Burgess — WIRED UK. A bot set loose on Telegram can take pictures of women, and apparently of teens too, and “take off” their clothing, rendering fake nude images of people who never took naked pictures. This seems to be the next iteration of deepfake porn, a problem that will surely get worse until governments legislate against it and technology companies have incentives to locate and take down such material.
  • “The Facebook-Twitter-Trump Wars Are Actually About Something Else” By Charlie Warzel — The New York Times. This piece makes the case that there are no easy fixes for American democracy or for misinformation on social media platforms.
  • “Facebook says it rejected 2.2m ads for breaking political campaigning rules” — Agence France-Presse. Facebook’s Vice President of Global Affairs and Communications Nick Clegg said the social media giant is employing artificial intelligence and humans to find and remove political advertisements that violate policy in order to avoid a repeat of 2016 where untrue information and misinformation played roles in both Brexit and the election of Donald Trump as President of the United States.
  • “Huawei Fallout—Game-Changing New China Threat Strikes At Apple And Samsung” By Zak Doffman — Forbes. Smartphone manufacturers from the People’s Republic of China (PRC) appear ready to step into the projected void caused by the United States (U.S.) strangling off Huawei’s access to chips. Xiaomi and Oppo have already seen sales surge worldwide and are poised to pick up where Huawei is being forced to leave off, perhaps demonstrating the limits of U.S. power to blunt the rise of PRC technology companies.
  • “As Local News Dies, a Pay-for-Play Network Rises in Its Place” By Davey Alba and Jack Nicas — The New York Times. With the decline and demise of many local media outlets in the United States, new groups are stepping into the void, and some are politically minded but not transparent about their biases. The organization uncovered in this article is nakedly Republican and is running and planting articles at both legitimate and artificial news sites for pay. Sometimes conservative donors pay, sometimes campaigns do. Democrats are engaged in the same activity but apparently to a lesser extent. These sorts of activities will only further erode faith in the U.S. media.
  • “Forget Antitrust Laws. To Limit Tech, Some Say a New Regulator Is Needed.” By Steve Lohr — The New York Times. This piece argues that antitrust enforcement actions are plodding, tending to take years to finish. Consequently, this body of law is inadequate to the task of addressing the market dominance of big technology companies. Instead, a new regulatory body is needed, along the lines of those regulating the financial services industries, that would be more nimble than antitrust enforcement. Given the problems in that industry with respect to regulation, this may not be the best model.
  • “‘Do Not Track’ Is Back, and This Time It Might Work” By Gilad Edelman — WIRED. Looking to utilize the requirement in the “California Consumer Privacy Act” (CCPA) (AB 375) that regulated entities respect and effectuate the use of a one-time opt-out mechanism, a group of entities has come together to build and roll out the Global Privacy Control. In theory, users could download this technical specification to their phones and computers, install it, use it once, and then all websites would be on notice regarding that person’s privacy preferences. Such a mechanism would address the problem identified in Consumer Reports’ recent study of the difficulty of trying to opt out of having one’s personal information sold.
  • “EU countries sound alarm about growing anti-5G movement” By Laurens Cerulus — Politico. 15 European Union (EU) nations wrote the European Commission (EC) warning that the nascent anti-5G movement borne of conspiracy thinking and misinformation threatens the EU’s position vis-à-vis the United States (U.S.) and the People’s Republic of China (PRC). There have been more than 200 documented arson attacks in the EU, with most occurring in the United Kingdom, France, and the Netherlands. These nations called for a more muscular, more forceful debunking of the lies and misinformation being spread about 5G.
  • “Security firms call Microsoft’s effort to disrupt botnet to protect against election interference ineffective” By Jay Greene — The Washington Post. Microsoft seemingly acted alongside the United States (U.S.) Cyber Command to take down and impair the operation of Trickbot, but now cybersecurity experts are questioning how effective Microsoft’s efforts really were. Researchers have shown the Russian-operated Trickbot has already stood its operations back up and dispersed across servers around the world, showing how difficult it is to address some cyber threats.
  • “Governments around the globe find ways to abuse Facebook” By Sara Fischer and Ashley Gold — Axios. This piece puts a different spin than a recent BuzzFeed News article on the challenges Facebook faces in countries around the world, especially those whose governments ruthlessly use the platform to spread lies and misinformation. The Axios article paints Facebook as a well-meaning company being taken advantage of, while the BuzzFeed News piece portrays a company callous about content moderation except in nations where lapses cause it political problems, such as the United States, the European Union, and other western democracies.
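The Global Privacy Control discussed above is technically simple: a browser or extension attaches a signal to each web request (the proposed Sec-GPC: 1 request header, mirrored to page scripts as navigator.globalPrivacyControl), and a site subject to the CCPA treats any signaled request as an opt-out of the sale of that visitor’s personal information. A minimal server-side sketch in Python follows; the function name and header-matching details are illustrative, not taken from any official GPC implementation:

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if a request carries a Global Privacy Control signal.

    Under the GPC proposal, a value of "1" in the Sec-GPC request header
    signals a do-not-sell preference; any other value, or the header's
    absence, means no signal. Header names match case-insensitively.
    """
    for name, value in headers.items():
        if name.lower() == "sec-gpc":
            return value.strip() == "1"
    return False


if __name__ == "__main__":
    # A CCPA-covered site would treat the first request as an opt-out.
    print(gpc_opt_out({"Sec-GPC": "1"}))     # signal present
    print(gpc_opt_out({"User-Agent": "x"}))  # no signal
```

Because the signal rides on every request, honoring it the first time it is seen is what makes it a one-time mechanism from the user’s perspective.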

Other Developments

  • The United States (U.S.) Department of Justice’s (DOJ) Cyber-Digital Task Force (Task Force) issued “Cryptocurrency: An Enforcement Framework,” which “provides a comprehensive overview of the emerging threats and enforcement challenges associated with the increasing prevalence and use of cryptocurrency; details the important relationships that the Department of Justice has built with regulatory and enforcement partners both within the United States government and around the world; and outlines the Department’s response strategies.” The Task Force noted “[t]his document does not contain any new binding legal requirements not otherwise already imposed by statute or regulation.” The Task Force summarized the report:
    • [I]n Part I, the Framework provides a detailed threat overview, cataloging the three categories into which most illicit uses of cryptocurrency typically fall: (1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; and (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself. 
    • Part II explores the various legal and regulatory tools at the government’s disposal to confront the threats posed by cryptocurrency’s illicit uses, and highlights the strong and growing partnership between the Department of Justice and the Securities and Exchange Commission, the Commodity Futures Trading Commission, and agencies within the Department of the Treasury, among others, to enforce federal law in the cryptocurrency space.
    • Finally, the Enforcement Framework concludes in Part III with a discussion of the ongoing challenges the government faces in cryptocurrency enforcement—particularly with respect to business models (employed by certain cryptocurrency exchanges, platforms, kiosks, and casinos), and to activity (like “mixing” and “tumbling,” “chain hopping,” and certain instances of jurisdictional arbitrage) that may facilitate criminal activity.    
  • The White House’s Office of Science and Technology Policy (OSTP) has launched a new website for the United States’ (U.S.) quantum initiative and released a report titled “Quantum Frontiers: Report On Community Input To The Nation’s Strategy For Quantum Information Science.” The Quantum Initiative flows from the “National Quantum Initiative Act” (P.L. 115-368) “to provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” The OSTP explained that the report “outlines eight frontiers that contain core problems with fundamental questions confronting quantum information science (QIS) today:
    • Expanding Opportunities for Quantum Technologies to Benefit Society
    • Building the Discipline of Quantum Engineering
    • Targeting Materials Science for Quantum Technologies
    • Exploring Quantum Mechanics through Quantum Simulations
    • Harnessing Quantum Information Technology for Precision Measurements
    • Generating and Distributing Quantum Entanglement for New Applications
    • Characterizing and Mitigating Quantum Errors
    • Understanding the Universe through Quantum Information
    • OSTP asserted “[t]hese frontier areas, identified by the QIS research community, are priorities for the government, private sector, and academia to explore in order to drive breakthrough R&D.”
  • The New York Department of Financial Services (NYDFS) published its report on the July 2020 Twitter hack during which a team of hackers took over a number of high-profile accounts (e.g. Barack Obama, Kim Kardashian West, Jeff Bezos, and Elon Musk) in order to perpetrate a cryptocurrency scam. The NYDFS has jurisdiction over cryptocurrencies and the companies dealing in them in New York. The NYDFS found that the hackers used the most basic of means to gain the access needed to take over accounts. The NYDFS explained:
    • Given that Twitter is a publicly traded, $37 billion technology company, it was surprising how easily the Hackers were able to penetrate Twitter’s network and gain access to internal tools allowing them to take over any Twitter user’s account. Indeed, the Hackers used basic techniques more akin to those of a traditional scam artist: phone calls where they pretended to be from Twitter’s Information Technology department. The extraordinary access the Hackers obtained with this simple technique underscores Twitter’s cybersecurity vulnerability and the potential for devastating consequences. Notably, the Twitter Hack did not involve any of the high-tech or sophisticated techniques often used in cyberattacks–no malware, no exploits, and no backdoors.
    • The implications of the Twitter Hack extend far beyond this garden-variety fraud. There are well-documented examples of social media being used to manipulate markets and interfere with elections, often with the simple use of a single compromised account or a group of fake accounts. In the hands of a dangerous adversary, the same access obtained by the Hackers–the ability to take control of any Twitter user’s account–could cause even greater harm.
    • The Twitter Hack demonstrates the need for strong cybersecurity to curb the potential weaponization of major social media companies. But our public institutions have not caught up to the new challenges posed by social media. While policymakers focus on antitrust and content moderation problems with large social media companies, their cybersecurity is also critical. In other industries that are deemed critical infrastructure, such as telecommunications, utilities, and finance, we have established regulators and regulations to ensure that the public interest is protected. With respect to cybersecurity, that is what is needed for large, systemically important social media companies.
    • The NYDFS recommended cybersecurity measures that cryptocurrency companies in New York should implement to avoid similar hacks, pointing to its own cybersecurity regulations that bind its regulated entities in New York. The NYDFS also called for a national regulator to address the lack of a dedicated regulator for Twitter and other massive social media platforms. The NYDFS asserted:
      • Social media companies currently have no dedicated regulator. They are subject to the same general oversight applicable to other companies. For instance, the SEC’s regulations for all public companies apply to public social media companies, and antitrust and related laws and regulations enforced by the Department of Justice and the FTC apply to social media companies as they do to all companies. Social media companies are also subject to generally applicable laws, such as the California Consumer Privacy Act and the New York SHIELD Act. The European Union’s General Data Protection Regulation, which regulates the storage and use of personal data, also applies to social media entities doing business in Europe.
      • But there are no regulators that have the authority to uniformly regulate social media platforms that operate over the internet, and to address the cybersecurity concerns identified in this Report. That regulatory vacuum must be filled.
      • A useful starting point is to create a “systemically important” designation for large social media companies, like the designation for critically important bank and non-bank financial institutions. In the wake of the 2007-08 financial crisis, Congress established a new regulatory framework for financial institutions that posed a systemic threat to the financial system of the United States. An institution could be designated as a Systemically Important Financial Institution (“SIFI”) “where the failure of or a disruption to the functioning of a financial market utility or the conduct of a payment, clearing, or settlement activity could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States.”
      • The risks posed by social media to our consumers, economy, and democracy are no less grave than the risks posed by large financial institutions. The scale and reach of these companies, combined with the ability of adversarial actors who can manipulate these systems, require a similarly bold and assertive regulatory approach.
      • The designation of an institution as a SIFI is made by the Financial Stability Oversight Council (“FSOC”), which Congress established to “identify risks to the financial stability of the United States” and to provide enhanced supervision of SIFIs. The FSOC also “monitors regulatory gaps and overlaps to identify emerging sources of systemic risk.” In determining whether a financial institution is systemically important, the FSOC considers numerous factors including: the effect that a failure or disruption to an institution would have on financial markets and the broader financial system; the nature of the institution’s transactions and relationships; the nature, concentration, interconnectedness, and mix of the institution’s activities; and the degree to which the institution is regulated.
      • An analogue to the FSOC should be established to identify systemically important social media companies. This new Oversight Council should evaluate the reach and impact of social media companies, as well as the society-wide consequences of a social media platform’s misuse, to determine which companies they should designate as systemically important. Once designated, those companies should be subject to enhanced regulation, such as through the provision of “stress tests” to evaluate the social media companies’ susceptibility to key threats, including cyberattacks and election interference.
      • Finally, the success of such oversight will depend on the establishment of an expert agency to oversee designated social media companies. Systemically important financial companies designated by the FSOC are overseen by the Federal Reserve Board, which has a long-established and deep expertise in banking and financial market stability. A regulator for systemically important social media would likewise need deep expertise in areas such as technology, cybersecurity, and disinformation. This expert regulator could take various forms; it could be a completely new agency or could reside within an established agency or at an existing regulator.
  • The Government Accountability Office (GAO) evaluated how well the Trump Administration has been implementing the “Open, Public, Electronic and Necessary Government Data Act of 2018” (OPEN Government Data Act) (P.L. 115-435). As the GAO explained, this statute “requires federal agencies to publish their information as open data using standardized, nonproprietary formats, making data available to the public open by default, unless otherwise exempt…[and] codifies and expands on existing federal open data policy including the Office of Management and Budget’s (OMB) memorandum M-13-13 (M-13-13), Open Data Policy—Managing Information as an Asset.”
    • The GAO stated
      • To continue moving forward with open government data, the issuance of OMB implementation guidance should help agencies develop comprehensive inventories of their data assets, prioritize data assets for publication, and decide which data assets should or should not be made available to the public.
      • Implementation of this statutory requirement is critical to agencies’ full implementation and compliance with the act. In the absence of this guidance, agencies, particularly agencies that have not previously been subject to open data policies, could fall behind in meeting their statutory timeline for implementing comprehensive data inventories.
      • It is also important for OMB to meet its statutory responsibility to biennially report on agencies’ performance and compliance with the OPEN Government Data Act and to coordinate with General Services Administration (GSA) to improve the quality and availability of agency performance data that could inform this reporting. Access to this information could inform Congress and the public on agencies’ progress in opening their data and complying with statutory requirements. This information could also help agencies assess their progress and improve compliance with the act.
    • The GAO made three recommendations:
      • The Director of OMB should comply with its statutory requirement to issue implementation guidance to agencies to develop and maintain comprehensive data inventories. (Recommendation 1)
      • The Director of OMB should comply with the statutory requirement to electronically publish a report on agencies’ performance and compliance with the OPEN Government Data Act. (Recommendation 2)
      • The Director of OMB, in collaboration with the Administrator of GSA, should establish policy to ensure the routine identification and correction of errors in electronically published performance information. (Recommendation 3)
  • The United States’ (U.S.) National Security Agency (NSA) issued a cybersecurity advisory titled “Chinese State-Sponsored Actors Exploit Publicly Known Vulnerabilities,” which “provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.” The NSA recommended a number of mitigations generally for U.S. entities, including:
    • Keep systems and products updated and patched as soon as possible after patches are released.
    • Expect that data stolen or modified (including credentials, accounts, and software) before the device was patched will not be alleviated by patching, making password changes and reviews of accounts a good practice.
    • Disable external management capabilities and set up an out-of-band management network.
    • Block obsolete or unused protocols at the network edge and disable them in device configurations.
    • Isolate Internet-facing services in a network Demilitarized Zone (DMZ) to reduce the exposure of the internal network.
    • Enable robust logging of Internet-facing services and monitor the logs for signs of compromise.
    • The NSA then proceeded to recommend specific fixes.
    • The NSA provided this policy backdrop:
      • One of the greatest threats to U.S. National Security Systems (NSS), the U.S. Defense Industrial Base (DIB), and Department of Defense (DOD) information networks is Chinese state-sponsored malicious cyber activity. These networks often undergo a full array of tactics and techniques used by Chinese state-sponsored cyber actors to exploit computer networks of interest that hold sensitive intellectual property, economic, political, and military information. Since these techniques include exploitation of publicly known vulnerabilities, it is critical that network defenders prioritize patching and mitigation efforts.
      • The same process for planning the exploitation of a computer network by any sophisticated cyber actor is used by Chinese state-sponsored hackers. They often first identify a target, gather technical information on the target, identify any vulnerabilities associated with the target, develop or re-use an exploit for those vulnerabilities, and then launch their exploitation operation.
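The NSA’s mitigations are mostly configuration discipline, but the last one, robust logging of Internet-facing services monitored for signs of compromise, can be partially automated with very simple tooling. Below is a sketch in Python; the indicator patterns and sample log lines are illustrative stand-ins for the CVE-specific exploit strings the advisory enumerates, not an official detection list:

```python
import re

# Illustrative indicator patterns for a few publicly known, frequently
# exploited vulnerability classes (path traversal against VPN/gateway
# appliances and the like). A real deployment would load indicators
# published by vendors or CISA instead of hard-coding them.
INDICATORS = [
    re.compile(r"\.\./\.\./"),           # repeated path traversal
    re.compile(r"/dana-na/.*\?/dana/"),  # Pulse Secure-style file read
    re.compile(r"/\+CSCOT\+/"),          # Cisco ASA-style traversal
]

def suspicious_lines(log_lines):
    """Yield (line_number, line) for log entries matching any indicator."""
    for lineno, line in enumerate(log_lines, start=1):
        if any(p.search(line) for p in INDICATORS):
            yield lineno, line

if __name__ == "__main__":
    sample = [
        'GET /index.html HTTP/1.1" 200',
        'GET /dana-na/../dana/html5acc/guacamole/?/dana/html5acc HTTP/1.1" 200',
    ]
    for lineno, line in suspicious_lines(sample):
        print(lineno, line)
```

In practice the scanner would read a live access log and alert on matches; the point is that even a crude pattern match over Internet-facing service logs catches the publicly known exploitation attempts the advisory describes.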
  • Belgium’s data protection authority (DPA) (Autorité de protection des données in French or Gegevensbeschermingsautoriteit in Dutch) (APD-GBA) has reportedly found that the Transparency & Consent Framework (TCF) developed by the Interactive Advertising Bureau (IAB) violates the General Data Protection Regulation (GDPR). The Real-Time Bidding (RTB) system used for online behavioral advertising allegedly transmits the personal information of European Union residents without their consent even before a popup appears on their screens asking for consent. The APD-GBA is the EU’s lead DPA investigating the RTB and will likely now circulate its findings and recommendations to other EU DPAs before any enforcement commences.
  • None Of Your Business (noyb) announced “[t]he Irish High Court has granted leave for a ‘Judicial Review’ against the Irish Data Protection Commission (DPC) today…[and] [t]he legal action by noyb aims to swiftly implement the [Court of Justice for the European Union (CJEU)] Decision prohibiting Facebook’s” transfer of personal data from the European Union to the United States (U.S.). Last month, after the DPC directed Facebook to stop transferring the personal data of EU citizens to the U.S., the company filed suit in the Irish High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge.
    • noyb further asserted:
      • Instead of making a decision in the pending procedure, the DPC has started a second, new investigation into the same subject matter (“Parallel Procedure”), as widely reported (see original reporting by the WSJ). No logical reason for the Parallel Procedure was given, but the DPC has maintained that Mr Schrems will not be heard in this second case, as he is not a party to this Parallel Procedure. This Parallel Procedure was criticised by Facebook publicly (link) and instantly blocked by a Judicial Review by Facebook (see report by Reuters).
      • Today’s Judicial Review by noyb is in many ways the counterpart to Facebook’s Judicial Review: While Facebook wants to block the second procedure by the DPC, noyb wants to move the original complaints procedure towards a decision.
      • Earlier this summer, the CJEU struck down the adequacy decision for the agreement between the EU and U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e. the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCC. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • Most likely with the aim of helping British chances for an adequacy decision from the European Union (EU), the United Kingdom’s Information Commissioner’s Office (ICO) published guidance that “discusses the right of access [under the General Data Protection Regulation (GDPR)] in detail.” The ICO explained the guidance “is aimed at data protection officers (DPOs) and those with specific data protection responsibilities in larger organisations…[but] does not specifically cover the right of access under Parts 3 and 4 of the Data Protection Act 2018.”
    • The ICO explained
      • The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy of their personal data from you, as well as other supplementary information.
  • The Government Accountability Office (GAO) released the report on the data security and data privacy practices of public schools that the House Education and Labor Committee’s Ranking Member had requested. Representative Virginia Foxx (R-NC) asked the GAO “to review the security of K-12 students’ data. This report examines (1) what is known about recently reported K-12 cybersecurity incidents that compromised student data, and (2) the characteristics of school districts that experienced these incidents.” Strangely, the report did not include GAO’s customary conclusions or recommendations. Nonetheless, the GAO found:
    • Ninety-nine student data breaches reported from July 1, 2016 through May 5, 2020 compromised the data of students in 287 school districts across the country, according to our analysis of K-12 Cybersecurity Resource Center (CRC) data (see fig. 3). Some breaches involved a single school district, while others involved multiple districts. For example, an attack on a vendor system in the 2019-2020 school year affected 135 districts. While information about the number of students affected was not available for every reported breach, examples show that some breaches affected thousands of students, for instance, when a cybercriminal accessed 14,000 current and former students’ personally identifiable information (PII) in one district.
    • The 99 reported student data breaches likely understate the number of breaches that occurred, for different reasons. Reported incidents sometimes do not include sufficient information to discern whether data were breached. We identified 15 additional incidents in our analysis of CRC data in which student data might have been compromised, but the available information was not definitive. In addition, breaches can go undetected for some time. In one example, the personal information of hundreds of thousands of current and former students in one district was publicly posted for 2 years before the breach was discovered.
    • The CRC identified 28 incidents involving videoconferences from April 1, 2020 through May 5, 2020, some of which disrupted learning and exposed students to harm. In one incident, 50 elementary school students were exposed to pornography during a virtual class. In another incident in a different district, high school students were targeted with hate speech during a class, resulting in the cancellation that day of all classes using the videoconferencing software. These incidents also raise concerns about the potential for violating students’ privacy. For example, one district is reported to have instructed teachers to record their class sessions. Teachers said that students’ full names were visible to anyone viewing the recording.
    • The GAO found gaps in the protection and enforcement of student privacy by the United States government:
      • [The Department of] Education is responsible for enforcing Family Educational Rights and Privacy Act (FERPA), which addresses the privacy of PII in student education records and applies to all schools that receive funds under an applicable program administered by Education. If parents or eligible students believe that their rights under FERPA have been violated, they may file a formal complaint with Education. In response, Education is required to take appropriate actions to enforce and deal with violations of FERPA. However, because the department’s authority under FERPA is directly related to the privacy of education records, Education’s security role is limited to incidents involving potential violations under FERPA. Further, FERPA amendments have not directly addressed educational technology use.
      • The “Children’s Online Privacy Protection Act” (COPPA) requires the Federal Trade Commission (FTC) to issue and enforce regulations concerning children’s privacy. The COPPA Rule, which took effect in 2000 and was later amended in 2013, requires operators of covered websites or online services that collect personal information from children under age 13 to provide notice and obtain parental consent, among other things. COPPA generally applies to the vendors who provide educational technology, rather than to schools directly. However, according to FTC guidance, schools can consent on behalf of parents to the collection of students’ personal information if such information is used for a school-authorized educational purpose and for no other commercial purpose.
  • Upturn, an advocacy organization that “advances equity and justice in the design, governance, and use of technology,” has released a report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. Vendors in the U.S. and abroad have long offered the means of breaking into phones, despite claims by a number of nations, including the Five Eyes (U.S., the United Kingdom, Australia, Canada, and New Zealand), that default end-to-end encryption is a growing problem that allows those preying on children or engaged in terrorism to go undetected. In terms of possible bias, Upturn is “supported by the Ford Foundation, the Open Society Foundations, the John D. and Catherine T. MacArthur Foundation, Luminate, the Patrick J. McGovern Foundation, and Democracy Fund.”
    • Upturn stated:
      • Every day, law enforcement agencies across the country search thousands of cellphones, typically incident to arrest. To search phones, law enforcement agencies use mobile device forensic tools (MDFTs), a powerful technology that allows police to extract a full copy of data from a cellphone — all emails, texts, photos, location, app data, and more — which can then be programmatically searched. As one expert puts it, with the amount of sensitive information stored on smartphones today, the tools provide a “window into the soul.”
      • This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.
    • Upturn argued:
      • Law enforcement use these tools to investigate not only cases involving major harm, but also for graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. Given how routine these searches are today, together with racist policing policies and practices, it’s more than likely that these technologies disparately affect and are used against communities of color.
      • We believe that MDFTs are simply too powerful in the hands of law enforcement and should not be used. But recognizing that MDFTs are already in widespread use across the country, we offer a set of preliminary recommendations that we believe can, in the short-term, help reduce the use of MDFTs. These include:
        • banning the use of consent searches of mobile devices,
        • abolishing the plain view exception for digital searches,
        • requiring easy-to-understand audit logs,
        • enacting robust data deletion and sealing requirements, and
        • requiring clear public logging of law enforcement use.

Coming Events

  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Mehmet Turgut Kirkgoz from Pixabay

CJEU Puts Limits On Electronic Communications Surveillance

The EU’s highest court rules against three nations that tried to require that communications providers hand over location data and traffic data in bulk.

The Court of Justice of the European Union (CJEU) handed down a pair of rulings (here and here) on the extent to which European Union (EU) nations may engage in bulk, indiscriminate collection of two types of data related to electronic communications. The CJEU found that while EU member nations may conduct such collection to combat crime or national security threats during periods limited by necessity and subject to oversight, they may not impose on providers of electronic communications a general, indiscriminate obligation to store and hand over location data and traffic data, whether in response to an actual national security danger or a prospective one. The CJEU combined three cases, arising from the United Kingdom (UK), France, and Belgium, into two rulings to elucidate the reach of the Privacy and Electronic Communications Directive in relation to foundational EU laws.

First and foremost, the CJEU found that the “Directive on privacy and electronic communications” (Directive 2002/58/EC) does indeed apply to situations where EU nations are directing telecommunications companies and similar providers to hold and turn over bulk data. Some EU nations had argued that such practices fell outside the scope of the Directive; the CJEU found the opposite. The CJEU went on to state that the Directive does not generally allow for an exception to the general principle that the confidentiality of communications must be safeguarded, and that any permitted abridgement of this and associated rights is subject to related EU law principles of proportionality (more on this below). Consequently, the CJEU found that EU member nations may not require general, indiscriminate transmission of location data and traffic data for national security reasons. Likewise, the court barred legislation requiring a communications provider to hold these data in case they are needed in the future by law enforcement or intelligence authorities. The CJEU went further and explicated a provision in the General Data Protection Regulation (GDPR) as barring EU nations from requiring that entities providing “online public communications services” and “hosting services” retain and hand over data on people using those services.

The other side of the CJEU’s ruling is that EU nations may order just such bulk collection and retention of location data and traffic data if there is an imminent or foreseeable national security threat. This strikes me as the exception that will devour the rule. In any event, a court or an administrative body must be able to review whether such a national security threat exists and whether the collection is limited in both time and necessity. The reviewing entity must be able to render a binding decision that can shut down unlawful or unnecessary orders to providers to hand over such information.

The CJEU also found that targeted, limited orders for traffic and location data, based on objective and non-discriminatory criteria and aimed at certain classes of people or a geographic location, may be used. Real-time location data and traffic data may be collected as well if a court or administrative body has authorized the surveillance according to EU law. The CJEU spells out a few other exceptions EU nations may use regarding location data and traffic data. The CJEU, however, ruled that EU nations may not have provisions in laws allowing for the temporary suspension of the bar on providers being required to collect and turn over traffic data and location data to an EU government. Finally, the CJEU added that evidence of crimes gained through the bulk collection of the two types of data is inadmissible in the courts of EU nations.

The CJEU summarized the beginning of the case out of the UK:

  • At the beginning of 2015, the existence of practices for the acquisition and use of bulk communications data by the various security and intelligence agencies of the United Kingdom, namely GCHQ, MI5 and MI6, was made public, including in a report by the Intelligence and Security Committee of Parliament (United Kingdom). On 5 June 2015, Privacy International, a non-governmental organisation, brought an action before the Investigatory Powers Tribunal (United Kingdom) against the Secretary of State for Foreign and Commonwealth Affairs, the Secretary of State for the Home Department and those security and intelligence agencies, challenging the lawfulness of those practices.

The CJEU also summarized the two other cases combined into one:

  • By application lodged on 1 September 2015, French Data Network, La Quadrature du Net and the Fédération des fournisseurs d’accès à Internet associatifs brought an action before the Conseil d’État (Council of State, France) for the annulment of the implied rejection decision arising from the Prime Minister’s failure to reply to their application for the repeal of Article R. 10-13 of the CPCE and Decree No 2011-219, on the ground, inter alia, that those legislative texts infringe Article 15(1) of Directive 2002/58, read in the light of Articles 7, 8 and 11 of the Charter. Privacy International and the Center for Democracy and Technology were granted leave to intervene in the main proceedings.
  • By applications lodged on 10, 16, 17 and 18 January 2017, joined in the main proceedings, the Ordre des barreaux francophones et germanophone, the Académie Fiscale ASBL and UA, the Liga voor Mensenrechten ASBL, the Ligue des Droits de l’Homme ASBL, and VZ, WY and XX brought actions before the Cour constitutionnelle (Constitutional Court, Belgium) for the annulment of the Law of 29 May 2016, on the ground that it infringes Articles 10 and 11 of the Belgian Constitution, read in conjunction with Articles 5, 6 to 11, 14, 15, 17 and 18 of the ECHR, Articles 7, 8, 11 and 47 and Article 52(1) of the Charter, Article 17 of the International Covenant on Civil and Political Rights, which was adopted by the United Nations General Assembly on 16 December 1966 and entered into force on 23 March 1976, the general principles of legal certainty, proportionality and self-determination in relation to information and Article 5(4) TEU.

Given the state of Brexit negotiations, it may have given the CJEU some pleasure to administer this swift kick to the UK’s security and intelligence services before the nation leaves the bloc at year’s end. But, more substantially, this decision may well have repercussions on the adequacy decision the UK would need so companies could transfer personal data from the EU to the UK. Moreover, privacy and civil liberties advocates will be sure to point to this ruling as evidence that the EU opposes bulk collection of metadata and other facets of electronic communications, unlike some other nations, such as the United States (U.S.), which have historically collected vast troves of data on electronic communications.


Image by Arek Socha from Pixabay

Commerce White Paper on Schrems II

The U.S. tries to lay out the reasons why data can still be transferred from the EU

The Trump Administration has released a crib sheet it hopes United States (U.S.) multinationals can use to argue to data protection authorities (DPA) in the European Union (EU) that their Standard Contractual Clauses (SCC) and Binding Corporate Rules (BCR), together with U.S. law, satisfy the ruling of the Court of Justice of the European Union (CJEU) that struck down the EU-U.S. Privacy Shield. And the Trump Administration is basically arguing: sure, we spy, but most EU citizens’ data is not surveilled, and EU governments themselves often share in the proceeds of the surveillance we conduct. Moreover, there are plenty of safeguards and means of redress in the U.S. system because, you know, we say so. It is unlikely this analysis will be very persuasive in the EU, especially since these broad arguments do not go to the criticisms the EU has leveled under Privacy Shield about U.S. surveillance and privacy rights, nor to the basis for the CJEU’s ruling.

In a summary of its decision Data Protection Commissioner v. Facebook Ireland and Maximillian Schrems, Case C-311/18 (Schrems II), the CJEU explained:

The General Data Protection Regulation (GDPR) provides that the transfer of such data to a third country may, in principle, take place only if the third country in question ensures an adequate level of data protection. According to the GDPR, the Commission may find that a third country ensures, by reason of its domestic law or its international commitments, an adequate level of protection. In the absence of an adequacy decision, such transfer may take place only if the personal data exporter established in the EU has provided appropriate safeguards, which may arise, in particular, from standard data protection clauses adopted by the Commission, and if data subjects have enforceable rights and effective legal remedies. Furthermore, the GDPR details the conditions under which such a transfer may take place in the absence of an adequacy decision or appropriate safeguards.

Ultimately, the CJEU found the United States (U.S.) lacks the requisite safeguards needed under EU law, and so the general means of transferring the data of EU citizens from the EU to the U.S. was essentially struck down. This marked the second time in the last five years such an agreement had been found to violate EU law.

Needless to say, the Trump Administration did not care for this ruling nor did the multinationals using Privacy Shield. And while those entities using SCCs and BCRs may have been relieved that the CJEU did not strike down those means of transferring data under the GDPR to the U.S., the court made clear that DPAs will need to go through these agreements on a case-by-case basis to see if they comport with EU law, too. Hence, this White Paper. The United States Department of Commerce (hereafter Commerce) explained the rationale for the White Paper as “in an effort to assist organizations in assessing whether their transfers offer appropriate data protection in accordance with the [CJEU’s] ruling, the U.S. Government has prepared the attached White Paper, which outlines the robust limits and safeguards in the United States pertaining to government access to data.”

Commerce made the rather obvious assertion that “[l]ike European nations and other countries, the United States conducts intelligence gathering activities to ensure that national security and foreign policy decision makers have access to timely, accurate, and insightful information on the threats posed by terrorists, criminals, cyber hackers, and other malicious actors.” Comparing U.S. surveillance to other nations’ is a bit like saying Jeff Bezos and I both made money this year. That is true, of course, but Bezos out-earned me and everyone else by orders of magnitude. Moreover, whether EU nations conduct surveillance is beside the point. The CJEU took issue with U.S. surveillance and the rights afforded to EU residents for redress, not surveillance generally. It found the U.S. regime violated EU law.

Commerce touted “the extensive U.S. surveillance reforms since 2013” which were, of course, the result of former National Security Agency (NSA) contractor Edward Snowden revealing the massive NSA surveillance programs that were hoovering up data around the world. Nonetheless, after omitting this crucial bit, Commerce claimed “the U.S. legal framework for foreign intelligence collection provides clearer limits, stronger safeguards, and more rigorous independent oversight than the equivalent laws of almost all other countries.” And yet, the CJEU somehow disagreed with this claim.

Commerce summarized its “key points”:

(1)  Most U.S. companies do not deal in data that is of any interest to U.S. intelligence agencies, and have no grounds to believe they do. They are not engaged in data transfers that present the type of risks to privacy that appear to have concerned the ECJ in Schrems II.

(2)  The U.S. government frequently shares intelligence information with EU Member States, including data disclosed by companies in response to FISA 702 orders, to counter threats such as terrorism, weapons proliferation, and hostile foreign cyber activity. Sharing of FISA 702 information undoubtedly serves important EU public interests by protecting the governments and people of the Member States.

(3) There is a wealth of public information about privacy protections in U.S. law concerning government access to data for national security purposes, including information not recorded in Decision 2016/1250, new developments that have occurred since 2016, and information the ECJ neither considered nor addressed. Companies may wish to take this information into account in any assessment of U.S. law post-Schrems II.

Again, even if all this were true (and that is a stretch with some of these claims), these arguments are irrelevant in the eyes of the CJEU. Let’s take a look at what the CJEU found so objectionable in the European Commission’s adequacy decision with respect to U.S. surveillance and the rights afforded to EU residents:

  • It is thus apparent that Section 702 of the FISA does not indicate any limitations on the power it confers to implement surveillance programmes for the purposes of foreign intelligence or the existence of guarantees for non-US persons potentially targeted by those programmes. In those circumstances and as the Advocate General stated, in essence, in points 291, 292 and 297 of his Opinion, that article cannot ensure a level of protection essentially equivalent to that guaranteed by the Charter, as interpreted by the case-law set out in paragraphs 175 and 176 above, according to which a legal basis which permits interference with fundamental rights must, in order to satisfy the requirements of the principle of proportionality, itself define the scope of the limitation on the exercise of the right concerned and lay down clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards.            
  • According to the findings in the Privacy Shield Decision, the implementation of the surveillance programmes based on Section 702 of the FISA is, indeed, subject to the requirements of PPD‑28. However, although the Commission stated, in recitals 69 and 77 of the Privacy Shield Decision, that such requirements are binding on the US intelligence authorities, the US Government has accepted, in reply to a question put by the Court, that PPD‑28 does not grant data subjects actionable rights before the courts against the US authorities. Therefore, the Privacy Shield Decision cannot ensure a level of protection essentially equivalent to that arising from the Charter, contrary to the requirement in Article 45(2)(a) of the GDPR that a finding of equivalence depends, inter alia, on whether data subjects whose personal data are being transferred to the third country in question have effective and enforceable rights.
  • As regards the monitoring programmes based on E.O. 12333, it is clear from the file before the Court that that order does not confer rights which are enforceable against the US authorities in the courts either.
  • It should be added that PPD‑28, with which the application of the programmes referred to in the previous two paragraphs must comply, allows for ‘“bulk” collection … of a relatively large volume of signals intelligence information or data under circumstances where the Intelligence Community cannot use an identifier associated with a specific target … to focus the collection’, as stated in a letter from the Office of the Director of National Intelligence to the United States Department of Commerce and to the International Trade Administration from 21 June 2016, set out in Annex VI to the Privacy Shield Decision. That possibility, which allows, in the context of the surveillance programmes based on E.O. 12333, access to data in transit to the United States without that access being subject to any judicial review, does not, in any event, delimit in a sufficiently clear and precise manner the scope of such bulk collection of personal data.   
  • It follows therefore that neither Section 702 of the FISA, nor E.O. 12333, read in conjunction with PPD‑28, correlates to the minimum safeguards resulting, under EU law, from the principle of proportionality, with the consequence that the surveillance programmes based on those provisions cannot be regarded as limited to what is strictly necessary.
  • In those circumstances, the limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to the United States, which the Commission assessed in the Privacy Shield Decision, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required, under EU law, by the second sentence of Article 52(1) of the Charter.

A stroll down memory lane is also helpful. EU authorities have been flagging these issues for years. The European Data Protection Board (EDPB or Board) released its most recent annual assessment of the Privacy Shield in December 2019 and again found both the agreement itself and its implementation wanting. There was some overlap between the concerns of the EDPB and the European Commission (EC) as detailed in its recently released third assessment of the Privacy Shield, but the EDPB discusses areas that were either omitted from or downplayed in the EC’s report. The EDPB’s authority is persuasive with respect to Privacy Shield and carries weight with the EC; its concerns as detailed in previous annual reports have pushed the EC to demand changes, including, but not limited to, pushing the Trump Administration to nominate members to the Privacy and Civil Liberties Oversight Board (PCLOB) and to appoint a new Ombudsperson to handle complaints about how the U.S. Intelligence Community is handling the personal data of EU citizens.

In January 2019, in the “EU-U.S. Privacy Shield – Second Annual Joint Review,” the EDPB took issue with a number of shortcomings in US implementation. Notably, the EDPB found problems with the assurances provided by the US government regarding the collection and use of personal data by national security and law enforcement agencies. The EDPB also found problems with how the Department of Commerce and FTC are enforcing the Privacy Shield in the US against commercial entities.

The EDPB also took issue with U.S. law enforcement and national security treatment of EU citizens’ personal data. The Board asserted that nothing had changed in the legal landscape in the U.S. since last year’s review but recounted its concerns, chiefly that under Title VII of the Foreign Intelligence Surveillance Act (FISA) and Executive Order (EO) 12333, indiscriminate data collection from and analysis of EU citizens could occur with minimal oversight and little to no redress, contrary to EU law. The EDPB also decried how the standing requirements in federal courts have effectively blunted the redress available to EU citizens under the Privacy Act of 1974. The Board also enumerated its concerns about the Ombudsperson mechanism, which “provides the only way for EU individuals to ask for a verification that the relevant authorities have complied with the requirements of this instrument by asking the Ombudsperson to refer the matter to the competent authorities, which include the Inspector General, to check the internal policies of these authorities.” The EDPB was concerned about the impartiality and independence of the then-Ombudsperson, Under Secretary of State for Economic Growth, Energy, and the Environment Keith Krach, and asserted that it “still doubts that the powers of the Ombudsperson to remedy non-compliance vis-a-vis the intelligence authorities are sufficient, as his ‘power’ seems to be limited to decide not to confirm compliance towards the petitioner.”


Image by S. Hermann & F. Richter from Pixabay

Further Reading, Other Developments, and Coming Events (22 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing on 23 September titled “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity Cyber Security and Infrastructure Security Agency, Department of Homeland Security
  • On 23 September, the Commerce, Science, and Transportation Committee will hold a hearing titled “Revisiting the Need for Federal Data Privacy Legislation,” with these witnesses:
    • The Honorable Julie Brill, Former Commissioner, Federal Trade Commission
    • The Honorable William Kovacic, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Jon Leibowitz, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Maureen Ohlhausen, Former Commissioner and Acting Chairman, Federal Trade Commission
    • Mr. Xavier Becerra, Attorney General, State of California
  • The House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee will hold a virtual hearing “Mainstreaming Extremism: Social Media’s Role in Radicalizing America” on 23 September with these witnesses:
    • Marc Ginsberg, President, Coalition for a Safer Web
    • Tim Kendall, Chief Executive Officer, Moment
    • Taylor Dumpson, Hate Crime Survivor and Cyber-Harassment Target
    • John Donahue, Fellow, Rutgers University Miller Center for Community Protection and Resilience, Former Chief of Strategic Initiatives, New York City Police Department
  • On 23 September, the Senate Homeland Security and Governmental Affairs Committee will hold a hearing to consider the nomination of Chad Wolf to be the Secretary of Homeland Security.
  • The Senate Armed Services Committee will hold a closed briefing on 24 September “on Department of Defense Cyber Operations in Support of Efforts to Protect the Integrity of U.S. National Elections from Malign Actors” with:
    • Kenneth P. Rapuano, Assistant Secretary of Defense for Homeland Defense and Global Security
    • General Paul M. Nakasone, Commander, U.S. Cyber Command and Director, National Security Agency/Chief, Central Security Service
  • On 24 September, the Senate Homeland Security and Governmental Affairs Committee will hold a hearing on “Threats to the Homeland” with:
    • Christopher A. Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center
    • Kenneth Cuccinelli, Senior Official Performing the Duties of the Deputy Secretary of Homeland Security
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • The United States (U.S.) Department of Justice (DOJ) has indicted two Iranian nationals for allegedly hacking into systems in the U.S., Europe, and the Middle East dating back to 2013 to engage in espionage and sometimes theft.
    • The DOJ claimed in its press release:
      • According to a 10-count indictment returned on Sept. 15, 2020, Hooman Heidarian, a/k/a “neo,” 30, and Mehdi Farhadi, a/k/a “Mehdi Mahdavi” and “Mohammad Mehdi Farhadi Ramin,” 34, both of Hamedan, Iran, stole hundreds of terabytes of data, which typically included confidential communications pertaining to national security, foreign policy intelligence, non-military nuclear information, aerospace data, human rights activist information, victim financial information and personally identifiable information, and intellectual property, including unpublished scientific research.  In some instances, the defendants’ hacks were politically motivated or at the behest of Iran, including instances where they obtained information regarding dissidents, human rights activists, and opposition leaders.  In other instances, the defendants sold the hacked data and information on the black market for private financial gain.
      • The victims included several American and foreign universities, a Washington, D.C.-based think tank, a defense contractor, an aerospace company, a foreign policy organization, non-governmental organizations (NGOs), non-profits, and foreign government and other entities the defendants identified as rivals or adversaries to Iran.  In addition to the theft of highly protected and sensitive data, the defendants also vandalized websites, often under the pseudonym “Sejeal” and posted messages that appeared to signal the demise of Iran’s internal opposition, foreign adversaries, and countries identified as rivals to Iran, including Israel and Saudi Arabia.
  • Two United States (U.S.) agencies took coordinated action against an alleged cyber threat group and a front company behind “a years-long malware campaign that targeted Iranian dissidents, journalists, and international companies in the travel sector.” The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) “imposed sanctions on Iranian cyber threat group Advanced Persistent Threat 39 (APT39), 45 associated individuals, and one front company…Rana Intelligence Computing Company (Rana)” per the agency’s press release. Treasury further claimed:
    • Rana advances Iranian national security objectives and the strategic goals of Iran’s Ministry of Intelligence and Security (MOIS) by conducting computer intrusions and malware campaigns against perceived adversaries, including foreign governments and other individuals the MOIS considers a threat. APT39 is being designated pursuant to E.O. 13553 for being owned or controlled by the MOIS, which was previously designated on February 16, 2012 pursuant to Executive Orders 13224, 13553, and 13572, which target terrorists and those responsible for human rights abuses in Iran and Syria, respectively.
    • The Federal Bureau of Investigation (FBI) provided “information on numerous malware variants and indicators of compromise (IOCs) associated with Rana to assist organizations and individuals in determining whether they may have been targeted.”
  • The United States (U.S.) Department of Justice (DOJ) also released grand jury indictments against five nationals of the People’s Republic of China and two Malaysians for extensive hacking and exfiltration of commercial and business information with an eye towards profiting from these crimes. The DOJ asserted in its press release:
    • In August 2019 and August 2020, a federal grand jury in Washington, D.C., returned two separate indictments (available here and here) charging five computer hackers, all of whom were residents and nationals of the People’s Republic of China (PRC), with computer intrusions affecting over 100 victim companies in the United States and abroad, including software development companies, computer hardware manufacturers, telecommunications providers, social media companies, video game companies, non-profit organizations, universities, think tanks, and foreign governments, as well as pro-democracy politicians and activists in Hong Kong.
    • The intrusions, which security researchers have tracked using the threat labels “APT41,” “Barium,” “Winnti,” “Wicked Panda,” and “Wicked Spider,” facilitated the theft of source code, software code signing certificates, customer account data, and valuable business information. These intrusions also facilitated the defendants’ other criminal schemes, including ransomware and “crypto-jacking” schemes, the latter of which refers to the group’s unauthorized use of victim computers to “mine” cryptocurrency.
    • Also in August 2020, the same federal grand jury returned a third indictment charging two Malaysian businessmen who conspired with two of the Chinese hackers to profit from computer intrusions targeting the video game industry in the United States and abroad.  Shortly thereafter, the U.S. District Court for the District of Columbia issued arrest warrants for the two businessmen.  On Sept. 14, 2020, pursuant to a provisional arrest request from the United States with a view to their extradition, Malaysian authorities arrested them in Sitiawan.
  • On 21 September, the House of Representatives took up and passed the following bills, according to summaries provided by the House Majority Whip’s office:
    • The “Effective Assistance in the Digital Era” (H.R. 5546) (Rep. Jeffries – Judiciary) This bill requires the Federal Bureau of Prisons to establish a system to exempt from monitoring any privileged electronic communications between incarcerated individuals and their attorneys or legal representatives.
    • The “Defending the Integrity of Voting Systems Act” (S. 1321) This bill broadens the definition of “protected computer” for purposes of computer fraud and abuse offenses under current law to include a computer that is part of a voting system.
    • The “Promoting Secure 5G Act of 2020” (H.R. 5698) This bill would make it U.S. policy within international financial institutions (IFIs) to finance only those 5G projects and other wireless technologies that include adequate security measures, in furtherance of national security aims to protect wireless networks from bad actors and foreign governments.
    • The “MEDIA Diversity Act of 2020” (H.R. 5567) This bill requires the FCC to consider market entry barriers for socially disadvantaged individuals in the communications marketplace.
    • The “Don’t Break Up the T-Band Act of 2020” as amended (H.R. 451) This bill repeals the requirement on the FCC to reallocate and auction the T-Band.  H.R. 451 also requires the FCC to adopt rules limiting the use of 9-1-1 fees by States or other taxing jurisdictions to (1) the support and implementation of 9-1-1 services and (2) operational expenses of public safety answering points.
    • It bears note that S. 1321 has already passed the Senate, and so the only election security bill to make it through both houses of Congress is off to the White House.

Further Reading

  • “Justice Department expected to brief state attorneys general this week on imminent Google antitrust lawsuit” By Tony Romm — The Washington Post; “Justice Dept. Case Against Google Is Said to Focus on Search Dominance” By Cecilia Kang, Katie Benner, Steve Lohr and Daisuke Wakabayashi — The New York Times; “Justice Department, states to meet in possible prelude to Google antitrust suit” By Leah Nylen — Politico. Tomorrow, the United States Department of Justice (DOJ) will outline its proposed antitrust case against Google to state attorneys general, almost all of whom are investigating Google on the same grounds. Reportedly, the DOJ case is focused on the company’s dominance of online searches, notably its arrangements to make Google the default search engine on iPhone and Android devices, and not on its advertising practices. If the DOJ goes down this road, its case will resemble the European Union’s (EU) 2018 case against Google over the same conduct, which resulted in EU residents being offered a choice of search engines on Android devices and a €4.34 billion fine. This development comes after articles earlier this month reporting that Attorney General William Barr has been pushing DOJ attorneys and investigators, against the wishes of many, to wrap up the investigation in time for a pre-election filing that would allow President Donald Trump to claim he is being tough on big technology companies. However, if this comes to pass, Democratic attorneys general may decline to join the suit and may instead bring their own action, also alleging violations in the online advertising realm that Google dominates. In this vein, Texas Attorney General Ken Paxton has been leading the state effort to investigate Google’s advertising business, which critics argue is anti-competitive. Also, according to DOJ attorneys who oppose what they see as Barr rushing the suit, this haste could lead to a weaker case Google may be able to defeat in court.
Of course, this news comes shortly after word leaked from the Federal Trade Commission (FTC) that its case against Facebook could be filed regarding its purchase of rivals WhatsApp and Instagram.
  • “Why Japan wants to join the Five Eyes intelligence network” By Alan Weedon — ABC News. This piece makes the case for why the United States, United Kingdom, Canada, Australia, and New Zealand may soon admit a new member to the Five Eyes: Japan. The case for admitting the first Asian country is that it is a stable democracy, a key ally in the Pacific, and a bulwark against the influence of the People’s Republic of China (PRC). It is really this last point that could carry the day, for the Five Eyes may need Japan’s expertise with the PRC and its technology to counter the latter’s growing ambitions.
  • “The next Supreme Court justice could play a major role in cybersecurity and privacy decisions” By Joseph Marks — The Washington Post. There is a range of cybersecurity and technology cases the Supreme Court will decide in the near future, so whether President Donald Trump gets to appoint Justice Ruth Bader Ginsburg’s successor will be very consequential for policy in these areas. For example, the court could rule on the Computer Fraud and Abuse Act for the first time, deciding whether researchers violate the law by probing for weak spots in systems. There are also pending Fourth Amendment and Fifth Amendment cases with technology implications: the former pertains to searches of devices by border guards, and the latter to self-incrimination vis-à-vis suspects being required to unlock devices.
  • “Facebook Says it Will Stop Operating in Europe If Regulators Don’t Back Down” By David Gilbert — VICE. In a filing in its case against Ireland’s Data Protection Commission (DPC), Facebook made veiled threats that, if the company is forced to stop transferring personal data to the United States, it may stop operating in the European Union altogether. Recently, the DPC informed Facebook that because Privacy Shield was struck down, it would need to stop transfers even though the company has been using standard contractual clauses, another method permitted in some cases under the General Data Protection Regulation. Despite Facebook’s representation, it seems a bit much that the company would leave the EU to any competitors looking to fill its shoes.
  • “As U.S. Increases Pressure, Iran Adheres to Toned-Down Approach” By Julian E. Barnes, David E. Sanger, Ronen Bergman and Lara Jakes — The New York Times. The Islamic Republic of Iran is showing remarkable restraint in its interactions with the United States in the face of continued punitive actions against Tehran, and this is true of its cyber operations as well. The country has made the calculus that any response could be used by President Donald Trump to great effect in closing the gap against front runner former Vice President Joe Biden. The same has been true of its cyber operations against Israel, which has reportedly conducted extensive operations inside Iran with considerable damage.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.