EU Regulators Settle Dispute Over Proper Punishment of Twitter For Breach

The EDPB uses its GDPR powers to manage a dispute between DPAs.

The European Data Protection Board (EDPB) concluded its first use of powers granted under the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679 of the European Parliament and of the Council) to resolve a dispute among EU regulators over how to apply the GDPR in punishing a violator. In this case, the EDPB had to referee how Twitter should be punished for a data breach arising from a bug in its Android app. Ireland’s Data Protection Commission (DPC) and unnamed concerned supervisory authorities (CSAs) disagreed about how Twitter should be fined for the GDPR breach, and so a previously unused article of the GDPR was triggered that put the EDPB in charge of resolving the dispute. The EDPB considered the objections raised by other EU authorities and found that the DPC needed to recalculate its proposed fine, which had been set at a maximum of $300,000 out of a possible $69.2 million. Thereafter, the DPC revised its decision and concluded that “an administrative fine of €450,000 on Twitter” is “an effective, proportionate and dissuasive measure.”

The DPC issued a revised decision incorporating the EDPB’s ruling in the case, which arose from a glitch that changed a person’s protected tweets to unprotected. Twitter users may protect their tweets, meaning only certain people, usually just followers, can see this content. However, a bug in Twitter’s Android app thwarted users’ decisions to protect their tweets, as the DPC explained:

The bug that resulted in this data breach meant that, if a user operating an Android device changed the email address associated with that Twitter account, their tweets became unprotected and consequently were accessible to the wider public without the user’s knowledge.

The DPC said this breach occurred between September 2017 and January 2019 and affected 88,726 EU and European Economic Area (EEA) users. On 8 January 2019, Twitter alerted the DPC, triggering an investigation. Twitter revealed:

On 26 December 2018, we received a bug report through our bug bounty program that if a Twitter user with a protected account, using Twitter for Android, changed their email address the bug would result in their account being unprotected.

Article 33(1) of the GDPR requires breaches to be reported to a DPA within 72 hours in most cases:

In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the notification to the supervisory authority is not made within 72 hours, it shall be accompanied by reasons for the delay.

However, Twitter conceded it had not reported the breach within the 72-hour window, explaining:

The severity of the issue – and that it was reportable – was not appreciated until 3 January 2019 at which point Twitter’s incident response process was put into action.
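To make the 72-hour arithmetic concrete, here is a minimal sketch in Python checking the timeline the DPC recites, taking awareness as 3 January 2019 and notification as 8 January 2019; the function and variable names are mine, not drawn from any decision or statute.

```python
from datetime import datetime, timedelta

# Article 33(1) GDPR: notify the competent supervisory authority without
# undue delay and, where feasible, within 72 hours of becoming aware.
WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest notification time consistent with the 72-hour window."""
    return aware_at + WINDOW

# Timeline from the DPC's account (dates only; exact hours are unknown).
aware_at = datetime(2019, 1, 3)     # severity of the bug appreciated
notified_at = datetime(2019, 1, 8)  # DPC notified

deadline = notification_deadline(aware_at)
print(f"Deadline: {deadline:%d %B %Y}")   # 06 January 2019
print(f"Late: {notified_at > deadline}")  # True, by roughly 2 days
```

On these dates, the notification landed roughly two days past the deadline, which is the delay at the heart of the Article 33(1) finding.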

Additionally, Article 33(5) would become relevant during the DPC investigation:

The controller shall document any personal data breaches, comprising the facts relating to the personal data breach, its effects and the remedial action taken. That documentation shall enable the supervisory authority to verify compliance with this Article.

Consequently, Twitter, as the controller, had a responsibility to document all the relevant facts about the data breach and to report it within 72 hours of becoming aware of it, subject to a range of exceptions.

Shortly thereafter, the DPC named itself the lead supervisory authority (LSA), investigated, and reached its proposed decision in late April, submitting the draft to the other concerned supervisory authorities. This is where the need for the EDPB to step in began.

Irish Data Protection Commissioner Helen Dixon explained the scope of the subsequent investigation:

  1. Whether Twitter International Company (TIC) complied with its obligations, in accordance with Article 33(1) GDPR, to notify the Commission of the Breach without undue delay and, where feasible, not later than 72 hours after having become aware of it; and
  2. Whether TIC complied with its obligation under Article 33(5) to document the Breach.

Dixon found that TIC did not comply with Article 33(1) and found unpersuasive TIC’s main claim that, because Twitter, Inc., its processor under EU law, did not alert TIC in a timely fashion, it did not need to meet the 72-hour window. Moreover, Dixon found TIC did not meet its Article 33(5) obligation to document the breach in a way that would allow the DPC to verify its compliance with Article 33. However, the size of the fine became the issue requiring the EDPB to step in because the Austrian Supervisory Authority (Österreichische Datenschutzbehörde), the Hamburg Supervisory Authority (Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit), and the Italian Supervisory Authority (Garante per la protezione dei dati personali) made “relevant and reasoned” objections.

Per the GDPR, the EDPB intervened. Article 60 of the GDPR provides if a CSA “expresses a relevant and reasoned objection to the draft decision [of the LSA], the lead supervisory authority shall, if it does not follow the relevant and reasoned objection or is of the opinion that the objection is not relevant or reasoned, submit the matter to the consistency mechanism.” Article 65 also provides that where “a supervisory authority concerned has raised a relevant and reasoned objection to a draft decision of the lead authority or the lead authority has rejected such an objection as being not relevant or reasoned,” then the EDPB must step in and work towards a final binding decision. This process was established so that enforcement of the GDPR would be uniform throughout the EU and to forestall the possibility that one DPA or a small group of DPAs would construe the data protection regime in ways contrary to its intent. As it is, there have already been allegations that some DPAs have been ineffective or lenient towards alleged offenders.
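Schematically, the process reads like a small state machine. The sketch below is an illustrative simplification of the Article 60/65 flow just described, not a faithful rendering of the procedure, and all names in it are mine.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DraftDecision:
    lsa: str                                       # lead supervisory authority
    rros: List[str] = field(default_factory=list)  # relevant and reasoned objections

def consistency_mechanism(draft: DraftDecision, lsa_follows_rros: bool) -> str:
    """Simplified model of the Article 60/65 GDPR dispute flow."""
    if not draft.rros:
        return f"{draft.lsa} adopts its draft decision."
    if lsa_follows_rros:
        return f"{draft.lsa} revises the draft and recirculates it to the CSAs."
    # Article 65: rejected RROs trigger a binding EDPB decision, which the
    # LSA must implement in a final decision within one month.
    return "EDPB adopts a binding Article 65 decision for the LSA to implement."

# The Twitter case: the DPC did not follow RROs on, among other things, the fine.
print(consistency_mechanism(DraftDecision("Irish DPC", ["fine too low"]),
                            lsa_follows_rros=False))
```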

In its mid-November statement, the EDPB said it “adopted by a 2/3 majority of its members its first dispute resolution decision on the basis of Art. 65 GDPR.” The EDPB stated:

The Irish supervisory authority (SA) issued the draft decision following an own-volition inquiry and investigations into Twitter International Company, after the company notified the Irish SA of a personal data breach on 8 January 2019. In May 2020, the Irish SA shared its draft decision with the concerned supervisory authorities (CSAs) in accordance with Art. 60 (3) GDPR. The CSAs then had four weeks to submit their relevant and reasoned objections (RROs). Among others, the CSAs issued RROs on the infringements of the GDPR identified by the lead supervisory authority (LSA), the role of Twitter International Company as the (sole) data controller, and the quantification of the proposed fine.

It appears from the EDPB’s statement that other DPAs/SAs had objected to the size of the fine (which can be as high as 2% of annual revenue), how Twitter violated the GDPR, and Twitter’s culpability, which turns on whether it was the sole controller of the personal data or whether other controllers might also be held responsible.

According to the DPC, the EDPB ultimately decided that

…the [DPC] is required to re-assess the elements it relies upon to calculate the amount of the fixed fine to be imposed on TIC, and to amend its Draft Decision by increasing the level of the fine in order to ensure it fulfils its purpose as a corrective measure and meets the requirements of effectiveness, dissuasiveness and proportionality established by Article 83(1) GDPR and taking into account the criteria of Article 83(2) GDPR.

Dixon went back and reasoned through the breach and compliance. She stressed that the GDPR infringements were largely separate from the substance of the breach, which is why the administrative fine was low. Nonetheless, Dixon reexamined the evidence in light of the EDPB’s decision and concluded in relevant part:

  • I therefore consider that the nature of the obligations arising under Article 33(1) and Article 33(5) are such that, compliance is central to the overall functioning of the supervision and enforcement regime performed by supervisory authorities in relation to both the specific issue of personal data breaches but also the identification and assessment of wider issues of non-compliance by controllers. As such, non-compliance with these obligations has serious consequences in that it risks undermining the effective exercise by supervisory authorities of their functions under the GDPR. With regard to the nature of the specific infringements in these circumstances, it is clear, having regard to the foregoing, that in the circumstances of this case, the delayed notification under Article 33(1) inevitably delayed the Commission’s assessment of the Breach. With regard to Article 33(5), the deficiencies in the “documenting” of the Breach by TIC impacted on the Commission’s overall efficient assessment of the Breach, necessitating the raising of multiple queries concerning the facts and sequencing surrounding the notification of the Breach.
  • Accordingly, having regard to the potential for damage to data subjects caused by the delayed notification to the Commission (which I have set out above in the context of Article 83(2)(a)), the corollary of this is that any category of personal data could have been affected by the delayed notification. Whilst, as stated above, there was no direct evidence of damage, at the same time, it cannot be definitively said that there was no damage to data subjects or no affected categories of personal data.

Dixon also recalculated the fine, which she noted was capped at the higher of €10 million or 2% of annual worldwide revenue, after once again turning aside TIC’s argument that it is independent of Twitter, Inc. for purposes of determining a fine. Dixon determined the appropriate administrative fine would be about $500,000; Twitter’s worldwide revenue was $3.46 billion in 2019, meaning a maximum penalty of $69.2 million. Dixon explained:

Having regard to all of the foregoing, and, in particular, having had due regard to all of the factors which I am required to consider under Articles 83(2)(a) to (k), as applicable, and in the interests of effectiveness, proportionality and deterrence, and in light of the re-assessment of the elements I have implemented and documented above in accordance with the EDPB Decision, I have decided to impose an administrative fine of $500,000, which equates (in my estimation for this purpose) to €450,000. In deciding to impose a fine in this amount, I have had regard to the previous range of the fine, set out in the Draft Decision (of $150,000 – $300,000), and to the binding direction in the EDPB Decision, at paragraph 207 thereof, that the level of the fine should be increased “..in order to ensure it fulfils its purpose as a corrective measure and meets the requirements of effectiveness, dissuasiveness and proportionality established by Article 83(1) GDPR and taking into account the criteria of Article 83(2) GDPR.”
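For context, the ceiling Dixon references is simple arithmetic under Article 83(4) GDPR: the higher of €10 million or 2% of total worldwide annual turnover. Here is a minimal sketch, treating the decision’s dollar figures as given even though the statute is denominated in euro; the function name is mine.

```python
# Article 83(4) GDPR: fines for Article 33 infringements are capped at the
# higher of EUR 10 million or 2% of total worldwide annual turnover.
def article_83_4_cap(worldwide_turnover: float) -> float:
    return max(10_000_000, 0.02 * worldwide_turnover)

turnover = 3_460_000_000  # Twitter's 2019 worldwide revenue, per the decision
fine = 500_000            # administrative fine imposed (~EUR 450,000)

cap = article_83_4_cap(turnover)
print(f"Statutory maximum: ${cap:,.0f}")                # $69,200,000
print(f"Imposed fine: ${fine:,} ({fine / cap:.1%} of the maximum)")  # 0.7%
```

The imposed fine thus sits at well under 1% of the statutory ceiling, which is consistent with Dixon’s view that the infringements were procedural rather than going to the substance of the breach.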

In its Article 65 decision, the EDPB judged the various objections to the DPC’s proposed decision against Article 4(24) of the GDPR:

‘relevant and reasoned objection’ means an objection to a draft decision as to whether there is an infringement of this Regulation, or whether envisaged action in relation to the controller or processor complies with this Regulation, which clearly demonstrates the significance of the risks posed by the draft decision as regards the fundamental rights and freedoms of data subjects and, where applicable, the free flow of personal data within the Union;

The EDPB ultimately decided “the fine proposed in the Draft Decision is too low and therefore does not fulfil its purpose as a corrective measure, in particular it does not meet the requirements of Article 83(1) GDPR of being effective, dissuasive and proportionate.” The EDPB directed the DPC “to re-assess the elements it relies upon to calculate the amount of the fixed fine to be imposed on TIC so as to ensure it is appropriate to the facts of the case.” However, the EDPB turned aside a number of other objections raised by EU DPAs as failing to meet the standard of review in Article 4(24):

  • the competence of the LSA;
  • the qualification of the roles of TIC and Twitter, Inc., respectively;
  • the infringements of the GDPR identified by the LSA;
  • the existence of possible additional (or alternative) infringements of the GDPR;
  • the lack of a reprimand.

However, the EDPB stressed:

Regarding the objections deemed not to meet the requirements stipulated by Art 4(24) GDPR, the EDPB does not take any position on the merit of any substantial issues raised by these objections. The EDPB reiterates that its current decision is without any prejudice to any assessments the EDPB may be called upon to make in other cases, including with the same parties, taking into account the contents of the relevant draft decision and the objections raised by the CSAs.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

EDPB Concludes First Use of Powers To Resolve Differences Between DPAs in Twitter Enforcement Action

The EDPB announces, but does not release, its decision on the dispute between SAs in the EU over the appropriate punishment for Twitter’s data breaches.

The European Data Protection Board (EDPB) has used its powers under the General Data Protection Regulation (GDPR) for the first time to resolve a dispute between data protection authorities (DPA) in the European Union (EU) over an enforcement action. Unidentified DPAs had objected to the proposed action Ireland’s Data Protection Commission (DPC) had circulated, obligating the EDPB to utilize its Article 65 powers to craft a resolution to the disputed part of the action. The enforcement action concerned Twitter data breaches in 2018 and 2019. Now, the DPC has a month to craft a decision on the basis of the EDPB decision unless the DPC challenges the decision in the Court of Justice of the European Union (CJEU).

The DPC submitted its draft decision on the Twitter breach to other DPAs in May in accordance with Article 60 of the GDPR. The DPC stated “[t]he draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR” (i.e., the provisions pertaining to data breach notification and documentation). The DPC further explained:

  • This draft decision is one of a number of significant developments in DPC inquiries into “big tech” companies this week. Deputy Commissioner Graham Doyle has confirmed that: “In addition to submitting this draft decision to other EU supervisory authorities, we have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their final submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also for Article 60 purposes. The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.”
  • The DPC has also completed the investigation phase of a complaint-based inquiry which focuses on Facebook Ireland’s obligations to establish a lawful basis for personal data processing. This inquiry is now in the decision-making phase at the DPC.

In its statement this week, the EDPB said it “adopted by a 2/3 majority of its members its first dispute resolution decision on the basis of Art. 65 GDPR.” The EDPB stated:

The Irish supervisory authority (SA) issued the draft decision following an own-volition inquiry and investigations into Twitter International Company, after the company notified the Irish SA of a personal data breach on 8 January 2019. In May 2020, the Irish SA shared its draft decision with the concerned supervisory authorities (CSAs) in accordance with Art. 60 (3) GDPR. The CSAs then had four weeks to submit their relevant and reasoned objections (RROs). Among others, the CSAs issued RROs on the infringements of the GDPR identified by the lead supervisory authority (LSA), the role of Twitter International Company as the (sole) data controller, and the quantification of the proposed fine.

It appears from the EDPB’s statement that other DPAs/SAs had objected to the size of the fine (which can be as high as 2% of annual revenue for these infringements), how Twitter violated the GDPR, and Twitter’s culpability, which turns on whether it was the sole controller of the personal data or whether other controllers might also be held responsible.

The EDPB asserted:

The Irish SA shall adopt its final decision on the basis of the EDPB decision, which will be addressed to the controller, without undue delay and at the latest one month after the EDPB has notified its decision. The LSA and CSAs shall notify the EDPB of the date the final decision was notified to the controller. Following this notification, the EDPB will publish its decision on its website.

The EDPB also published FAQs on the Article 65 procedure.

More recently, the EDPB issued draft guidance construing a key term in the GDPR mechanism designed to guide and coordinate investigations that cross borders in the European Union (EU). An LSA is supposed to consider “relevant and reasoned objections” to its draft decisions submitted by CSAs. If an LSA rejects such feedback, then the GDPR action gets kicked over to the EDPB. However, since this has happened only once, the EDPB thought it appropriate to define the term so all EU DPAs would understand which objections are relevant and reasoned.

The EDPB explained that the guidance “aims at establishing a common understanding of the notion of the terms ‘relevant and reasoned’, including what should be considered when assessing whether an objection ‘clearly demonstrates the significance of the risks posed by the draft decision.’” The EDPB stated that because “[t]he unfamiliarity surrounding ‘what constitutes relevant and reasoned objection’ has the potential to create misunderstandings and inconsistent applications by the supervisory authorities,” the “EU legislator (sic) suggested that the EDPB should issue guidelines on this concept (end of Recital 124 GDPR).”

Article 60 of the GDPR provides if a CSA “expresses a relevant and reasoned objection to the draft decision [of the LSA], the lead supervisory authority shall, if it does not follow the relevant and reasoned objection or is of the opinion that the objection is not relevant or reasoned, submit the matter to the consistency mechanism.” Article 65 also provides that where “a supervisory authority concerned has raised a relevant and reasoned objection to a draft decision of the lead authority or the lead authority has rejected such an objection as being not relevant or reasoned,” then the EDPB must step in and work towards a final binding decision. This process was established so that enforcement of the GDPR would be uniform throughout the EU and to forestall the possibility that one DPA or a small group of DPAs would construe the data protection regime in ways contrary to its intent. As it is, there have already been allegations that some DPAs have been ineffective or lenient towards alleged offenders.

Further Reading, Other Developments, and Coming Events (3 November)

Further Reading

  • “How Facebook and Twitter plan to handle election day disinformation” By Sam Dean — The Los Angeles Times; “How Big Tech is planning for election night” By Heather Kelly — The Washington Post; and “What to Expect From Facebook, Twitter and YouTube on Election Day” By Mike Isaac, Kate Conger and Daisuke Wakabayashi — The New York Times. Social media platforms have prepared for today and will use a variety of measures to combat lies, disinformation, and misinformation from sources foreign and domestic. Incidentally, my read is that these tech companies are more worried, as measured by resource allocation, about problematic domestic content.
  • “QAnon received earlier boost from Russian accounts on Twitter, archives show” By Joseph Menn — Reuters. Researchers have delved into Twitter’s archive of taken-down or suspended Russian accounts and now claim that the Russian Federation was pushing and amplifying the QAnon conspiracy in the United States (U.S.) as early as November 2017. This revised view of Russian involvement differs from conclusions reached in August that Russia played a small but significant role in the proliferation of the conspiracy.
  • “Facial recognition used to identify Lafayette Square protester accused of assault” By Justin Jouvenal and Spencer S. Hsu — The Washington Post. When police were firing pepper balls and rolling gas canisters towards protestors in Lafayette Park in Washington, D.C. on 1 June 2020, a protestor was filmed assaulting two officers. A little-known, never publicly revealed facial recognition platform available to many federal, state, and local law enforcement agencies in the Capital area returned a match with the footage. Now, a man stands accused of crimes during a Black Lives Matter march, and civil liberties and privacy advocates are objecting to the use of the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) on a number of grounds, including the demonstrated weakness of these systems in accurately identifying people of color, the fact that it has been used but not disclosed to a number of defendants, and the potential chilling effect it will have on people attending protests. Law enforcement officials claim there are strict privacy and process safeguards and that an identification alone cannot be used as the basis of an arrest.
  • “CISA’s political independence from Trump will be an Election Day asset” By Joseph Marks — The Washington Post. The United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has established its bona fides with nearly all stakeholders inside and outside the U.S. government since its creation in 2018. Many credit its Director, Christopher Krebs, and many also praise the agency for conveying information and tips about cyber and other threats without incurring the ire of a White House and President antithetical to any hint of Russian hacking or interference in U.S. elections. However, today and the next few days may prove the agency’s mettle, as it will be in uncharted territory trying to tamp down fear about unfounded rumors and looking to discredit misinformation and disinformation in close to real time.
  • “Trump allies, largely unconstrained by Facebook’s rules against repeated falsehoods, cement pre-election dominance” By Isaac Stanley-Becker and Elizabeth Dwoskin — The Washington Post. Yet another article showing that if Facebook has a bias, it is against liberal viewpoints and content, for conservative material is allowed even if it breaks the rules of the platform. Current and former employees and an analysis support this finding. The Trump family has been among those who have benefitted from the kid gloves used by the company regarding posts that would have likely drawn consequences for other users. However, smaller conservative outlets or less prominent conservative figures have faced consequences for content that violates Facebook’s standards.

Other Developments

  • The European Union’s (EU) Executive Vice-President Margrethe Vestager made a speech titled “Building trust in technology,” in which she previewed one long-awaited draft EU law on technology and another to address antitrust and anti-competitive practices of large technology companies. Vestager stated “in just a few weeks, we plan to publish two draft laws that will help to create a more trustworthy digital world.” Both drafts are expected on 2 December and represent key pieces of the new EU leadership’s Digital Strategy, the bloc’s initiative to update EU laws to account for changes in technology since the beginning of the century. The Digital Services Act will address and reform the legal treatment of both online commerce and online content. The draft Digital Markets Act would give the European Commission (EC) more tools to combat the same competition and market dominance issues the United States (U.S.), Australia, and other nations are starting to tackle vis-à-vis companies like Apple, Amazon, Facebook, and Google.
    • Regarding the Digital Services Act, Vestager asserted:
      • One part of those rules will be the Digital Services Act, which will update the E-Commerce Directive, and require digital services to take more responsibility for dealing with illegal content and dangerous products. 
      • Those services will have to put clear and simple procedures in place, to deal with notifications about illegal material on their platforms. They’ll have to make it harder for dodgy traders to use the platform, by checking sellers’ IDs before letting them on the platform. And they’ll also have to provide simple ways for users to complain, if they think their material should not have been removed – protecting the right of legitimate companies to do business, and the right of individuals to freedom of expression. 
      • Those new responsibilities will help to keep Europeans just as safe online as they are in the physical world. They’ll protect legitimate businesses, which follow the rules, from being undercut by others who sell cheap, dangerous products. And by applying the same standards, all over Europe, they’ll make sure every European can rely on the same protection – and that digital businesses of all sizes can easily operate throughout Europe, without having to meet the costs of complying with different rules in different EU countries. 
      • The new rules will also require digital services – especially the biggest platforms – to be open about the way they shape the digital world that we see. They’ll have to report on what they’ve done to take down illegal material. They’ll have to tell us how they decide what information and products to recommend to us, and which ones to hide – and give us the ability to influence those decisions, instead of simply having them made for us. And they’ll have to tell us who’s paying for the ads that we see, and why we’ve been targeted by a certain ad.  
      • But to really give people trust in the digital world, having the right rules in place isn’t enough. People also need to know that those rules really work – that even the biggest companies will actually do what they’re supposed to. And to make sure that happens, there’s no substitute for effective enforcement.  
      • And effective enforcement is a vital part of the draft laws that we’ll propose in December. For instance, the Digital Services Act will improve the way national authorities cooperate, to make sure the rules are properly enforced, throughout the EU. Our proposal won’t change the fundamental principle, that digital services should be regulated by their home country. But it will set up a permanent system of cooperation that will help those regulators work more effectively, to protect consumers all across Europe. And it will give the EU power to step in, when we need to, to enforce the rules against some very large platforms.
    • Vestager also discussed the competition law the EC hopes to enact:
      • So, to keep our markets fair and open to competition, it’s vital that we have the right toolkit in place. And that’s what the second set of rules we’re proposing – what we call the Digital Markets Act – is for. 
      • That proposal will have two pillars. The first of those pillars will be a clear list of dos and don’ts for big digital gatekeepers, based on our experience with the sorts of behaviour that can stop markets working well. 
      • For instance, the decisions that gatekeepers take, about how to rank different companies in search results, can make or break businesses in dozens of markets that depend on the platform. And if platforms also compete in those markets themselves, they can use their position as player and referee to help their own services succeed, at the expense of their rivals. For instance, gatekeepers might manipulate the way that they rank different businesses, to show their own services more visibly than their rivals’. So, the proposal that we’ll put forward in a few weeks’ time will aim to ban this particular type of unfair self-preferencing. 
      • We also know that these companies can collect a lot of data about companies that rely on their platform – data which they can then use, to compete against those very same companies in other markets. That can seriously damage fairness in these markets – which is why our proposal aims to ban big gatekeepers from misusing their business users’ data in that way. 
      • These clear dos and don’ts will allow us to act much faster and more effectively, to tackle behaviour that we know can stop markets working well. But we also need to be ready for new situations, where digitisation creates deep, structural failures in the way our markets work.  
      • Once a digital company gets to a certain size, with the big network of users and the huge collections of data that brings, it can be very hard for anyone else to compete – even if they develop a much better service. So, we face a constant risk that big companies will succeed in pushing markets to a tipping point, sending them on a rapid, unstoppable slide towards monopoly – and creating yet another powerful gatekeeper. 
      • One way to deal with risks like this would be to stop those emerging gatekeepers from locking users into their platform. That could mean, for instance, that those gatekeepers would have to make it easier for users to switch platform, or to use more than one service. That would keep the market open for competition, by making it easier for innovative rivals to compete. But right now, we don’t have the power to take this sort of action when we see these risks arising. 
      • It can also be difficult to deal with large companies which use the power of their platforms again and again, to take over one related market after another. We can deal with that issue with a series of cases – but the risk is that we’ll always find ourselves playing catch-up, while platforms move from market to market, using the same strategies to drive out their rivals. 
      • The risk, though, is that we’ll have a fragmented system, with different rules in different EU countries. That would make it hard to tackle huge platforms that operate throughout Europe, and to deal with other problems that you find in digital markets in many EU countries. And it would mean that businesses and consumers across Europe can’t all rely on the same protection. 
      • That’s why the second pillar of the Digital Markets Act would put a harmonised market investigation framework in place across the single market, giving us the power to tackle market failures like this in digital markets, and stop new ones from emerging. That would give us a harmonised set of rules that would allow us to investigate certain structural problems in digital markets. And if necessary, we could take action to make these markets contestable and competitive.
  • California Governor Gavin Newsom (D) vetoed one of the bills sent to him last week to amend the “California Consumer Privacy Act” (CCPA) (AB 375). In mid-October, he signed two bills amending the CCPA, one of which will take effect only if the “California Privacy Rights Act” (CPRA) (Ballot Initiative 24) is not enacted by voters in the November election. Moreover, if the CPRA is enacted via ballot, the two signed statutes would likely become dead letters, as the CCPA and its amendments would become moot once the CPRA becomes effective in 2023.
    • Newsom vetoed AB 1138, which would have amended the recently effective “Parent’s Accountability and Child Protection Act” to bar those under the age of 13 from opening a social media account unless the platform obtained explicit consent from their parents. Moreover, “[t]he bill would deem a business to have actual knowledge of a consumer’s age if it willfully disregards the consumer’s age.” In his veto message, Newsom explained that while he agrees with the spirit of the legislation, it would create unnecessary confusion and overlap with federal law without any increase in protection for children. He signaled an openness to working with the legislature on this issue, however.
    • Newsom signed AB 1281, which extends from 1 January 2021 to 1 January 2022 the CCPA’s carveout for information collected in the employment context. The CCPA “exempts from its provisions certain information collected by a business about a natural person in the course of the natural person acting as a job applicant, employee, owner, director, officer, medical staff member, or contractor, as specified…[and also] exempts from specified provisions personal information reflecting a written or verbal communication or a transaction between the business and the consumer, if the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from that company, partnership, sole proprietorship, nonprofit, or government agency.” AB 1281 “shall become operative only” if the CPRA is not approved by voters.
    • Newsom also signed AB 713 that would:
      • except from the CCPA information that was deidentified in accordance with specified federal law, or was derived from medical information, protected health information, individually identifiable health information, or identifiable private information, consistent with specified federal policy, as provided.
      • except from the CCPA a business associate of a covered entity, as defined, that is governed by federal privacy, security, and data breach notification rules if the business associate maintains, uses, and discloses patient information in accordance with specified requirements.
      • except information that is collected for, used in, or disclosed in research, as defined.
      • additionally prohibit a business or other person from reidentifying information that was deidentified, unless a specified exception is met.
      • beginning January 1, 2021, require a contract for the sale or license of deidentified information to include specified provisions relating to the prohibition of reidentification, as provided.
  • The Government Accountability Office (GAO) published a report, requested by the chair of a House committee, on a Federal Communications Commission (FCC) program that subsidizes broadband providers serving high-cost areas, typically rural or hard-to-reach places. Even though the FCC provided nearly $5 billion in 2019 through the Universal Service Fund’s (USF) high-cost program, the agency lacks the data to determine whether the program’s goals are being met. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) had asked for the assessment, which could well form the basis for future changes to how the FCC funds and sets conditions for use of said funds for broadband.
    • The GAO noted:
      • According to stakeholders GAO interviewed, FCC faces three key challenges to accomplish its high-cost program performance goals: (1) accuracy of FCC’s broadband deployment data, (2) broadband availability on tribal lands, and (3) maintaining existing fixed-voice infrastructure and attaining universal mobile service. For example, although FCC adopted a more precise method of collecting and verifying broadband availability data, stakeholders expressed concern the revised data would remain inaccurate if carriers continue to overstate broadband coverage for marketing and competitive reasons. Overstating coverage impairs FCC’s efforts to promote universal voice and broadband since an area can become ineligible for high-cost support if a carrier reports that service already exists in that area. FCC has also taken actions to address the lack of broadband availability on tribal lands, such as making some spectrum available to tribes for wireless broadband in rural areas. However, tribal stakeholders told GAO that some tribes are unable to secure funding to deploy the infrastructure necessary to make use of spectrum for wireless broadband purposes.
    • The GAO concluded:
      • The effective use of USF resources is critical given the importance of ensuring that Americans have access to voice and broadband services. Overall, FCC’s high-cost program performance goals and measures are often not conducive to providing FCC with high-quality performance information. The performance goals lack measurable or quantifiable bases upon which numeric targets can be set. Further, while FCC’s performance measures met most of the key attributes of effective measures, the measures often lacked linkage, clarity, and objectivity. Without such clarity and specific desired outcomes, FCC lacks performance information that could help FCC make better-informed decisions about how to allocate the program’s resources to meet ongoing and emerging challenges to ensuring universal access to voice and broadband services. Furthermore, the absence of public reporting of this information leaves stakeholders, including Congress, uncertain about the program’s effectiveness to deploy these services.
    • The GAO made the following recommendations:
      • The Chairman of FCC should revise the high-cost performance goals so that they are measurable and quantifiable.
      • The Chairman of FCC should ensure high-cost performance measures align with key attributes of successful performance measures, including ensuring that measures clearly link with performance goals and have specified targets.
      • The Chairman of FCC should ensure the high-cost performance measure for the goal of minimizing the universal service contribution burden on consumers and businesses takes into account user-fee leading practices, such as equity and sustainability considerations.
      • The Chairman of FCC should publicly and periodically report on the progress it has made for its high-cost program’s performance goals, for example, by including relevant performance information in its Annual Broadband Deployment Report or the USF Monitoring Report.
  • An Irish court has ruled that the Data Protection Commission (DPC) must cover the legal fees of Maximillian Schrems for litigating and winning the case in which the Court of Justice of the European Union (CJEU) struck down the adequacy decision underpinning the European Union-United States Privacy Shield (aka Schrems II). It has been estimated that Schrems’ legal costs in the case, which began with a complaint against Facebook, could total between €2 and €5 million, the low end of which would represent 10% of the DPC’s budget this year. In addition, the DPC has itself spent almost €3 million litigating the case.
  • House Transportation and Infrastructure Chair Peter DeFazio (D-OR) and Ranking Member Sam Graves (R-MO) asked that the Government Accountability Office (GAO) review the implications of the Federal Communications Commission’s (FCC) 2019 proposal to allow half of the Safety Band (i.e. 5.9 GHz spectrum) to be used for wireless communications even though the United States (U.S.) Department of Transportation weighed in against this decision. DeFazio and Graves are asking for a study of “the safety implications of sharing more than half of the 5.9 GHz spectrum band” that is currently “reserved exclusively for transportation safety purposes.” This portion of the spectrum was set aside in 1999 for connected vehicle technologies, according to DeFazio and Graves, and given the rise of automobile technology that requires spectrum, they think the FCC’s proposed proceeding “may significantly affect the efficacy of current and future applications of vehicle safety technologies.” DeFazio and Graves asked the GAO to review the following:
    • What is the current status of 5.9 GHz wireless transportation technologies, both domestically and internationally?
    • What are the views of industry and other stakeholders on the potential uses of these technologies, including scenarios where the 5.9 GHz spectrum is shared among different applications?
    • In a scenario in which the 5.9 GHz spectrum band is shared among different applications, what effect would this have on the efficacy, including safety, of current wireless transportation technology deployments?
    • What options exist for automakers and highway management agencies to ensure the safe deployment of connected vehicle technologies in a scenario in which the 5.9 GHz spectrum band is shared among different applications?
    • How, if at all, have DOT and FCC assessed current and future needs for 5.9 GHz spectrum in transportation applications? 
    • How, if at all, have DOT and FCC coordinated to develop a federal spectrum policy for connected vehicles?
  • The Harvard Law School’s Cyberlaw Clinic and the Electronic Frontier Foundation (EFF) have published “A Researcher’s Guide to Some Legal Risks of Security Research,” which is timely given the case before the Supreme Court of the United States that will determine the scope of the “Computer Fraud and Abuse Act” (CFAA). The Cyberlaw Clinic and EFF stated:
    • Just last month, over 75 prominent security researchers signed a letter urging the Supreme Court not to interpret the CFAA, the federal anti-hacking/computer crime statute, in a way that would criminalize swaths of valuable security research.
    • In the report, the Cyberlaw Clinic and EFF explained:
      • This guide overviews broad areas of potential legal risk related to security research, and the types of security research likely implicated. We hope it will serve as a useful starting point for concerned researchers and others. While the guide covers what we see as the main areas of legal risk for security researchers, it is not exhaustive.
    • In Van Buren v. United States, the Court will consider the question of “[w]hether a person who is authorized to access information on a computer for certain purposes violates Section 1030(a)(2) of the Computer Fraud and Abuse Act if he accesses the same information for an improper purpose.” In this case, the defendant was a police officer who took money as part of a sting operation to illegally use his access to Georgia’s database of license plates to obtain information about a person. The Eleventh Circuit Court of Appeals denied his appeal of his conviction under the CFAA per a previous ruling in that circuit that “a defendant violates the CFAA not only when he obtains information that he has no “rightful[]” authorization whatsoever to acquire, but also when he obtains information “for a nonbusiness purpose.”
    • As the Cyberlaw Clinic and EFF noted in the report:
      • Currently, courts are divided about whether the statute’s prohibition on “exceed[ing] authorized access” applies to people who have authorization to access data (for some purpose), but then access it for a (different) purpose that violates a contractual terms of service or computer use policy. Some courts have found that the CFAA covers activities that do not circumvent any technical access barriers, from making fake profiles that violate Facebook’s terms of service to running a search on a government database without permission. Other courts, disagreeing with the preceding approach, have held that a verbal or contractual prohibition alone cannot render access punishable under the CFAA.
  • The United Kingdom’s Institution of Engineering and Technology (IET) and the National Cyber Security Centre (NCSC) have published “Code of Practice: Cyber Security and Safety” to help engineers develop technological and digital products with both safety and cybersecurity in mind. The IET and NCSC explained:
    • The aim of this Code is to help organizations accountable for safety-related systems manage cyber security vulnerabilities that could lead to hazards. It does this by setting out principles that when applied will improve the interaction between the disciplines of system safety and cyber security, which have historically been addressed as distinct activities.
    • The objectives for organizations that follow this Code are: (a) to manage the risk to safety from any insecurity of digital technology; (b) to be able to provide assurance that this risk is acceptable; and (c) where there is a genuine conflict between the controls used to achieve safety and security objectives, to ensure that these are properly resolved by the appropriate authority.
    • It should be noted that the focus of this Code is on the safety and security of digital technology, but it is recognized that addressing safety and cyber security risk is not just a technological issue: it involves people, process, physical and technological aspects. It should also be noted that whilst digital technology is central to the focus, other technologies can be used in pursuit of a cyber attack. For example, the study of analogue emissions (electromagnetic, audio, etc.) may give away information being digitally processed and thus analogue interfaces could be used to provide an attack surface.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.

Further Reading, Other Developments, and Coming Events (27 October)

Further Reading

  •  “The Police Can Probably Break Into Your Phone” By Jack Nicas — The New York Times. So, about “Going Dark.” Turns out nations and law enforcement officials have either oversold the barrier that default end-to-end encryption on phones creates or did not understand the access that police were already getting to many encrypted phones. This piece is based in large part on the Upturn report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. The point is made that the issue is really that encryption makes it harder to get into phones and is quite pricey. If an iPhone or Android user stores data in the cloud, then getting access is not a problem. But having it encrypted on a phone requires serious technological means to access. But, this article points to another facet of the Upturn report: police have very little in the way of policy or guidance on how to handle data in ways that respect privacy and possibly even the laws of their jurisdictions.
  • “Pornhub Doesn’t Care” By Samantha Cole and Emanuel Maiberg — Vice. One of the world’s biggest pornography sites seems to have a poor track record at taking down non-consensual pornography. A number of women were duped into filming pornography they were told would not be distributed online or only in certain jurisdictions. The proprietor lied, and now many of them are faced with having these clips turn up again and again on Pornhub and other sites even if they use digital fingerprinting of such videos. These technological screening methods can be easily defeated. Worse still, Pornhub, and its parent company, Mindgeek, did not start responding to requests from these women to have their videos taken down until they began litigating against the man who had masterminded the filming of the non-consensual videos.
  • “‘Machines set loose to slaughter’: the dangerous rise of military AI” By Frank Pasquale — The Guardian. This long read lays out some of the possibilities that may come to pass if artificial intelligence is used to create autonomous weapons or robots. Most of the outcomes sound like science fiction, but then who could have foreseen a fleet of drones in the Middle East operated by the United States.
  • “How The Epoch Times Created a Giant Influence Machine” By Kevin Roose — The New York Times. An interesting tale of how a fringe publication may be on its way to being one of the biggest purveyors of right wing material online.
  • “Schools Clamored for Seesaw’s App. That Was Good News, and Bad News.” By Stephanie Clifford — The New York Times. The pandemic has led to the rise of another educational app.

Other Developments

  • The United Kingdom’s (UK) Parliamentary Business, Energy and Industrial Strategy (BEIS) Committee wrote a number of companies, including technology firms, “to seek answers in relation to the Committee’s inquiry exploring the extent to which businesses in the UK are exploiting the forced labour of Uyghur in the Xinjiang region of China” according to the committee’s press release. The committee wrote to Amazon and TikTok because, as the chair of the committee, Member of Parliament Nusrat Ghani, asserted:
    • The Australian Strategic Policy Institute’s (ASPI) ‘Uyghur’s for Sale’ report names 82 foreign and Chinese companies directly or indirectly benefiting from the exploitation of Uyghur workers in Xinjiang. The companies listed in the Australian Strategic Policy Institute’s report span industries including the fashion, retail and information technology sectors. On the BEIS Committee, we are determined to ask prominent businesses operating in Britain in these sectors what they are doing to ensure their profits are not on the back of forced labour in China. These businesses are trusted by many British consumers and I hope they will repay this faith by coming forward to answer these questions and also take up the opportunity to give evidence to the Business Committee in public.
    • In its March report, the ASPI argued:
      • The Chinese government has facilitated the mass transfer of Uyghur and other ethnic minority citizens from the far west region of Xinjiang to factories across the country. Under conditions that strongly suggest forced labour, Uyghurs are working in factories that are in the supply chains of at least 82 well-known global brands in the technology, clothing and automotive sectors, including Apple, BMW, Gap, Huawei, Nike, Samsung, Sony and Volkswagen.
      • This report estimates that more than 80,000 Uyghurs were transferred out of Xinjiang to work in factories across China between 2017 and 2019, and some of them were sent directly from detention camps. The estimated figure is conservative and the actual figure is likely to be far higher. In factories far away from home, they typically live in segregated dormitories, undergo organised Mandarin and ideological training outside working hours, are subject to constant surveillance, and are forbidden from participating in religious observances. Numerous sources, including government documents, show that transferred workers are assigned minders and have limited freedom of movement.
      • China has attracted international condemnation for its network of extrajudicial ‘re-education camps’ in Xinjiang. This report exposes a new phase in China’s social re-engineering campaign targeting minority citizens, revealing new evidence that some factories across China are using forced Uyghur labour under a state-sponsored labour transfer scheme that is tainting the global supply chain.
  • A group of nations worked together to find and apprehend individuals accused of laundering ill-gotten funds for cyber criminals. The United States (U.S.) indicted the accused. Europol explained:
    • An unprecedented international law enforcement operation involving 16 countries has resulted in the arrest of 20 individuals suspected of belonging to the QQAAZZ criminal network which attempted to launder tens of millions of euros on behalf of the world’s foremost cybercriminals. 
    • Some 40 house searches were carried out in Latvia, Bulgaria, the United Kingdom, Spain and Italy, with criminal proceedings initiated against those arrested by the United States, Portugal, the United Kingdom and Spain. The largest number of searches in the case were carried out in Latvia in operations led by the Latvian State Police (Latvijas Valsts Policija). Bitcoin mining equipment was also seized in Bulgaria.
    • This international sweep follows a complex investigation led by the Portuguese Judicial Police (Polícia Judiciária) together with the United States Attorney Office for the Western District of Pennsylvania and the FBI’s Pittsburgh Field Office, alongside the Spanish National Police (Policia Nacional) and the regional Catalan police (Mossos D’esquadra) and law enforcement authorities from the United Kingdom, Latvia, Bulgaria, Georgia, Italy, Germany, Switzerland, Poland, Czech Republic, Australia, Sweden, Austria and Belgium with coordination efforts led by Europol. 
    • The U.S. Department of Justice (DOJ) claimed:
      • Comprised of several layers of members from Latvia, Georgia, Bulgaria, Romania, and Belgium, among other countries, the QQAAZZ network opened and maintained hundreds of corporate and personal bank accounts at financial institutions throughout the world to receive money from cybercriminals who stole it from bank accounts of victims.  The funds were then transferred to other QQAAZZ-controlled bank accounts and sometimes converted to cryptocurrency using “tumbling” services designed to hide the original source of the funds.  After taking a fee of up to 40 to 50 percent, QQAAZZ returned the balance of the stolen funds to their cybercriminal clientele.  
      • The QQAAZZ members secured these bank accounts by using both legitimate and fraudulent Polish and Bulgarian identification documents to create and register dozens of shell companies which conducted no legitimate business activity. Using these registration documents, the QQAAZZ members then opened corporate bank accounts in the names of the shell companies at numerous financial institutions around the world, thereby generating hundreds of QQAAZZ-controlled bank accounts available to receive stolen funds from cyber thieves.
      • QQAAZZ advertised its services as a “global, complicit bank drops service” on Russian-speaking online cybercriminal forums where cybercriminals gather to offer or seek specialized skills or services needed to engage in a variety of cybercriminal activities. The criminal gangs behind some of the world’s most harmful malware families (e.g.: Dridex, Trickbot, GozNym, etc.) are among those cybercriminal groups that benefited from the services provided by QQAAZZ. 
  • Representatives Anna Eshoo (D-CA) and Bobby L. Rush (D-IL), and Senator Ron Wyden (D-OR) wrote the Privacy and Civil Liberties Oversight Board (PCLOB) asking that the privacy watchdog “investigate the federal government’s surveillance of recent protests, the legal authorities for that surveillance, the government’s adherence to required procedures in using surveillance equipment, and the chilling effect that federal government surveillance has had on protesters.”
    • They argued:
      • Many agencies have or may have surveilled protesters, according to press reports and agency documents.
        • The Customs and Border Protection (CBP) deployed various aircraft – including AS350 helicopters, a Cessna single-engine airplane, and Predator drones – that logged 270 hours of aerial surveillance footage over 15 cities, including Minneapolis, New York City, Buffalo, Philadelphia, Detroit, and Washington, D.C.
        • The FBI flew Cessna 560 aircraft over protests in Washington, D.C., in June, and reporting shows that the FBI has previously equipped such aircraft with ‘dirt boxes,’ equipment that can collect cell phone location data, along with sophisticated cameras for long-range, persistent video surveillance.
        • In addition to specific allegations of protester surveillance, the Drug Enforcement Agency (DEA) was granted broad authority to “conduct covert surveillance” over protesters responding to the murder of Mr. Floyd.
    • Eshoo, Rush, and Wyden claimed:
      • Recent surveillance of protests involves serious threats to liberty and requires a thorough investigation. We ask that PCLOB thoroughly investigate, including by holding public hearings, the following issues and issue a public report about its findings:
        • (1) Whether and to what extent federal government agencies surveilled protests by collecting or processing personal information of protesters.
        • (2) What legal authorities agencies are using as the basis for surveillance, an unclassified enumeration of claimed statutory or other authorities, and whether agencies followed required procedures for using surveillance equipment, acquiring and processing personal data, receiving appropriate approvals, and providing needed transparency.
        • (3) To what extent the threat of surveillance has a chilling effect on protests.
  • Ireland’s Data Protection Commission (DPC) has opened two inquiries into Facebook and Instagram for potential violations of the General Data Protection Regulation (GDPR) and Ireland’s Data Protection Act 2018. This is not the only regulatory action the DPC has pending against Facebook, which is headquartered in Dublin. The DPC is reportedly trying to stop Facebook from transferring personal data out of the European Union (EU) and into the United States (U.S.) using standard contractual clauses (SCC) in light of the EU-U.S. Privacy Shield being struck down. The DPC stated “Instagram is a social media platform which is used widely by children in Ireland and across Europe…[and] [t]he DPC has been actively monitoring complaints received from individuals in this area and has identified potential concerns in relation to the processing of children’s personal data on Instagram which require further examination.”
    • The DPC explained the two inquiries:
      • This Inquiry will assess Facebook’s reliance on certain legal bases for its processing of children’s personal data on the Instagram platform. The DPC will set out to establish whether Facebook has a legal basis for the ongoing processing of children’s personal data and if it employs adequate protections and/or restrictions on the Instagram platform for such children. This Inquiry will also consider whether Facebook meets its obligations as a data controller with regard to transparency requirements in its provision of Instagram to children.
      • This Inquiry will focus on Instagram profile and account settings and the appropriateness of these settings for children. Amongst other matters, this Inquiry will explore Facebook’s adherence with the requirements in the GDPR in respect to Data Protection by Design and Default and specifically in relation to Facebook’s responsibility to protect the data protection rights of children as vulnerable persons.
  • The United States’ National Institute of Standards and Technology (NIST) issued a draft version of the Cybersecurity Profile for the Responsible Use of Positioning, Navigation and Timing (PNT) Services (NISTIR 8323). Comments are due by 23 November.
    • NIST explained:
      • NIST has developed this PNT cybersecurity profile to help organizations identify systems, networks, and assets dependent on PNT services; identify appropriate PNT services; detect the disruption and manipulation of PNT services; and manage the associated risks to the systems, networks, and assets dependent on PNT services. This profile will help organizations make deliberate, risk-informed decisions on their use of PNT services.
    • In its June request for information (RFI), NIST explained “Executive Order 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services, was issued on February 12, 2020 and seeks to protect the national and economic security of the United States from disruptions to PNT services that are vital to the functioning of technology and infrastructure, including the electrical power grid, communications infrastructure and mobile devices, all modes of transportation, precision agriculture, weather forecasting, and emergency response.” The EO directed NIST “to develop and make available, to at least the appropriate agencies and private sector users, PNT profiles.” A toy illustration of the kind of cross-checking involved in detecting PNT disruption appears below.
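    • One of the profile’s goals, detecting disruption or manipulation of PNT services, often comes down to cross-checking a timing or positioning source against independent references. The following is a minimal, hypothetical sketch of that idea only; the source names, simulated readings, and threshold are illustrative assumptions, not drawn from NISTIR 8323.

```python
# Toy cross-check of timing sources: a source that diverges from the median
# of independent references by more than a threshold is flagged for review.
# All names, readings, and the threshold are illustrative assumptions.
from statistics import median

def flag_suspect_sources(readings, threshold_s=0.5):
    """Return the sources whose reported offset diverges from the median."""
    mid = median(readings.values())
    return [name for name, offset in readings.items() if abs(offset - mid) > threshold_s]

# Simulated offsets (in seconds) of each source from a local reference clock.
readings = {"gps_receiver": 12.0, "ntp_pool": 0.02, "holdover_oscillator": 0.00}
print(flag_suspect_sources(readings))  # ['gps_receiver'] -> possible spoofing or disruption
```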

Coming Events

  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s nomination to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (22 October)

Further Reading

  • “A deepfake porn Telegram bot is being used to abuse thousands of women” By Matt Burgess — WIRED UK. A bot set loose on Telegram can take pictures of women, and apparently teenagers as well, and “remove” their clothing, rendering fake nude images of people who never took nude photographs. This seems to be the next iteration in deepfake porn, a problem that will surely get worse until governments legislate against it and technology companies have incentives to locate and take down such material.
  • “The Facebook-Twitter-Trump Wars Are Actually About Something Else” By Charlie Warzel — The New York Times. This piece makes the case that there are no easy fixes for American democracy or for misinformation on social media platforms.
  • “Facebook says it rejected 2.2m ads for breaking political campaigning rules” — Agence France-Presse. Facebook’s Vice President of Global Affairs and Communications Nick Clegg said the social media giant is employing artificial intelligence and humans to find and remove political advertisements that violate policy, in order to avoid a repeat of 2016, when untrue information and misinformation played roles in both Brexit and the election of Donald Trump as President of the United States.
  • “Huawei Fallout—Game-Changing New China Threat Strikes At Apple And Samsung” By Zak Doffman — Forbes. Smartphone manufacturers from the People’s Republic of China (PRC) appear ready to step into the projected void caused by the United States (U.S.) strangling off Huawei’s access to chips. Xiaomi and Oppo have already seen sales surge worldwide and are poised to pick up where Huawei is being forced to leave off, perhaps demonstrating the limits of U.S. power to blunt the rise of PRC technology companies.
  • “As Local News Dies, a Pay-for-Play Network Rises in Its Place” By Davey Alba and Jack Nicas — The New York Times. With the decline and demise of many local media outlets in the United States, new groups are stepping into the void, and some are politically minded but not transparent about their biases. The organization uncovered in this article is nakedly Republican and is running and planting articles at both legitimate and artificial news sites for pay. Sometimes conservative donors pay, sometimes campaigns do. Democrats are engaged in the same activity but apparently to a lesser extent. These sorts of activities will only further erode faith in the U.S. media.
  • “Forget Antitrust Laws. To Limit Tech, Some Say a New Regulator Is Needed.” By Steve Lohr — The New York Times. This piece argues that antitrust enforcement actions are plodding, tending to take years to finish. Consequently, this body of law is inadequate to the task of addressing the market dominance of big technology companies. Instead, a new regulatory body is needed, one more nimble than antitrust enforcement, along the lines of those regulating the financial services industry. Given the problems in that industry with respect to regulation, this may not be the best model.
  • “‘Do Not Track’ Is Back, and This Time It Might Work” By Gilad Edelman — WIRED. Looking to utilize the requirement in the “California Consumer Privacy Act” (CCPA) (AB 375) that regulated entities respect and effectuate a one-time opt-out mechanism, a group of entities has come together to build and roll out the Global Privacy Control. In theory, users could install software implementing this technical specification on their phones and computers, switch it on once, and then all websites would be on notice regarding that person’s privacy preferences (see the sketch after this list). Such a means would address the problem turned up by Consumer Reports’ recent report on the difficulty of trying to opt out of having one’s personal information sold.
  • “EU countries sound alarm about growing anti-5G movement” By Laurens Cerulus — Politico. 15 European Union (EU) nations wrote the European Commission (EC) warning that the nascent anti-5G movement, borne of conspiracy thinking and misinformation, threatens the EU’s position vis-à-vis the United States (U.S.) and the People’s Republic of China (PRC). There have been more than 200 documented arson attacks in the EU, with the most having occurred in the United Kingdom, France, and the Netherlands. These nations called for a more muscular, more forceful debunking of the lies and misinformation being spread about 5G.
  • “Security firms call Microsoft’s effort to disrupt botnet to protect against election interference ineffective” By Jay Greene — The Washington Post. Microsoft seemingly acted alongside the United States (U.S.) Cyber Command to take down and impair the operation of Trickbot, but now cybersecurity experts are questioning how effective Microsoft’s efforts really were. Researchers have shown the Russian-operated Trickbot has already stood up operations and has dispersed across servers around the world, showing how difficult it is to address some cyber threats.
  • “Governments around the globe find ways to abuse Facebook” By Sara Fischer and Ashley Gold — Axios. This piece puts a different spin than a recent BuzzFeed News article on the challenges Facebook faces in countries around the world, especially those whose governments ruthlessly use the platform to spread lies and misinformation. The new article paints Facebook as a well-meaning company being taken advantage of, while the other portrays a company callous about content moderation except in nations where lapses cause it political problems, such as the United States, the European Union, and other western democracies.
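On the Global Privacy Control item above: the draft specification conveys the opt-out preference as an HTTP request header, Sec-GPC: 1 (with a corresponding JavaScript property). The sketch below is a rough, hypothetical illustration of how a server might honor the signal; the function name and policy fields are assumptions for illustration, not part of the specification.

```python
# Minimal sketch of honoring a Global Privacy Control (GPC) signal server-side.
# The draft GPC specification defines the "Sec-GPC: 1" request header; the
# function name and policy fields below are illustrative assumptions.

def privacy_settings_for(headers):
    # HTTP header names are case-insensitive; normalize before the lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    opted_out = normalized.get("sec-gpc") == "1"
    return {
        "sell_personal_information": not opted_out,  # CCPA-style "do not sell"
        "share_for_targeted_ads": not opted_out,
    }

print(privacy_settings_for({"Sec-GPC": "1"}))         # opt-out honored
print(privacy_settings_for({"Accept": "text/html"}))  # no signal sent; defaults apply
```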

Other Developments

  • The United States (U.S.) Department of Justice’s (DOJ) Cyber-Digital Task Force (Task Force) issued “Cryptocurrency: An Enforcement Framework,” that “provides a comprehensive overview of the emerging threats and enforcement challenges associated with the increasing prevalence and use of cryptocurrency; details the important relationships that the Department of Justice has built with regulatory and enforcement partners both within the United States government and around the world; and outlines the Department’s response strategies.” The Task Force noted “[t]his document does not contain any new binding legal requirements not otherwise already imposed by statute or regulation.” The Task Force summarized the report:
    • [I]n Part I, the Framework provides a detailed threat overview, cataloging the three categories into which most illicit uses of cryptocurrency typically fall: (1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; and (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself. 
    • Part II explores the various legal and regulatory tools at the government’s disposal to confront the threats posed by cryptocurrency’s illicit uses, and highlights the strong and growing partnership between the Department of Justice and the Securities and Exchange Commission, the Commodity Futures [Trading] Commission, and agencies within the Department of the Treasury, among others, to enforce federal law in the cryptocurrency space.
    • Finally, the Enforcement Framework concludes in Part III with a discussion of the ongoing challenges the government faces in cryptocurrency enforcement—particularly with respect to business models (employed by certain cryptocurrency exchanges, platforms, kiosks, and casinos), and to activity (like “mixing” and “tumbling,” “chain hopping,” and certain instances of jurisdictional arbitrage) that may facilitate criminal activity.    
  • The White House’s Office of Science and Technology Policy (OSTP) has launched a new website for the United States’ (U.S.) quantum initiative and released a report titled “Quantum Frontiers: Report On Community Input To The Nation’s Strategy For Quantum Information Science.” The Quantum Initiative flows from the “National Quantum Initiative Act” (P.L. 115-368) “to provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” The OSTP explained that the report “outlines eight frontiers that contain core problems with fundamental questions confronting quantum information science (QIS) today:
    • Expanding Opportunities for Quantum Technologies to Benefit Society
    • Building the Discipline of Quantum Engineering
    • Targeting Materials Science for Quantum Technologies
    • Exploring Quantum Mechanics through Quantum Simulations
    • Harnessing Quantum Information Technology for Precision Measurements
    • Generating and Distributing Quantum Entanglement for New Applications
    • Characterizing and Mitigating Quantum Errors
    • Understanding the Universe through Quantum Information
    • OSTP asserted “[t]hese frontier areas, identified by the QIS research community, are priorities for the government, private sector, and academia to explore in order to drive breakthrough R&D.”
  • The New York Department of Financial Services (NYDFS) published its report on the July 2020 Twitter hack during which a team of hackers took over a number of high-profile accounts (e.g. Barack Obama, Kim Kardashian West, Jeff Bezos, and Elon Musk) in order to perpetrate a cryptocurrency scam. The NYDFS has jurisdiction over cryptocurrencies and the companies dealing in them in New York. The NYDFS found that the hackers used the most basic of means to gain the access needed to take over accounts. The NYDFS explained:
    • Given that Twitter is a publicly traded, $37 billion technology company, it was surprising how easily the Hackers were able to penetrate Twitter’s network and gain access to internal tools allowing them to take over any Twitter user’s account. Indeed, the Hackers used basic techniques more akin to those of a traditional scam artist: phone calls where they pretended to be from Twitter’s Information Technology department. The extraordinary access the Hackers obtained with this simple technique underscores Twitter’s cybersecurity vulnerability and the potential for devastating consequences. Notably, the Twitter Hack did not involve any of the high-tech or sophisticated techniques often used in cyberattacks – no malware, no exploits, and no backdoors.
    • The implications of the Twitter Hack extend far beyond this garden-variety fraud. There are well-documented examples of social media being used to manipulate markets and interfere with elections, often with the simple use of a single compromised account or a group of fake accounts. In the hands of a dangerous adversary, the same access obtained by the Hackers – the ability to take control of any Twitter user’s account – could cause even greater harm.
    • The Twitter Hack demonstrates the need for strong cybersecurity to curb the potential weaponization of major social media companies. But our public institutions have not caught up to the new challenges posed by social media. While policymakers focus on antitrust and content moderation problems with large social media companies, their cybersecurity is also critical. In other industries that are deemed critical infrastructure, such as telecommunications, utilities, and finance, we have established regulators and regulations to ensure that the public interest is protected. With respect to cybersecurity, that is what is needed for large, systemically important social media companies.
    • The NYDFS recommended cybersecurity measures that cryptocurrency companies in New York should implement to avoid similar hacks, pointing to its own cybersecurity regulations that already bind its regulated entities. The NYDFS also called for a national regulator to address the lack of a dedicated regulator of Twitter and other massive social media platforms. The NYDFS asserted:
      • Social media companies currently have no dedicated regulator. They are subject to the same general oversight applicable to other companies. For instance, the SEC’s regulations for all public companies apply to public social media companies, and antitrust and related laws and regulations enforced by the Department of Justice and the FTC apply to social media companies as they do to all companies. Social media companies are also subject to generally applicable laws, such as the California Consumer Privacy Act and the New York SHIELD Act. The European Union’s General Data Protection Regulation, which regulates the storage and use of personal data, also applies to social media entities doing business in Europe.
      • But there are no regulators that have the authority to uniformly regulate social media platforms that operate over the internet, and to address the cybersecurity concerns identified in this Report. That regulatory vacuum must be filled.
      • A useful starting point is to create a “systemically important” designation for large social media companies, like the designation for critically important bank and non-bank financial institutions. In the wake of the 2007-08 financial crisis, Congress established a new regulatory framework for financial institutions that posed a systemic threat to the financial system of the United States. An institution could be designated as a Systemically Important Financial Institution (“SIFI”) “where the failure of or a disruption to the functioning of a financial market utility or the conduct of a payment, clearing, or settlement activity could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States.”
      • The risks posed by social media to our consumers, economy, and democracy are no less grave than the risks posed by large financial institutions. The scale and reach of these companies, combined with the ability of adversarial actors who can manipulate these systems, require a similarly bold and assertive regulatory approach.
      • The designation of an institution as a SIFI is made by the Financial Stability Oversight Council (“FSOC”), which Congress established to “identify risks to the financial stability of the United States” and to provide enhanced supervision of SIFIs. The FSOC also “monitors regulatory gaps and overlaps to identify emerging sources of systemic risk.” In determining whether a financial institution is systemically important, the FSOC considers numerous factors including: the effect that a failure or disruption to an institution would have on financial markets and the broader financial system; the nature of the institution’s transactions and relationships; the nature, concentration, interconnectedness, and mix of the institution’s activities; and the degree to which the institution is regulated.
      • An analogue to the FSOC should be established to identify systemically important social media companies. This new Oversight Council should evaluate the reach and impact of social media companies, as well as the society-wide consequences of a social media platform’s misuse, to determine which companies they should designate as systemically important. Once designated, those companies should be subject to enhanced regulation, such as through the provision of “stress tests” to evaluate the social media companies’ susceptibility to key threats, including cyberattacks and election interference.
      • Finally, the success of such oversight will depend on the establishment of an expert agency to oversee designated social media companies. Systemically important financial companies designated by the FSOC are overseen by the Federal Reserve Board, which has a long-established and deep expertise in banking and financial market stability. A regulator for systemically important social media would likewise need deep expertise in areas such as technology, cybersecurity, and disinformation. This expert regulator could take various forms; it could be a completely new agency or could reside within an established agency or at an existing regulator.
  • The Government Accountability Office (GAO) evaluated how well the Trump Administration has been implementing the “Open, Public, Electronic and Necessary Government Data Act of 2018” (OPEN Government Data Act) (P.L. 115-435). As the GAO explained, this statute “requires federal agencies to publish their information as open data using standardized, nonproprietary formats, making data available to the public open by default, unless otherwise exempt…[and] codifies and expands on existing federal open data policy including the Office of Management and Budget’s (OMB) memorandum M-13-13 (M-13-13), Open Data Policy—Managing Information as an Asset.”
    • The GAO stated:
      • To continue moving forward with open government data, the issuance of OMB implementation guidance should help agencies develop comprehensive inventories of their data assets, prioritize data assets for publication, and decide which data assets should or should not be made available to the public.
      • Implementation of this statutory requirement is critical to agencies’ full implementation and compliance with the act. In the absence of this guidance, agencies, particularly agencies that have not previously been subject to open data policies, could fall behind in meeting their statutory timeline for implementing comprehensive data inventories.
      • It is also important for OMB to meet its statutory responsibility to biennially report on agencies’ performance and compliance with the OPEN Government Data Act and to coordinate with General Services Administration (GSA) to improve the quality and availability of agency performance data that could inform this reporting. Access to this information could inform Congress and the public on agencies’ progress in opening their data and complying with statutory requirements. This information could also help agencies assess their progress and improve compliance with the act.
    • The GAO made three recommendations:
      • The Director of OMB should comply with its statutory requirement to issue implementation guidance to agencies to develop and maintain comprehensive data inventories. (Recommendation 1)
      • The Director of OMB should comply with the statutory requirement to electronically publish a report on agencies’ performance and compliance with the OPEN Government Data Act. (Recommendation 2)
      • The Director of OMB, in collaboration with the Administrator of GSA, should establish policy to ensure the routine identification and correction of errors in electronically published performance information. (Recommendation 3)
  • The United States’ (U.S.) National Security Agency (NSA) issued a cybersecurity advisory titled “Chinese State-Sponsored Actors Exploit Publicly Known Vulnerabilities,” that “provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.” The NSA recommended a number of mitigations generally for U.S. entities, including:
    • Keep systems and products updated and patched as soon as possible after patches are released.
    • Expect that data stolen or modified (including credentials, accounts, and software) before the device was patched will not be alleviated by patching, making password changes and reviews of accounts a good practice.
    • Disable external management capabilities and set up an out-of-band management network.
    • Block obsolete or unused protocols at the network edge and disable them in device configurations.
    • Isolate Internet-facing services in a network Demilitarized Zone (DMZ) to reduce the exposure of the internal network.
    • Enable robust logging of Internet-facing services and monitor the logs for signs of compromise.
    • The NSA then proceeded to recommend specific fixes. (A toy illustration of two of the general mitigations above appears after this item.)
    • The NSA provided this policy backdrop:
      • One of the greatest threats to U.S. National Security Systems (NSS), the U.S. Defense Industrial Base (DIB), and Department of Defense (DOD) information networks is Chinese state-sponsored malicious cyber activity. These networks often undergo a full array of tactics and techniques used by Chinese state-sponsored cyber actors to exploit computer networks of interest that hold sensitive intellectual property, economic, political, and military information. Since these techniques include exploitation of publicly known vulnerabilities, it is critical that network defenders prioritize patching and mitigation efforts.
      • The same process for planning the exploitation of a computer network by any sophisticated cyber actor is used by Chinese state-sponsored hackers. They often first identify a target, gather technical information on the target, identify any vulnerabilities associated with the target, develop or re-use an exploit for those vulnerabilities, and then launch their exploitation operation.
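    • As a purely illustrative sketch of two of the general mitigations above (blocking obsolete protocols and isolating Internet-facing services in a DMZ), a defender might audit a service inventory along the following lines. The inventory schema and the protocol list are assumptions for illustration, not taken from the advisory.

```python
# Toy audit of a service inventory against two of the advisory's mitigations.
# The inventory schema and the protocol list are illustrative assumptions.

OBSOLETE_PROTOCOLS = {"SSLv3", "TLSv1.0", "SMBv1", "Telnet"}

services = [
    {"name": "file-share", "protocol": "SMBv1", "internet_facing": False, "zone": "internal"},
    {"name": "web-portal", "protocol": "TLSv1.2", "internet_facing": True, "zone": "internal"},
    {"name": "mail-gw", "protocol": "TLSv1.2", "internet_facing": True, "zone": "dmz"},
]

for svc in services:
    if svc["protocol"] in OBSOLETE_PROTOCOLS:
        print(f"{svc['name']}: disable or block obsolete protocol {svc['protocol']}")
    if svc["internet_facing"] and svc["zone"] != "dmz":
        print(f"{svc['name']}: isolate this Internet-facing service in a DMZ")
```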
  • Belgium’s data protection authority (DPA) (Autorité de protection des données in French or Gegevensbeschermingsautoriteit in Dutch) (APD-GBA) has reportedly found that the Transparency & Consent Framework (TCF) developed by the Interactive Advertising Bureau (IAB) violates the General Data Protection Regulation (GDPR). The Real-Time Bidding (RTB) system used for online behavioral advertising allegedly transmits the personal information of European Union residents without their consent even before a popup appears on their screen asking for consent. The APD-GBA is the lead DPA in the EU investigating the RTB and will likely now circulate its findings and recommendations to other EU DPAs before any enforcement commences.
  • None Of Your Business (noyb) announced “[t]he Irish High Court has granted leave for a “Judicial Review” against the Irish Data Protection Commission (DPC) today…[and] [t]he legal action by noyb aims to swiftly implement the [Court of Justice of the European Union (CJEU)] Decision” prohibiting Facebook’s transfer of personal data from the European Union to the United States (U.S.). Last month, after the DPC directed Facebook to stop transferring the personal data of EU citizens to the U.S., the company filed suit in the Irish High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge.
    • noyb further asserted:
      • Instead of making a decision in the pending procedure, the DPC has started a second, new investigation into the same subject matter (“Parallel Procedure”), as widely reported (see original reporting by the WSJ). No logical reasons for the Parallel Procedure were given, but the DPC has maintained that Mr Schrems will not be heard in this second case, as he is not a party in this Parallel Procedure. This Parallel Procedure was criticised by Facebook publicly and instantly blocked by a Judicial Review by Facebook (see report by Reuters).
      • Today’s Judicial Review by noyb is in many ways the counterpart to Facebook’s Judicial Review: While Facebook wants to block the second procedure by the DPC, noyb wants to move the original complaints procedure towards a decision.
      • Earlier this summer, the CJEU struck down the adequacy decision for the agreement between the EU and U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e. the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCC. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • Most likely with the aim of helping British chances for an adequacy decision from the European Union (EU), the United Kingdom’s Information Commissioner’s Office (ICO) published guidance that “discusses the right of access [under the General Data Protection Regulation (GDPR)] in detail.” The ICO explained the guidance “is aimed at data protection officers (DPOs) and those with specific data protection responsibilities in larger organisations…[but] does not specifically cover the right of access under Parts 3 and 4 of the Data Protection Act 2018.”
    • The ICO explained:
      • The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy of their personal data from you, as well as other supplementary information.
  • The Government Accountability Office (GAO) released the report the House Education and Labor Committee’s Ranking Member requested on the data security and data privacy practices of public schools. Representative Virginia Foxx (R-NC) asked the GAO “to review the security of K-12 students’ data. This report examines (1) what is known about recently reported K-12 cybersecurity incidents that compromised student data, and (2) the characteristics of school districts that experienced these incidents.” Strangely, the report did not include the GAO’s customary conclusions or recommendations. Nonetheless, the GAO found:
    • Ninety-nine student data breaches reported from July 1, 2016 through May 5, 2020 compromised the data of students in 287 school districts across the country, according to our analysis of K-12 Cybersecurity Resource Center (CRC) data (see fig. 3). Some breaches involved a single school district, while others involved multiple districts. For example, an attack on a vendor system in the 2019-2020 school year affected 135 districts. While information about the number of students affected was not available for every reported breach, examples show that some breaches affected thousands of students, for instance, when a cybercriminal accessed 14,000 current and former students’ personally identifiable information (PII) in one district.
    • The 99 reported student data breaches likely understate the number of breaches that occurred, for different reasons. Reported incidents sometimes do not include sufficient information to discern whether data were breached. We identified 15 additional incidents in our analysis of CRC data in which student data might have been compromised, but the available information was not definitive. In addition, breaches can go undetected for some time. In one example, the personal information of hundreds of thousands of current and former students in one district was publicly posted for 2 years before the breach was discovered.
    • The CRC identified 28 incidents involving videoconferences from April 1, 2020 through May 5, 2020, some of which disrupted learning and exposed students to harm. In one incident, 50 elementary school students were exposed to pornography during a virtual class. In another incident in a different district, high school students were targeted with hate speech during a class, resulting in the cancellation that day of all classes using the videoconferencing software. These incidents also raise concerns about the potential for violating students’ privacy. For example, one district is reported to have instructed teachers to record their class sessions. Teachers said that students’ full names were visible to anyone viewing the recording.
    • The GAO found gaps in the protection and enforcement of student privacy by the United States government:
      • [The Department of] Education is responsible for enforcing Family Educational Rights and Privacy Act (FERPA), which addresses the privacy of PII in student education records and applies to all schools that receive funds under an applicable program administered by Education. If parents or eligible students believe that their rights under FERPA have been violated, they may file a formal complaint with Education. In response, Education is required to take appropriate actions to enforce and deal with violations of FERPA. However, because the department’s authority under FERPA is directly related to the privacy of education records, Education’s security role is limited to incidents involving potential violations under FERPA. Further, FERPA amendments have not directly addressed educational technology use.
      • The “Children’s Online Privacy Protection Act” (COPPA) requires the Federal Trade Commission (FTC) to issue and enforce regulations concerning children’s privacy. The COPPA Rule, which took effect in 2000 and was later amended in 2013, requires operators of covered websites or online services that collect personal information from children under age 13 to provide notice and obtain parental consent, among other things. COPPA generally applies to the vendors who provide educational technology, rather than to schools directly. However, according to FTC guidance, schools can consent on behalf of parents to the collection of students’ personal information if such information is used for a school-authorized educational purpose and for no other commercial purpose.
  • Upturn, an advocacy organization that “advances equity and justice in the design, governance, and use of technology,” has released a report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. There have long been means and vendors available in the U.S. and abroad for breaking into phones despite the claims of a number of nations like the Five Eyes (U.S., the United Kingdom, Australia, Canada, and New Zealand) that default end-to-end encryption was a growing problem that allowed those preying on children and engaged in terrorism to go undetected. In terms of possible bias, Upturn “is supported by the Ford Foundation, the Open Society Foundations, the John D. and Catherine T. MacArthur Foundation, Luminate, the Patrick J. McGovern Foundation, and Democracy Fund.”
    • Upturn stated:
      • Every day, law enforcement agencies across the country search thousands of cellphones, typically incident to arrest. To search phones, law enforcement agencies use mobile device forensic tools (MDFTs), a powerful technology that allows police to extract a full copy of data from a cellphone — all emails, texts, photos, location, app data, and more — which can then be programmatically searched. As one expert puts it, with the amount of sensitive information stored on smartphones today, the tools provide a “window into the soul.”
      • This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.
    • Upturn argued:
      • Law enforcement use these tools to investigate not only cases involving major harm, but also for graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. Given how routine these searches are today, together with racist policing policies and practices, it’s more than likely that these technologies disparately affect and are used against communities of color.
      • We believe that MDFTs are simply too powerful in the hands of law enforcement and should not be used. But recognizing that MDFTs are already in widespread use across the country, we offer a set of preliminary recommendations that we believe can, in the short-term, help reduce the use of MDFTs. These include:
        • banning the use of consent searches of mobile devices,
        • abolishing the plain view exception for digital searches,
        • requiring easy-to-understand audit logs,
        • enacting robust data deletion and sealing requirements, and
        • requiring clear public logging of law enforcement use.

Coming Events

  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next- generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s nomination to be a Member of the Federal Communications Commission.


EDPB Releases Draft Guidelines On “Relevant And Reasoned Objection”

The EDPB seeks to define when one DPA’s criticism of another DPA’s draft GDPR decision must be heeded.

The European Data Protection Board (EDPB) has issued a draft of its construction of a key authority in the General Data Protection Regulation (GDPR) designed to guide and coordinate investigations that cross borders in the European Union (EU). A lead supervisory authority (LSA) is supposed to consider “relevant and reasoned objections” to draft decisions submitted by concerned supervisory authorities (CSA). If an LSA rejects such feedback, then the GDPR action gets kicked over to the EDPB. However, since this has only happened once, the EDPB thought it appropriate to define the term so all the EU data protection authorities (DPA) would understand what objections are relevant and reasoned.

The EDPB explained that the guidance “aims at establishing a common understanding of the notion of the terms “relevant and reasoned”, including what should be considered when assessing whether an objection “clearly demonstrates the significance of the risks posed by the draft decision.” The EDPB stated “[as] the unfamiliarity surrounding “what constitutes relevant and reasoned objection” has the potential to create misunderstandings and inconsistent applications by the supervisory authorities, the EU legislator (sic) suggested that the EDPB should issue guidelines on this concept (end of Recital 124 GDPR).”

Article 60 of the GDPR provides that if a CSA “expresses a relevant and reasoned objection to the draft decision [of the LSA], the lead supervisory authority shall, if it does not follow the relevant and reasoned objection or is of the opinion that the objection is not relevant or reasoned, submit the matter to the consistency mechanism.” Article 65 also provides that where “a supervisory authority concerned has raised a relevant and reasoned objection to a draft decision of the lead authority or the lead authority has rejected such an objection as being not relevant or reasoned,” then the EDPB must step in and work towards a final binding decision. This process was put in place so that enforcement of the GDPR would be uniform throughout the EU and to forestall the possibility that one DPA or a small group of DPAs would construe the data protection regime in ways contrary to its intention. As it is, there have already been allegations that some DPAs have been ineffective or lenient towards alleged offenders.

The EDPB stated in relevant part:

  • Article 4(24) GDPR defines “relevant and reasoned objection” as “an objection to a draft decision as to whether there is an infringement of this Regulation, or whether envisaged action in relation to the controller or processor complies with this Regulation, which clearly demonstrates the significance of the risks posed by the draft decision as regards the fundamental rights and freedoms of data subjects and, where applicable, the free flow of personal data within the Union”.
  • This concept serves as a threshold in situations where CSAs aim to object to a (revised) draft decision to be adopted by the LSA under Article 60 GDPR.
  • In order for the objection to be considered as “relevant”, there must be a direct connection between the objection and the draft decision at issue. More specifically, the objection needs to concern either whether there is an infringement of the GDPR or whether the envisaged action in relation to the controller or processor complies with the GDPR.
  • In order for the objection to be “reasoned”, it needs to include clarifications and arguments as to why an amendment of the decision is proposed (i.e. the alleged legal / factual mistakes of the draft decision). It also needs to demonstrate how the change would lead to a different conclusion as to whether there is an infringement of the GDPR or whether the envisaged action in relation to the controller or processor complies with the GDPR.

Of course, this guidance is released at a time when the EDPB is using these powers for the first time. Unnamed DPAs have objected to a proposed ruling by Ireland’s Data Protection Commission (DPC), the lead DPA investigating the 2018 and 2019 Twitter data breaches. Consequently, the disagreement has been handed off to the EDPB, and depending on how this matter is resolved, the body could decide Twitter’s punishment, including a possible fine of up to 4% of its worldwide revenue. What’s more, the DPC is the lead agency investigating Facebook’s WhatsApp and Instagram, among other large technology companies, and may have to relinquish those decisions as well if other DPAs disagree with the DPC’s proposed punishment for any wrongdoing.

The DPC submitted its draft decision on the Twitter breach to other DPAs in May. The DPC stated “[t]he draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR” (i.e. the provisions pertaining to data breach notification and documentation). The DPC further explained:

  • This draft decision is one of a number of significant developments in DPC inquiries into “big tech” companies this week. Deputy Commissioner Graham Doyle has confirmed that: “In addition to submitting this draft decision to other EU supervisory authorities, we have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their final submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also for Article 60 purposes. The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.”
  • The DPC has also completed the investigation phase of a complaint-based inquiry which focuses on Facebook Ireland’s obligations to establish a lawful basis for personal data processing. This inquiry is now in the decision-making phase at the DPC.

Under Article 65, now that the draft decision on Twitter has been handed over to the EDPB, the Board has a month to get two-thirds of its members to agree to a binding decision it may draft. If this is not achieved, then the Board has another two weeks to get a simple majority, and if this does not occur, then EDPB Chair Andrea Jelinek alone may decide. Consequently, it is possible the EDPB redrafts the DPC decision and tries to get buy-in from the DPAs that make up the Board to support a stronger punishment of Twitter. This voting cascade reduces to a simple decision rule, sketched below.
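The following is only a schematic restatement of the Article 65 cascade just described (a two-thirds majority within one month, then a simple majority within two further weeks, then the Chair alone); the function name and arguments are hypothetical, not drawn from the GDPR’s text.

```python
# Schematic restatement of the Article 65 voting cascade described above.
# Function name and arguments are illustrative assumptions, not GDPR text.

def article_65_adopted(votes_for, members, stage, chair_approves):
    if stage == "within_one_month":    # two-thirds majority required
        return 3 * votes_for >= 2 * members
    if stage == "extra_two_weeks":     # a simple majority then suffices
        return 2 * votes_for > members
    return chair_approves              # after both windows lapse, the Chair decides

print(article_65_adopted(20, 30, "within_one_month", False))  # True: two-thirds reached
print(article_65_adopted(16, 30, "extra_two_weeks", False))   # True: simple majority
```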


EDPB Steps Into Twitter Investigation

The body that consists of and oversees the EU’s DPAs will use its power under the GDPR to resolve a dispute between agencies over the punishment for Twitter’s data breaches.

The European Data Protection Board (EDPB) will soon have the opportunity to use a key power for the first time since its inception in order to resolve a dispute among data protection authorities (DPA) in the European Union (EU). Unnamed DPAs have objected to a proposed ruling by Ireland’s Data Protection Commission (DPC), the lead DPA investigating the 2018 and 2019 Twitter data breaches. Consequently, per the General Data Protection Regulation (GDPR), the disagreement has been handed off to the EDPB, and depending on how this matter is resolved, the body could decide Twitter’s punishment, including a possible fine of up to 4% of its worldwide revenue. What’s more, the DPC is the lead agency investigating Facebook’s WhatsApp and Instagram, among other large technology companies, and may have to relinquish those decisions as well if other DPAs disagree with the DPC’s proposed punishment for any wrongdoing.

The DPC submitted its draft decision on the Twitter breach to other DPAs in May in accordance with Article 60 of the GDPR. The DPC stated “[t]he draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR” (i.e. the provisions pertaining to data breach notification and documentation). The DPC further explained:

  • This draft decision is one of a number of significant developments in DPC inquiries into “big tech” companies this week. Deputy Commissioner Graham Doyle has confirmed that: “In addition to submitting this draft decision to other EU supervisory authorities, we have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their final submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also for Article 60 purposes. The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.”
  • The DPC has also completed the investigation phase of a complaint-based inquiry which focuses on Facebook Ireland’s obligations to establish a lawful basis for personal data processing. This inquiry is now in the decision-making phase at the DPC.

Article 65 of the GDPR provides that the EDPB will make a binding decision on an investigation where “a supervisory authority concerned has raised a relevant and reasoned objection to a draft decision of the lead authority or the lead authority has rejected such an objection as being not relevant or reasoned.” In this case, at least one DPA has raised an objection to the DPC’s draft decision, thus triggering Article 65. The EDPB then has a month to get two-thirds of its members to agree to a binding decision it may draft. If this is not achieved, then the Board has another two weeks to get a simple majority, and if this does not occur, then EDPB Chair Andrea Jelinek alone may decide. Consequently, it is possible the EDPB redrafts the DPC decision and tries to get buy-in from the DPAs that make up the Board to support a stronger punishment of Twitter.


Europe’s Highest Court Strikes Down Privacy Shield

The agreement that had been allowing US companies to transfer the personal data of EU residents to the US was found to be invalid under EU law. The EU’s highest court seemed to indicate that standard contractual clauses, a frequently used means of transferring data, may be acceptable.

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

In the second major ruling from the European Union (EU) this week, its highest court earlier today invalidated the agreement that has allowed multinational corporations and others to transfer the personal data of EU citizens to the United States (US) for commercial purposes since 2016. The court did not, however, find standard contractual clauses illegal, the means by which many such transfers occur. This is the second case an Austrian privacy activist has brought alleging that Facebook’s transfer of his personal data to the US violated European law because US law, especially its surveillance programs, afforded less protection and fewer rights. The first case resulted in the previous transfer agreement being found illegal, and this case has now ended in much the same outcome. The import of this ruling is not immediately clear.

Maximillian Schrems filed a complaint against Facebook with the Data Protection Commission (DPC) in 2013, alleging that the company’s transfer of his personal data violated his rights under EU law because of the mass US surveillance revealed by former National Security Agency (NSA) contractor Edward Snowden. Ultimately, this case resulted in a 2015 Court of Justice of the European Union (CJEU) ruling that invalidated the Safe Harbor agreement under which the personal data of EU residents was transferred to the US by commercial concerns. The EU and US executed a follow-on agreement, the EU-US Privacy Shield, that was designed to address some of the problems the CJEU turned up, and the US passed a law, the “Judicial Redress Act of 2015” (P.L. 114-126), to provide EU citizens a way to exercise their EU rights in US courts via the “Privacy Act of 1974.”

However, Schrems pressed on, challenging the legality of the European Commission’s sign-off on the Privacy Shield agreement (the adequacy decision issued in 2016) and also the use of standard contractual clauses (SCC) by companies for the transfer of personal data to the US. The European Data Protection Board (EDPB) explained in a recent decision on Denmark’s SCCs that

  • According to Article 28(3) General Data Protection Regulation (GDPR), the processing by a data processor shall be governed by a contract or other legal act under Union or Member State law that is binding on the processor with regard to the controller, setting out a set of specific aspects to regulate the contractual relationship between the parties. These include the subject-matter and duration of the processing, its nature and purpose, the type of personal data and categories of data subjects, among others.
  • Under Article 28(6) GDPR, without prejudice to an individual contract between the data controller and the data processor, the contract or the other legal act referred in paragraphs (3) and (4) of Article 28 GDPR may be based, wholly or in part on SCCs.

In a summary of its decision, the CJEU explained

The GDPR provides that the transfer of such data to a third country may, in principle, take place only if the third country in question ensures an adequate level of data protection. According to the GDPR, the Commission may find that a third country ensures, by reason of its domestic law or its international commitments, an adequate level of protection. In the absence of an adequacy decision, such transfer may take place only if the personal data exporter established in the EU has provided appropriate safeguards, which may arise, in particular, from standard data protection clauses adopted by the Commission, and if data subjects have enforceable rights and effective legal remedies. Furthermore, the GDPR details the conditions under which such a transfer may take place in the absence of an adequacy decision or appropriate safeguards.
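
The summary just quoted is, in effect, a decision cascade: an adequacy decision first, then appropriate safeguards coupled with enforceable rights and effective remedies, then the GDPR’s residual conditions (derogations). Restated schematically below as a reading aid only; the function and parameter names are invented.

```python
def transfer_permitted(adequacy_decision: bool,
                       appropriate_safeguards: bool,
                       enforceable_rights_and_remedies: bool,
                       derogation_applies: bool) -> bool:
    """Chapter V GDPR transfer logic as summarized by the CJEU: an
    adequacy decision suffices; failing that, appropriate safeguards
    (e.g., SCCs) plus enforceable rights and effective legal remedies;
    failing that, a narrow residual condition (derogation)."""
    if adequacy_decision:
        return True
    if appropriate_safeguards and enforceable_rights_and_remedies:
        return True
    return derogation_applies

# A post-Privacy Shield transfer to the US has no adequacy decision, so
# it stands or falls on safeguards plus enforceable rights and remedies.
print(transfer_permitted(False, True, True, False))  # True
```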

The CJEU found

  • Regarding the level of protection required in respect of such a transfer, the Court holds that the requirements laid down for such purposes by the GDPR concerning appropriate safeguards, enforceable rights and effective legal remedies must be interpreted as meaning that data subjects whose personal data are transferred to a third country pursuant to standard data protection clauses must be afforded a level of protection essentially equivalent to that guaranteed within the EU by the GDPR, read in the light of the Charter. In those circumstances, the Court specifies that the assessment of that level of protection must take into consideration both the contractual clauses agreed between the data exporter established in the EU and the recipient of the transfer established in the third country concerned and, as regards any access by the public authorities of that third country to the data transferred, the relevant aspects of the legal system of that third country.
  • Regarding the supervisory authorities’ obligations in connection with such a transfer, the Court holds that, unless there is a valid Commission adequacy decision, those competent supervisory authorities are required to suspend or prohibit a transfer of personal data to a third country where they take the view, in the light of all the circumstances of that transfer, that the standard data protection clauses are not or cannot be complied with in that country and that the protection of the data transferred that is required by EU law cannot be ensured by other means, where the data exporter established in the EU has not itself suspended or put an end to such a transfer.

The CJEU stated “the limitations on the protection of personal data arising from the domestic law of the US on the access and use by US public authorities of such data transferred from the EU to that third country, which the Commission assessed in [its 2016 adequacy decision], are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.”

The CJEU found the process put in place by the US government to handle complaints inadequate. The 2016 Privacy Shield resulted in the creation of an Ombudsperson post to which EU citizens could submit their complaints. This position is currently held by Under Secretary of State for Economic Growth, Energy, and the Environment Keith Krach.

The CJEU stated “the Ombudsperson mechanism referred to in that decision does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.”

The decision on SCCs is more ambiguous, as it is not entirely clear under which circumstances they can be used. In its decision, the CJEU made clear that SCCs are not always sufficient under EU law:

although there are situations in which, depending on the law and practices in force in the third country concerned, the recipient of such a transfer is in a position to guarantee the necessary protection of the data solely on the basis of standard data protection clauses, there are others in which the content of those standard clauses might not constitute a sufficient means of ensuring, in practice, the effective protection of personal data transferred to the third country concerned. That is the case, in particular, where the law of that third country allows its public authorities to interfere with the rights of the data subjects to which that data relates.

Reaction from the parties was mixed, particularly on what the CJEU’s ruling means for SCCs, even though there was agreement that the Privacy Shield would soon no longer govern data transfers from the EU to the US.

The DPC issued a statement in which it asserted

Today’s judgment provides just that, firmly endorsing the substance of the concerns expressed by the DPC (and by the Irish High Court) to the effect that EU citizens do not enjoy the level of protection demanded by EU law when their data is transferred to the United States. In that regard, while the judgment most obviously captures Facebook’s transfers of data relating to Mr Schrems, it is of course the case that its scope extends far beyond that, addressing the position of EU citizens generally.

The DPC added

So, while in terms of the points of principle in play, the Court has endorsed the DPC’s position, it has also ruled that the SCCs transfer mechanism used to transfer data to countries worldwide is, in principle, valid, although it is clear that, in practice, the application of the SCCs transfer mechanism to transfers of personal data to the United States is now questionable. This is an issue that will require further and careful examination, not least because assessments will need to be made on a case by case basis.

At a press conference, EC Vice-President Věra Jourová claimed the “CJEU declared the Privacy Shield decision invalid, but also confirmed that the standard contractual clauses remain a valid tool for the transfer of personal data to processors established in third countries.” She asserted “[t]his means that the transatlantic data flows can continue, based on the broad toolbox for international transfers provided by the GDPR, for instance binding corporate rules or SCCs.” Jourová contended with regard to next steps, “[w]e are not starting from scratch…[and] [o]n the contrary, the Commission has already been working intensively to ensure that this toolbox is fit for purpose, including the modernisation of the Standard Contractual Clauses.” Jourová stated “we will be working closely with our American counterparts, based on today’s ruling.”

European Commissioner for Justice Didier Reynders stated

  • First, I welcome the fact that the Court confirmed the validity of our Decision on SCCs.
    • We have been working already for some time on modernising these clauses and ensuring that our toolbox for international data transfers is fit for purpose.
    • Standard Contractual Clauses are in fact the most used tool for international transfers of personal data and we wanted to ensure they can be used by businesses and fully in line with EU law.
    • We are now advanced with this work and we will of course take into account the requirements of judgement.
    • We will work with the European Data Protection Board, as well as the 27 EU Member States. It will be very important to start the process to have a formal approval to modernise the Standard Contractual Clauses as soon as possible. We have been in an ongoing process about such a modernisation for some time, but with an attention to the different elements of the decision of the Court today.
  • My second point: The Court has invalidated the Privacy Shield. We have to study the judgement in detail and carefully assess the consequences of this invalidation.

Reynders stated that “[i]n the meantime, transatlantic data flows between companies can continue using other mechanisms for international transfers of personal data available under the GDPR.”

In a statement, US Secretary of Commerce Wilbur Ross said:

While the Department of Commerce is deeply disappointed that the court appears to have invalidated the European Commission’s adequacy decision underlying the EU-U.S. Privacy Shield, we are still studying the decision to fully understand its practical impacts.

Ross continued

We have been and will remain in close contact with the European Commission and European Data Protection Board on this matter and hope to be able to limit the negative consequences to the $7.1 trillion transatlantic economic relationship that is so vital to our respective citizens, companies, and governments. Data flows are essential not just to tech companies—but to businesses of all sizes in every sector. As our economies continue their post-COVID-19 recovery, it is critical that companies—including the 5,300+ current Privacy Shield participants—be able to transfer data without interruption, consistent with the strong protections offered by Privacy Shield.

The Department of Commerce stated it “will continue to administer the Privacy Shield program, including processing submissions for self-certification and re-certification to the Privacy Shield Frameworks and maintaining the Privacy Shield List.” The agency added “[t]oday’s decision does not relieve participating organizations of their Privacy Shield obligations.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by harakir from Pixabay

Further Reading and Other Developments (4 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The Senate invoked cloture on the nomination of acting Office of Management and Budget (OMB) Director Russell Vought to be confirmed in that role and will vote on the nomination on 20 July. OMB has been without a Senate-confirmed Director since Mick Mulvaney formally resigned at the end of March; Mulvaney had been named acting White House Chief of Staff in January 2019, and Vought has served as the acting OMB head since that time.
  • The United States Federal Chief Information Officer (CIO) Suzette Kent announced she is stepping down in July, and Deputy Federal CIO Maria Roat is expected to be named acting Federal CIO. Given the Trump Administration’s approach to submitting nominations to the Senate for confirmation and the Senate’s truncated work schedule due to the election, it is likely no nomination will be made this year. Kent technically holds the position of Administrator of the Office of Electronic Government within the Office of Management and Budget (OMB), and her portfolio includes a range of technology-related matters, including cybersecurity, information technology (IT) policy and procurement, workforce, data security, and data management.
  • The General Services Administration (GSA) announced the next step in “establish[ing] a program to procure commercial products through commercial e-commerce portals for purposes of enhancing competition, expediting procurement, enabling market research, and ensuring reasonable pricing of commercial products.” GSA “awarded contracts to three e-marketplace platform providers,” Amazon Business, Fisher Scientific, and Overstock.com, Inc., which “allows GSA to test the use of commercial e-commerce portals for purchases below the micro-purchase threshold of $10,000 using a proof-of-concept (for up to three years).” Section 846 of the 2018 National Defense Authorization Act (P.L. 115-91) directed GSA to implement such a program, and the agency claimed in a blog posting:
    • These contracts and platforms will be available to federal agencies as part of a governmentwide effort to modernize the buying experience for agencies and help them gain insights into open-market online spend occurring outside of existing contracts.  It is estimated that open market purchases on government purchase cards represent an addressable market of $6 billion annually.
    • The goal of the proof of concept is to provide a modern buying solution for federal customers and increase transparency on agency spending that’s already taking place with better data through this solution. Further, this solution leverages the government’s buying power and increases supply chain security awareness with a governmentwide approach.
  • In response to the ongoing and growing advertising boycott, Facebook announced in a press release some changes to the platform’s policies regarding voter suppression or hateful content. CEO Mark Zuckerberg stated “Three weeks ago, I committed to reviewing our policies ahead of the 2020 elections…[and] [t]hat work is ongoing, but today I want to share some new policies to connect people with authoritative information about voting, crack down on voter suppression, and fight hate speech:
    • 1. Providing Authoritative Information on Voting During the Pandemic
      • Last week, we announced the largest voting information campaign in American history, with the goal of helping 4 million people register to vote. As part of this, we’re creating a Voting Information Center to share authoritative information on how and when you can vote, including voter registration, voting by mail and early voting. During a pandemic when people may be afraid of going to polls, sharing authoritative information on voting by mail will be especially important. We’ll be showing the Voting Information Center at the top of the Facebook and Instagram apps over the coming months.
    • 2. Additional Steps to Fight Voter Suppression
      • Since the most dangerous voter suppression campaigns can be local and run in the days immediately before an election, we’re going to use our Elections Operations Center to quickly respond and remove false claims about polling conditions in the 72 hours leading into election day. Learning from our experience fighting Covid misinformation, we will partner with and rely on state election authorities to help determine the accuracy of information and what is potentially dangerous. We know this will be challenging in practice as facts on the ground may be uncertain and we don’t want to remove accurate information about challenges people are experiencing, but we’re building our operation to be able to respond quickly.
      • We will also ban posts that make false claims saying ICE agents are checking for immigration papers at polling places, which is a tactic used to discourage voting. We’ll also remove any threats of coordinated interference, like someone saying “My friends and I will be doing our own monitoring of the polls to make sure only the right people vote”, which can be used to intimidate voters. We will continue to review our voter suppression policies on an ongoing basis as part of our work on voter engagement and racial justice.
    • 3. Creating a Higher Standard for Hateful Content in Ads
      • This week’s study from the EU showed that Facebook acts faster and removes a greater percent of hate speech on our services than other major internet platforms, including YouTube and Twitter. We’ve invested heavily in both AI systems and human review teams so that now we identify almost 90% of the hate speech we remove before anyone even reports it to us. We’ve also set the standard in our industry by publishing regular transparency reports so people can hold us accountable for progress. We will continue investing in this work and will commit whatever resources are necessary to improve our enforcement.
      • We believe there is a public interest in allowing a wider range of free expression in people’s posts than in paid ads. We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we’re prohibiting a wider category of hateful content in ads. Specifically, we’re expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We’re also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.
    • 4. Labeling Newsworthy Content
      • A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.
      • We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.
      • To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I’m announcing here today.
  • On 30 June, Facebook banned the boogaloo movement from its platform. The company “designat[ed] a violent US-based anti-government network under our Dangerous Individuals and Organizations policy and disrupt[ed] it on our services…[and] [a]s a result, this violent network is banned from having a presence on our platform and we will remove content praising, supporting or representing it.”
  • The United States Department of Commerce suspended “regulations affording preferential treatment to Hong Kong… including the availability of export license exceptions.” The Trump Administration took this latest action in its trade war with the People’s Republic of China (PRC) because of “the Chinese Communist Party’s imposition of new security measures on Hong Kong” and “the risk that sensitive U.S. technology will be diverted to the People’s Liberation Army or Ministry of State Security has increased, all while undermining the territory’s autonomy.” The United States Department of State added “the United States will today end exports of U.S.-origin defense equipment and will take steps toward imposing the same restrictions on U.S. defense and dual-use technologies to Hong Kong as it does for China.”
  • The Democratic National Committee (DNC) updated its “social media comparative analysis to reflect changes companies have made in recent months to their counter disinformation and election integrity policies.” The DNC is working with Facebook/Instagram, Twitter, Google/YouTube, and now Snapchat to “to combat platform manipulation and train our campaigns on how best to secure their accounts and protect their brands against disinformation.”
  • The Office of the Privacy Commissioner of Canada (OPC) and three provincial privacy agencies announced an investigation “into a Tim Hortons mobile ordering application after media reports raised concerns about how the app may be collecting and using data about people’s movements as they go about their daily activities.” A journalist made a request to Tim Hortons under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and learned the company’s app had logged his longitude and latitude coordinates over 2,700 times in five months, sometimes when he was not using the app, even though the company had claimed it only tracks users when the app is in use. Moreover, Tim Hortons combines data from sister companies also owned by Restaurant Brands International, like Burger King and Popeyes.
  • The United Kingdom’s Information Commissioner’s Office (ICO) released an “investigation report into the use of mobile phone extraction (MPE) by police forces when conducting criminal investigations in England and Wales” which “found that police data extraction practices vary across the country, with excessive amounts of personal data often being extracted and stored without an appropriate basis in existing data protection law.” The ICO made a range of recommendations, many of which will require a legislative revamp of the laws that currently govern these practices.
  • Ireland’s Data Protection Commission released its “2018-2020 Regulatory Activity Under GDPR” and listed the following enforcement actions under the General Data Protection Regulation:
    • An Garda Síochána – reprimand and corrective powers applied in accordance with the Data Protection Act, 2018.
    • Tusla, the Child and Family Agency – reprimand and fine applied in accordance with the Data Protection Act, 2018.
    • Tusla, the Child and Family Agency – reprimand and fine applied in accordance with the Data Protection Act, 2018.
    • Twitter – Inquiry completed and draft decision forwarded to EU concerned data protection authorities in accordance with Article 60 of the GDPR.
    • DEASP – Enforcement notice issued regarding the use of the Public Services Card (currently under appeal).
    • 59 Section 10 decisions issued.
    • 15,000 breach notifications assessed and concluded.
    • 9 litigation cases concluded in the Irish Courts.
    • Hearing in CJEU Standard Contractual Clauses case brought by DPC to Irish High Court.
    • 80% of cases received under the GDPR have been concluded.
  • The National Telecommunications and Information Administration (NTIA) issued its “American Broadband Initiative Progress Report,” an update on a Trump Administration inter-agency effort, begun in 2019, to implement “a cohesive government-wide strategy to reform broadband deployment.” NTIA highlighted the following accomplishments:
    • Through the ReConnect program, as of March 2020, the U.S. Department of Agriculture (USDA) awarded over $744 million in funds to support more than 80 broadband projects benefiting more than 430,000 rural residents in 34 states. The Federal Communications Commission (FCC) and USDA also established processes to coordinate awards for rural broadband deployment to ensure that USDA-funded grants do not overlap with the FCC’s $20 billion Rural Digital Opportunity Fund (RDOF) or the $9 billion 5G Fund for Rural America.
    • The Department of the Interior (DOI) launched a Joint Overview-Established Locations (JOEL) mapping tool to make site locations visible to service providers looking to locate equipment on Federal property, and added new data layers from the General Services Administration, the U.S. Forest Service, and U.S. Postal Service. Since its release, the map has been viewed 4,294 times, averaging 7 views per day.
    • In June 2019, the General Services Administration (GSA) published the FY 2018 Federal Real Property Profile (FRPP) public data set, updated with a set of filters allowing users to identify Federal property that could be candidates for communications infrastructure installation. This publicly available data now includes the height of buildings and facilities and the elevation above mean sea level, helping the communications industry to determine a structure’s suitability for siting communications facilities. In June 2020, GSA will update the FRPP public data set with more current data from FY 2019.
    • In March 2019, the Department of Commerce’s NTIA updated its website with information about Federal Agencies’ permitting processes and funding information to provide easier, “one-stop” access to the information. NTIA continues to update this information with support from Agencies.
    • In September 2019, NTIA completed the first phase of its National Broadband Availability Map (NBAM), a geographic information system platform which allows for the visualization and analysis of federal, state, and commercially available data sets. As of June 2020, the NBAM program includes 18 States who are partnering on this critical broadband data platform.
    • In February 2020, GSA and USDA’s Forest Service (FS) finalized a revised Standard Form (SF-299), making this Common Application Form suitable for telecommunications purposes.

Further Reading

  • “Google will start paying some publishers for news articles” – The Verge. In part because of pressure from regulators in Australia and France, Google will begin paying some news outlets for articles. This could be the start of a larger trend of online platforms compensating media, which have long argued this should be the case. However, similar systems in Germany and Spain earlier this decade failed to bolster the media in those countries financially, and Google responded to the Spanish statute by ceasing to operate its News platform in that country.
  • “Trump’s strike at Twitter risks collateral damage inside the executive branch” – Politico. One aspect of the Trump Administration executive order on online platforms is that it subjects federal agencies’ online advertising and marketing to additional Office of Management and Budget and Department of Justice review. If fully implemented, this process could derail a number of agency initiatives ranging from military recruitment to fighting drug addiction.
  • “Column: With its Sprint merger in the bag, T-Mobile is already backing away from its promises” – The Los Angeles Times. Critics of the T-Mobile-Sprint merger have pounced on a recent filing with the California Public Utilities Commission in which the company asks for two additional years to build out its 5G network, despite making this a signal promise in selling California Attorney General Xavier Becerra on the deal. Likewise, the company is trying to renegotiate its promise to create 1,000 new jobs in the state.
  • “Facebook policy changes fail to quell advertiser revolt as Coca-Cola pulls ads” – The Guardian. Despite Facebook CEO Mark Zuckerberg’s announcement of policy changes (see Other Developments above), advertisers continue to join a widening boycott that some companies are applying across all major social media platforms. Unilever, Coca-Cola, Hershey’s, Honda, and others have joined the movement. The majority of Facebook’s income comes from advertising, so a sustained boycott could do more than push down the company’s share value. And the changes announced at the end of last week do not seem to have impressed the boycott’s organizers. It would be interesting if pressure placed on companies advertising on Facebook effects more change than pressure from the right and left in the United States, European Union, and elsewhere.
  • “Trump administration tells Facebook, Twitter to act against calls to topple statues, commit violent acts” – The Washington Post. The Department of Homeland Security sent letters late last week to the largest technology companies, asserting they may have played a role in “burglary, arson, aggravated assault, rioting, looting, and defacing public property” by allowing people to post on or use their platforms. The thrust of the argument seems to be that Twitter, Facebook, Apple, Google, and other companies should have done more to prevent people from posting and sharing material that allegedly resulted in violence. Acting Secretary of Homeland Security Chad Wolf argued, “In the wake of George Floyd’s death, America faced an unprecedented threat from violent extremists seeking to co-opt the tragedy of his death for illicit purposes.” These letters did not mention President Donald Trump’s tweets that seem to encourage authorities to use violence against protestors. Moreover, they seem of a piece with the recent executive order in that there is scant legal basis for an action designed to cow the social media platforms.
  • “Twitch, Reddit crack down on Trump-linked content as industry faces reckoning” – Politico. Two platforms acted against President Donald Trump and his supporters for violating the platforms’ terms of service and rules. The irony is that the recent executive order on social platforms seeks to hold them accountable for not operating according to their terms of service.
  • “Inside Facebook’s fight against European regulation” – Politico. Drawing on previously unavailable European Commission documents on meetings with and positions of Facebook, this article traces the slow evolution of the company’s no-regulation stance in the European Union (EU) into a public position ostensibly amenable to regulation. It is also a tale of lobbying tactics that work in Washington, DC, largely failing to gain traction in Brussels.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by congerdesign from Pixabay