Further Reading, Other Development, and Coming Events (7 December)

Further Reading

  • “Facebook steps up campaign to ban false information about coronavirus vaccines” By Elizabeth Dwoskin — The Washington Post. In its latest step to find and remove lies, misinformation, and disinformation, the social media giant is now committing to removing and blocking untrue material about COVID-19 vaccines, especially from the anti-vaccine community. Will the next step be to take on anti-vaccination proponents generally?
  • “Comcast’s 1.2 TB data cap seems like a ton of data—until you factor in remote work” By Rob Pegoraro — Fast Company. Despite many adults working and children learning from home, Comcast is reimposing a 1.2 terabyte limit on home data. It sounds like quite a lot until you factor in video meetings, streaming, etc. So far, other providers have not set a cap.
  • “Google’s star AI ethics researcher, one of a few Black women in the field, says she was fired for a critical email” By Drew Harwell and Nitasha Tiku — The Washington Post. Timnit Gebru, a top-flight artificial intelligence (AI) computer scientist, was ousted after questioning Google’s review of a paper, likely critical of the company’s AI projects, that she wanted to present at an AI conference. Google claims she resigned, but Gebru says she was fired. She has long been an advocate for women and minorities in tech and AI, and her ouster will likely only increase scrutiny of and questions about Google’s commitment to diversity and to an ethical approach to the development and deployment of AI. It will also probably deepen employee disenchantment with the company, following protests over Google’s involvement with the United States Department of Defense’s Project Maven and its hiring of former United States Department of Homeland Security chief of staff Miles Taylor, who was involved with the policies that resulted in caging children and separating families at the southern border of the United States.
  • “Humans Can Help Clean Up Facebook and Twitter” By Greg Bensinger — The New York Times. This opinion piece argues that if social media platforms redeployed their human monitors to the accounts that violate terms of service most frequently (e.g., President Donald Trump) and more aggressively labeled and removed untrue or inflammatory content, they would have a greater impact on lies, misinformation, and disinformation.
  • “Showdown looms over digital services tax” By Ashley Gold — Axios. Because the Organization for Economic Cooperation and Development (OECD) has not reached a deal on digital services taxes, a number of United States (U.S.) allies could move forward with taxes on U.S. multinationals like Amazon, Google, and Apple. The Trump Administration has taken an adversarial position, threatening to retaliate against countries like France, which has enacted a tax it has agreed not to collect during the OECD negotiations. The U.S. also withdrew from the talks. The Biden Administration will probably be more willing to work in a multilateral fashion and may strike a deal on an issue that is not going away, as the United Kingdom, Italy, and Canada also have plans for digital taxes.
  • “Trump’s threat to veto defense bill over social-media protections is heading to a showdown with Congress” By Karoun Demirjian and Tony Romm — The Washington Post. I suppose I should mention the President’s demand, made at the eleventh hour and fifty-ninth minute of negotiations on a final version of the bill, that the FY 2021 National Defense Authorization Act (NDAA) contain a repeal of 47 U.S.C. 230 (Section 230 of the Communications Act). Via Twitter, Donald Trump threatened to veto the bill, which has been passed annually for decades. Republicans were not having it, however, even those who agreed with Trump’s desire to remove liability protection for technology companies. And yet, if Trump continues to insist on a repeal, Republicans may find themselves in a bind, and the bill could conceivably get pulled until President-elect Joe Biden is sworn in. On the other hand, Trump has not renewed his veto threats over renaming military bases currently bearing the names of Confederate figures, even though the final version of the bill contains language instituting a process to do just that.

Other Developments

  • The Senate Judiciary Committee held over its most recent bill to narrow 47 U.S.C. 230 (Section 230 of the Communications Act), which provides liability protection for technology companies for third-party material posted on their platforms and for any decisions to edit, alter, or remove such content. The committee opted to hold the “Online Content Policy Modernization Act” (S.4632), which may mean the bill’s chances of making it to the Senate floor are low. What’s more, even if the Senate passes Section 230 legislation, it is not clear there will be sufficient agreement with Democrats in the House to get a final bill to the President before the end of this Congress. On 1 October, the committee had also decided to hold over the bill to try to reconcile the fifteen amendments submitted for consideration. The Committee could soon meet again to formally mark up and report out this legislation.
    • At the earlier hearing, Chair Lindsey Graham (R-SC) submitted an amendment revising the bill’s reforms to Section 230 that incorporates some of the below amendments but includes new language. For example, the bill includes a definition of “good faith,” a term not currently defined in Section 230. The term would be construed as a platform taking down or restricting content only according to its publicly available terms of service, not as a pretext, and applied equally to all similarly situated content. Moreover, good faith would require alerting the user and giving him or her an opportunity to respond, subject to certain exceptions. The amendment also makes clear that certain existing means of suing remain available to users (e.g., suing for breach of contract).
    • Senator Mike Lee (R-UT) offered a host of amendments:
      • EHF20913 would remove “user[s]” from the narrowed liability shield online platforms would receive under the bill, meaning users would remain shielded from legal liability for content posted by another user.
      • EHF20914 would revise the language regarding the type of content platforms could take down with legal protection to make clear it would not just be “unlawful” content but rather content “in violation of a duly enacted law of the United States,” possibly meaning federal laws and not state laws. Or, more likely, the intent would be to foreclose the possibility that a platform would say it is acting in concert with a foreign law and still assert immunity.
      • EHF20920 would add language making clear that taking down material that violates terms of service or use according to an objectively reasonable belief would be shielded from liability.
      • OLL20928 would expand legal protection to platforms for removing or restricting spam.
      • OLL20929 would bar the Federal Communications Commission (FCC) from a rulemaking on Section 230.
      • OLL20930 adds language making clear if part of the revised Section 230 is found unconstitutional, the rest of the law would still be applicable.
      • OLL20938 revises the definition of an “information content provider,” the term of art in Section 230 that identifies a platform, to expand when platforms may be responsible for the creation or development of information and consequently liable for a lawsuit.
    • Senator Josh Hawley (R-MO) offered an amendment that would create a new right of action for people to sue large platforms for taking down their content if not done in “good faith.” The amendment limits this right to “edge providers,” platforms with more than 30 million users in the U.S., 300 million users worldwide, and revenues of more than $1.5 billion. This would likely exclude all platforms except Twitter, Facebook, Instagram, TikTok, Snapchat, and a select few others.
    • Senator John Kennedy (R-LA) offered an amendment that removes all Section 230 legal immunity from platforms that collect personal data and then use an “automated function” to deliver targeted or tailored content to a user unless the user “knowingly and intentionally elect[s]” to receive such content.
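The “edge provider” definition in the Hawley amendment turns on three numeric thresholds. As a rough sketch of how the cutoffs would sort platforms (the function name and the assumption that all three tests must be met together are mine, not the amendment’s text):

```python
# Hypothetical sketch of the "edge provider" thresholds in the
# Hawley amendment as summarized above. The function name and the
# conjunctive reading of the three tests are my assumptions.

def is_edge_provider(us_users: int, worldwide_users: int, revenue_usd: float) -> bool:
    """Return True if a platform would meet all three thresholds."""
    return (us_users > 30_000_000
            and worldwide_users > 300_000_000
            and revenue_usd > 1_500_000_000)

# A platform over every threshold would be covered; one under any
# single threshold would not.
print(is_edge_provider(40_000_000, 500_000_000, 5_000_000_000))  # True
print(is_edge_provider(10_000_000, 500_000_000, 5_000_000_000))  # False
```

The conjunctive reading is what would confine the new right of action to a handful of the largest platforms, as the summary above suggests.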
  • The Massachusetts Institute of Technology’s (MIT) Work of the Future Task Force issued its final report and drew the following conclusions:
    • Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.
    • Momentous impacts of technological change are unfolding gradually.
    • Rising labor productivity has not translated into broad increases in incomes because labor market institutions and policies have fallen into disrepair.
    • Improving the quality of jobs requires innovation in labor market institutions.
    • Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.
    • Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.
    • The Task Force stated:
      • In the two-and-a-half years since the Task Force set to work, autonomous vehicles, robotics, and AI have advanced remarkably. But the world has not been turned on its head by automation, nor has the labor market. Despite massive private investment, technology deadlines have been pushed back, part of a normal evolution as breathless promises turn into pilot trials, business plans, and early deployments — the diligent, if prosaic, work of making real technologies work in real settings to meet the demands of hard-nosed customers and managers.
      • Yet, if our research did not confirm the dystopian vision of robots ushering workers off of factory floors or artificial intelligence rendering superfluous human expertise and judgment, it did uncover something equally pernicious: Amidst a technological ecosystem delivering rising productivity, and an economy generating plenty of jobs (at least until the COVID-19 crisis), we found a labor market in which the fruits are so unequally distributed, so skewed towards the top, that the majority of workers have tasted only a tiny morsel of a vast harvest.
      • As this report documents, the labor market impacts of technologies like AI and robotics are taking years to unfold. But we have no time to spare in preparing for them. If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market. This report, and the MIT Work of the Future Task Force, suggest a better alternative: building a future for work that harvests the dividends of rapidly advancing automation and ever-more powerful computers to deliver opportunity and economic security for workers. To channel the rising productivity stemming from technological innovations into broadly shared gains, we must foster institutional innovations that complement technological change.
  • The European Data Protection Supervisor (EDPS) Wojciech Wiewiorówski published his “preliminary opinion on the European Commission’s (EC) Communication on “A European strategy for data” and the creation of a common space in the area of health, namely the European Health Data Space (EHDS).” The EDPS lauded the goal of the EHDS, “the prevention, detection and cure of diseases, as well as for evidence-based decisions in order to enhance effectiveness, accessibility and sustainability of the healthcare systems.” However, Wiewiorówski articulated his concern that the EC needs to think through the applicability of the General Data Protection Regulation (GDPR), among other European Union (EU) laws, before it can legally move forward. The EDPS stated:
    • The EDPS calls for the establishment of a thought-through legal basis for the processing operations under the EHDS in line with Article 6(1) GDPR and also recalls that such processing must comply with Article 9 GDPR for the processing of special categories of data.
    • Moreover, the EDPS highlights that due to the sensitivity of the data to be processed within the EHDS, the boundaries of what constitutes a lawful processing and a compatible further processing of the data must be crystal-clear for all the stakeholders involved. Therefore, the transparency and the public availability of the information relating to the processing on the EHDS will be key to enhance public trust in the EHDS.
    • The EDPS also calls on the Commission to clarify the roles and responsibilities of the parties involved and to clearly identify the precise categories of data to be made available to the EHDS. Additionally, he calls on the Member States to establish mechanisms to assess the validity and quality of the sources of the data.
    • The EDPS underlines the importance of vesting the EHDS with a comprehensive security infrastructure, including both organisational and state-of-the-art technical security measures to protect the data fed into the EHDS. In this context, he recalls that Data Protection Impact Assessments may be a very useful tool to determine the risks of the processing operations and the mitigation measures that should be adopted.
    • The EDPS recommends paying special attention to the ethical use of data within the EHDS framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation.
    • The EDPS is convinced that the success of the EHDS will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights. The governance mechanism should regulate, at least, the entities that will be allowed to make data available to the EHDS, the EHDS users, the Member States’ national contact points/ permit authorities, and the role of DPAs within this context.
    • The EDPS is interested in policy initiatives to achieve ‘digital sovereignty’ and has a preference for data being processed by entities sharing European values, including privacy and data protection. Moreover, the EDPS calls on the Commission to ensure that the stakeholders taking part in the EHDS, and in particular, the controllers, do not transfer personal data unless data subjects whose personal data are transferred to a third country are afforded a level of protection essentially equivalent to that guaranteed within the European Union.
    • The EDPS calls on Member States to guarantee the effective implementation of the right to data portability specifically in the EHDS, together with the development of the necessary technical requirements. In this regard, he considers that a gap analysis might be required regarding the need to integrate the GDPR safeguards with other regulatory safeguards, provided e.g. by competition law or ethical guidelines.
  • The Office of Management and Budget (OMB) extended a guidance memorandum directing agencies to consolidate data centers after Congress pushed back the sunset date for the program. OMB extended OMB Memorandum M-19-19, Update to Data Center Optimization Initiative (DCOI), through 30 September 2022; the memorandum applies “to the 24 Federal agencies covered by the Chief Financial Officers (CFO) Act of 1990, which includes the Department of Defense.” The DCOI was codified in the “Federal Information Technology Acquisition Reform” (FITARA) (P.L. 113-291) and extended in 2018 until 1 October 2020. This sunset was pushed back another two years in the FY 2020 National Defense Authorization Act (NDAA) (P.L. 116-92).
    • In March 2020, the Government Accountability Office (GAO) issued another of its periodic assessments of the DCOI, started in 2012 by the Obama Administration to shrink the federal government’s footprint of data centers, increase efficiency and security, save money, and reduce energy usage.
    • The GAO found that 23 of the 24 agencies participating in the DCOI met or planned to meet their FY 2019 goals to close 286 of the 2,727 data centers considered part of the DCOI. This latter figure deserves some discussion, for the Trump Administration changed the definition of what is a data center to exclude smaller ones (so-called non-tiered data centers). GAO asserted that “recent OMB DCOI policy changes will reduce the number of data centers covered by the policy and both OMB and agencies may lose important visibility over the security risks posed by these facilities.” Nonetheless, these agencies are projecting savings of $241.5 million when all the 286 data centers planned for closure in FY 2019 actually close. It bears note that the GAO admitted in a footnote it “did not independently validate agencies’ reported cost savings figures,” so these numbers may not be reliable.
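As a quick back-of-the-envelope check on the GAO figures above (an illustrative calculation only, not part of the GAO’s methodology, and subject to the same caveat that the savings figures were not independently validated):

```python
# Back-of-the-envelope check of the DCOI figures reported above.
# The closure count, total, and savings come from the GAO summary;
# the share and per-center average are my own derived quantities.

planned_closures_fy2019 = 286
tracked_data_centers = 2_727
projected_savings_usd = 241_500_000

share_slated_to_close = planned_closures_fy2019 / tracked_data_centers
avg_savings_per_closure = projected_savings_usd / planned_closures_fy2019

print(f"{share_slated_to_close:.1%} of tracked centers slated to close")
print(f"${avg_savings_per_closure:,.0f} average projected savings per closure")
```

The FY 2019 goals thus covered roughly a tenth of the tracked inventory, at well under $1 million in projected savings per closed center.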
    • In terms of how to improve the DCOI, the GAO stated that “[i]n addition to reiterating our prior open recommendations to the agencies in our review regarding their need to meet DCOI’s closure and savings goals and optimization metrics, we are making a total of eight new recommendations—four to OMB and four to three of the 24 agencies. Specifically:
      • The Director of the Office of Management and Budget should (1) require that agencies explicitly document annual data center closure goals in their DCOI strategic plans and (2) track those goals on the IT Dashboard. (Recommendation 1)
      • The Director of the Office of Management and Budget should require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI. (Recommendation 2)
      • The Director of the Office of Management and Budget should document OMB’s decisions on whether to approve individual data centers when designated by agencies as either a mission critical facility or as a facility not subject to DCOI. (Recommendation 3)
      • The Director of the Office of Management and Budget should take action to address the key performance measurement characteristics missing from the DCOI optimization metrics, as identified in this report. (Recommendation 4)
  • Australia’s Inspector-General of Intelligence and Security (IGIS) released its first report on how well the nation’s security services observed the law with respect to COVID app data. The IGIS “is satisfied that the relevant agencies have policies and procedures in place and are taking reasonable steps to avoid intentional collection of COVID app data.” The IGIS revealed that “[i]ncidental collection in the course of the lawful collection of other data has occurred (and is permitted by the Privacy Act); however, there is no evidence that any agency within IGIS jurisdiction has decrypted, accessed or used any COVID app data.” The IGIS is also “satisfied that the intelligence agencies within IGIS jurisdiction which have the capability to incidentally collect at least some types of COVID app data:
    • Are aware of their responsibilities under Part VIIIA of the Privacy Act and are taking active steps to minimise the risk that they may collect COVID app data.
    • Have appropriate policies and procedures in place to respond to any incidental collection of COVID app data that they become aware of.
    • Are taking steps to ensure any COVID app data is not accessed, used or disclosed.
    • Are taking steps to ensure any COVID app data is deleted as soon as practicable.
    • Have not decrypted any COVID app data.
    • Are applying the usual security measures in place in intelligence agencies such that a ‘spill’ of any data, including COVID app data, is unlikely.
  • New Zealand’s Government Communications Security Bureau’s National Cyber Security Centre (NCSC) has released its annual Cyber Threat Report that found that “nationally significant organisations continue to be frequently targeted by malicious cyber actors of all types…[and] state-sponsored and non-state actors targeted public and private sector organisations to steal information, generate revenue, or disrupt networks and services.” The NCSC added:
    • Malicious cyber actors have shown their willingness to target New Zealand organisations in all sectors using a range of increasingly advanced tools and techniques. Newly disclosed vulnerabilities in products and services, alongside the adoption of new services and working arrangements, are rapidly exploited by state-sponsored actors and cyber criminals alike. A common theme this year, which emerged prior to the COVID-19 pandemic, was the exploitation of known vulnerabilities in internet-facing applications, including corporate security products, remote desktop services and virtual private network applications.
  • The former Director of the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) wrote an opinion piece disputing President Donald Trump’s claims that the 2020 Presidential Election was fraudulent. Christopher Krebs asserted:
    • While I no longer regularly speak to election officials, my understanding is that in the 2020 results no significant discrepancies attributed to manipulation have been discovered in the post-election canvassing, audit and recount processes.
    • This point cannot be emphasized enough: The secretaries of state in Georgia, Michigan, Arizona, Nevada and Pennsylvania, as well as officials in Wisconsin, all worked overtime to ensure there was a paper trail that could be audited or recounted by hand, independent of any allegedly hacked software or hardware.
    • That’s why Americans’ confidence in the security of the 2020 election is entirely justified. Paper ballots and post-election checks ensured the accuracy of the count. Consider Georgia: The state conducted a full hand recount of the presidential election, a first of its kind, and the outcome of the manual count was consistent with the computer-based count. Clearly, the Georgia count was not manipulated, resoundingly debunking claims by the president and his allies about the involvement of CIA supercomputers, malicious software programs or corporate rigging aided by long-gone foreign dictators.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Daniel Schludi on Unsplash

Schrems II Guidance

The agency that oversees data protection at EU institutions has laid out its view on how they should comply with the GDPR now that the EU-US Privacy Shield has been struck down.

The European Data Protection Supervisor (EDPS) has published a strategy detailing how European Union (EU) agencies and bodies should comply with the Court of Justice of the European Union’s (CJEU) ruling that struck down the EU-United States (U.S.) Privacy Shield (aka Schrems II) and threw into question the compliance of Standard Contractual Clauses (SCC) with EU law and the General Data Protection Regulation (GDPR). The EDPS has already started working with EU institutions, bodies, offices, and agencies (EUIs) on the process of determining whether their transfers of the personal data of people in the EU to the U.S. meet the CJEU’s judgment.

The EDPS makes clear most of the transfers by EUIs to the U.S. arise from using U.S. information and communications technology (ICT) products and services, meaning U.S. multinationals like Microsoft, Google, and others. The EDPS has proposed a strategy that would first identify risks and then move to address them. It bears stressing that this strategy applies only to EUIs and not private sector controllers, but the European Data Protection Board (EDPB) and EU data protection authorities (DPAs) will likely take notice of the EDPS’ approach to complying with Schrems II. However, the EDPS acknowledges that it is obliged to follow the EDPB’s lead and vows to change its strategy upon issuance of EDPB guidance on Schrems II and SCC. In the meantime, the EDPS explained that EUIs will need to report back on how they are implementing the steps in the strategy, particularly on ongoing transfers to countries, like the U.S., with inadequate data protection laws, on transfers that have been suspended, and on any transfers being conducted per derogations in the GDPR. On the basis of this feedback, the EDPS will “establish long-term compliance” in 2021.

It seems a bit backwards for the EDPS to task the EUIs with determining which transfers under SCC may proceed under the GDPR when it might be a more efficient process for the EDPS to take on this job directly and rule on the ICT services and providers, permitting all EUIs to understand which comply with EU law and which do not. However, the EDPS is exploring the possibility of determining the sufficiency of data protection in other nations, most likely, first and foremost the U.S., and then working with EU stakeholders to coordinate compliance with the CJEU’s ruling and the GDPR.

The EDPS claimed the CJEU “clarified the roles and responsibilities of controllers, recipients of data outside of the European Economic Area (EEA) (data importers) and supervisory authorities…[and] ruled the following:

  • The Court invalidated the Privacy Shield adequacy Decision and confirmed that the SCCs were valid providing that they include effective mechanisms to ensure compliance in practice with the “essentially equivalent” level of protection guaranteed within the EU by the General Data Protection Regulation (GDPR). Transfers of personal data pursuant to the SCCs are suspended or prohibited in the event of a breach of such clauses, or in case it is impossible to honour them.
  • The SCCs for transfers may then require, depending on the prevailing position of a particular third country, the adoption of supplementary measures by the controller in order to ensure compliance with the level of protection guaranteed within the EU.
  • In order to continue these data transfers, the Court stresses that before transferring personal data to a third country, it is the data exporters’ and data importers’ responsibility to assess whether the legislation of the third country of destination enables the data importer to comply with the guarantees provided through the transfer tools in place. If this is not the case, it is also the exporter and the importer’s duty to assess whether they can implement supplementary measures to ensure an essentially equivalent level of protection as provided by EU law. Should data exporters, after taking into account the circumstances of the transfer and possible supplementary measures, conclude that appropriate safeguards cannot be ensured, they are required to suspend or terminate the transfer of personal data. In case the exporter intends nevertheless to continue the transfer of personal data, they must notify their competent SA.
  • The competent supervisory authority is required to suspend or prohibit a transfer of personal data to a third country pursuant to the SCCs if, when considering the circumstances of that transfer, those clauses are not or cannot be complied with in the third country of destination and the protection of the data transferred under EU law cannot be ensured by other means.
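The assessment sequence the CJEU lays out above can be caricatured as a decision flow. This is a hypothetical simplification of a legal analysis (the function, its boolean inputs, and the collapsed outcomes are my own), not a compliance tool:

```python
# Illustrative decision flow for the exporter's Schrems II assessment
# described above. Real assessments are case-by-case legal judgments;
# these booleans stand in for that analysis.

def assess_transfer(destination_law_equivalent: bool,
                    supplementary_measures_suffice: bool) -> str:
    """Return the exporter's required course of action."""
    if destination_law_equivalent:
        # The transfer tool (e.g., SCCs) can be honored as-is.
        return "proceed under the transfer tool"
    if supplementary_measures_suffice:
        # SCCs plus supplementary measures restore an essentially
        # equivalent level of protection.
        return "proceed with supplementary measures"
    # No combination of safeguards works: the transfer must stop, and
    # an exporter intending to continue must notify its competent SA.
    return "suspend or terminate the transfer"

print(assess_transfer(False, True))
```

The sketch makes plain why the ruling shifts the burden onto exporters and importers: the supervisory authority only intervenes at the end of this chain, when the safeguards have already failed.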

EDPS explained:

The EDPS’ report on the 2017 survey entitled, Measuring compliance with data protection rules in EU institutions, provides evidence that there has been a significant rise in the number of transfers related to the core business of EUIs in recent years. This number is even higher now, due to the increased use of ICT services and social media. The EDPS’ own-initiative investigation into the use of Microsoft products and services by EUIs and subsequent recommendations in that regard confirms the importance to ensure a level of protection that is essentially equivalent as the one guaranteed within the EU, as provided by relevant data protection laws, to be interpreted in accordance with the EU Charter. In this context, the EDPS has already flagged a number of linked issues concerning sub-processors, data location, international transfers and the risk of unlawful disclosure of data – issues that the EUIs were unable to control and ensure proper safeguards to protect data that left the EU/EEA. The issues we raised in our investigation report are consistent with the concerns expressed in the Court’s Judgment, which we are assessing in relation to any processor agreed to by EUIs.

Regarding data flows to the U.S. quite possibly in violation of the GDPR and Schrems II, the EDPS stated:

  • Moreover, a majority of data flows to processors most probably happen because EUIs use service providers that are either based in the U.S. or that use sub-processors based in the U.S., in particular for ICT services, which fall under the scope of U.S. surveillance laws. Such companies have primarily relied on the Privacy Shield adequacy Decision to transfer personal data to the U.S. and the use of SCCs as a secondary measure.
  • Therefore, the present Strategy emphasizes the priority to address transfers of data by EUIs or on their behalf in the context of controller-to-processor and/or processor-to-sub-processor contracts, in particular towards the United States.

The EDPS is calling for “a twofold approach as the most appropriate:

(1) Identify urgent compliance and/or enforcement actions through a risk based approach for transfers towards the U.S. presenting high risks for data subjects and in parallel

(2) provide guidance and pursue mid-term case-by-case EDPS compliance and/or enforcement actions for all transfers towards the U.S. or other third countries.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Pete Linforth from Pixabay

Further Reading, Other Developments, and Coming Events (29 October)

Further Reading

  • “Cyberattacks hit Louisiana government offices as worries rise about election hacking” By Eric Geller — Politico. The Louisiana National Guard located and addressed a remote access trojan, a common precursor to ransomware attacks, in some of the state’s systems. This may or may not have been the beginning stages of an election day attack, and other states have made similar discoveries.
  • “Kicked off Weibo? Here’s what happens next.” By Shen Lu — Rest of World. Beijing is increasingly cracking down on dissent on Weibo, the People’s Republic of China’s (PRC) version of Twitter. People get banned for posting content critical of the PRC government or supportive of Hong Kong. Some are allowed back and are usually banned again. Others buy burner accounts, only to inevitably be banned again.
  • Inside the campaign to ‘pizzagate’ Hunter Biden” By Ben Collins and Brandy Zadrozny — NBC News. The sordid tale of how allies or advocates of the Trump Campaign have tried to propagate rumors of illegal acts committed by Hunter Biden in an attempt to smear former Vice President Joe Biden as was done to former Secretary of State Hillary Clinton in 2016.
  • Russians Who Pose Election Threat Have Hacked Nuclear Plants and Power Grid” By Nicole Perlroth — The New York Times. Some of Russia’s best hackers have been prowling around state and local governments’ systems for unknown ends. These are the same hackers, named Dragonfly or Energetic Bear by researchers, who have penetrated a number of electric utilities and the power grid in the United States, including a nuclear plant. It is not clear what these hackers want to do, which worries U.S. officials and cybersecurity experts and researchers.
  • “Activists Turn Facial Recognition Tools Against the Police” By Kashmir Hill — The New York Times. In an interesting twist, protestors and civil liberties groups are adopting facial recognition technology to try to identify police officers who attack protestors or commit acts of violence while refusing to identify themselves.

Other Developments

  • The United Kingdom’s Information Commissioner’s Office (ICO) has completed its investigation into the data brokering practices of Equifax, TransUnion, and Experian and found widespread privacy and data protection violations. Equifax and TransUnion were amenable to working with the ICO to correct abuses and shutter illegal products and businesses, but Experian was not. In the words of the ICO, Experian “did not accept that they were required to make the changes set out by the ICO, and as such were not prepared to issue privacy information directly to individuals nor cease the use of credit reference data for direct marketing purposes.” Consequently, Experian must effect specified changes within nine months or face “a fine of up to £20m or 4% of the organisation’s total annual worldwide turnover.” The ICO investigated using its powers under the British Data Protection Act 2018 and the General Data Protection Regulation (GDPR).
    • The ICO found widespread problems in the data brokering businesses of the three firms:
      • The investigation found how the three CRAs were trading, enriching and enhancing people’s personal data without their knowledge. This processing resulted in products which were used by commercial organisations, political parties or charities to find new customers, identify the people most likely to be able to afford goods and services, and build profiles about people.
      • The ICO found that significant ‘invisible’ processing took place, likely affecting millions of adults in the UK. It is ‘invisible’ because the individual is not aware that the organisation is collecting and using their personal data. This is against data protection law.
      • Although the CRAs varied widely in size and practice, the ICO found significant data protection failures at each company. As well as the failure to be transparent, the regulator found that personal data provided to each CRA, in order for them to provide their statutory credit referencing function, was being used in limited ways for marketing purposes. Some of the CRAs were also using profiling to generate new or previously unknown information about people, which is often privacy invasive.
      • Other thematic failings identified were:
        • Although the CRAs did provide some privacy information on their websites about their data broking activities, their privacy information did not clearly explain what they were doing with people’s data;
        • Separately, they were using certain lawful bases incorrectly for processing people’s data.
      • The ICO issued its report “Investigation into data protection compliance in the direct marketing data broking sector,” with these key findings:
        • Key finding 1: The privacy information of the CRAs did not clearly explain their processing with respect to their marketing services. CRAs have to revise and improve their privacy information. Those engaging in data broking activities must ensure that their privacy information is compliant with the GDPR.
        • Key finding 2: In the circumstances we assessed the CRAs were incorrectly relying on an exception from the requirement to directly provide privacy information to individuals (excluding where the data processed has come solely from the open electoral register or would be in conflict with the purpose of processing – such as suppression lists like the TPS). To comply with the GDPR, CRAs have to ensure that they provide appropriate privacy information directly to all the individuals for whom they hold personal data in their capacity as data brokers for direct marketing purposes. Those engaging in data broking activities must ensure individuals have the information required by Article 14.
        • Key finding 3: The CRAs were using personal data collected for credit referencing purposes for direct marketing purposes. The CRAs must not use this data for direct marketing purposes unless this has been transparently explained to individuals and they have consented to this use. Where the CRAs are currently using personal data obtained for credit referencing purposes for direct marketing, they must stop using it.
        • Key finding 4: The consents relied on by Equifax were not valid under the GDPR. To comply with the GDPR, CRAs must ensure that the consent is valid, if they intend to rely on consent obtained by a third party. Those engaging in data broking activities must ensure that any consents they use meet the standard of the GDPR.
        • Key finding 5: Legitimate interest assessments (LIAs) conducted by the CRAs in respect of their marketing services were not properly weighted. The CRAs must revise their LIAs to reconsider the balance of their own interests against the rights and freedoms of individuals in the context of their marketing services. Where an objective LIA does not favour the interests of the organisation, the processing of that data must stop until that processing can be made lawful. Those engaging in data broking activities must ensure that LIAs are conducted objectively taking into account all factors.
        • Key finding 6: In some cases Experian was obtaining data on the basis of consent and then processing it on the basis of legitimate interests. Switching from consent to legitimate interests in this situation is not appropriate. Where personal data is collected by a third party and shared for direct marketing purposes on the basis of consent, then the appropriate lawful basis for subsequent processing for these purposes will also be consent. Experian must therefore delete any data supplied to it on the basis of consent that it is processing on the basis of legitimate interests.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the U.S. Cyber Command Cyber National Mission Force (CNMF) issued a joint advisory on “the tactics, techniques, and procedures (TTPs) used by North Korean advanced persistent threat (APT) group Kimsuky—against worldwide targets—to gain intelligence on various topics of interest to the North Korean government.” CISA, FBI, and CNMF recommended that “individuals and organizations within this target profile increase their defenses and adopt a heightened state of awareness…[and] [p]articularly important mitigations include safeguards against spearphishing, use of multi-factor authentication, and user awareness training.” The agencies noted:
    • This advisory describes known Kimsuky TTPs, as found in open-source and intelligence reporting through July 2020. The target audience for this advisory is commercial sector businesses desiring to protect their networks from North Korean APT activity.
    • The agencies highlighted the key findings:
      • Kimsuky is most likely tasked by the North Korean regime with a global intelligence gathering mission.
      • Kimsuky employs common social engineering tactics, spearphishing, and watering hole attacks to exfiltrate desired information from victims.
      • Kimsuky is most likely to use spearphishing to gain initial access into victim hosts or networks.
      • Kimsuky conducts its intelligence collection activities against individuals and organizations in South Korea, Japan, and the United States.
      • Kimsuky focuses its intelligence collection activities on foreign policy and national security issues related to the Korean peninsula, nuclear policy, and sanctions.
      • Kimsuky specifically targets:
        • Individuals identified as experts in various fields,
        • Think tanks, and
        • South Korean government entities.
  • European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski made remarks at the European Union Agency for Cybersecurity’s (ENISA) Annual Privacy Forum and advocated for a European Union (EU) moratorium on the rollout of new technology like facial recognition and artificial intelligence (AI) until this “development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies.” He claimed the EU could maintain the rights of its people while taking the lead in cutting edge technologies. Wiewiórowski asserted:
    • Now we are entering a new phase of contactless tracking of individuals in public areas. Remote facial recognition technology has developed quickly; so much so that some authorities and private entities want to use it in many places. If this all becomes true, we could be tracked everywhere in the world.
    • I do not believe that such a development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies. The EDPS therefore, together with other authorities, supports a moratorium on the rollout of such technologies. The aim of this moratorium would be twofold. Firstly, an informed and democratic debate would take place. Secondly, the EU and Member States would put in place all the appropriate safeguards, including a comprehensive legal framework, to guarantee the proportionality of the respective technologies and systems in relation to their specific use.
    • As an example, any new regulatory framework for AI should, in my view:
      • apply both to EU Member States and to EU institutions, offices, bodies and agencies;
      • be designed to protect individuals, communities and society as a whole, from any negative impact;
      • propose a robust and nuanced risk classification scheme, ensuring that any significant potential harm posed by AI applications is matched with appropriate mitigating measures.
    • We must ensure that Europe’s leading role in AI, or any other technology in development, does not come at the cost of our fundamental rights. Europe must remain true to its values and provide the grounds for innovation. We will only get it right if we ensure that technology serves both individuals and society.
    • Faced with these developments, transparency is a starting point for proper debate and assessment. Transparency for citizens puts them in a position to understand what they are subject to, and to decide whether they want to accept the infringements of their rights.
  • The Office of the Privacy Commissioner of Canada (OPC) and “its international counterparts” laid out their thinking on “stronger privacy protections and greater accountability in the development and use of facial recognition technology and artificial intelligence (AI) systems” at the recent Global Privacy Assembly. The OPC summarized the two resolutions adopted at the assembly:
    • the resolution on facial recognition technology acknowledges that this technology can benefit security and public safety. However, it asserts that facial recognition can erode data protection, privacy and human rights because it is highly intrusive and enables widespread surveillance that can produce inaccurate results. The resolution also calls on data protection authorities to work together to develop principles and expectations that strengthen data protection and ensure privacy by design in the development of innovative uses of this technology.
    • a resolution on the development and use of AI systems that urges organizations developing or using them to ensure human accountability for AI systems and address adverse impacts on human rights. The resolution encourages governments to amend personal data protection laws to make clear legal obligations for accountability in the development and use of AI. It also calls on governments, public authorities and other stakeholders to work with data protection authorities to ensure legal compliance, accountability and ethics in the development and use of AI systems.
  • The Alliance for Securing Democracy (ASD) at the German Marshall Fund of the United States (GMFUS) issued a report, “A Future Internet for Democracies: Contesting China’s Push for Dominance in 5G, 6G, and the Internet of Everything” that “provides a roadmap for contesting China’s growing dominance in this critical information arena across infrastructure, application, and governance dimensions—one that doubles down on geostrategic interests and allied cooperation.” ASD stated “[a]n allied approach that is rooted firmly in shared values and resists an authoritarian divide-and-conquer strategy is vital for the success of democracies in commercial, military, and governance domains.” ASD asserted:
    • The United States and its democratic allies are engaged in a contest for the soul of the Future Internet. Conceived as a beacon of free expression with the power to tear down communication barriers across free and unfree societies alike, the Internet today faces significant challenges to its status as the world’s ultimate connector. In creating connectivity and space for democratic speech, it has also enabled new means of authoritarian control and the suppression of human rights through censorship and surveillance. As tensions between democracies and the People’s Republic of China (PRC) heat up over Internet technologies, the prospect of a dichotomous Internet comes more sharply into focus: a democratic Internet where information flows freely and an authoritarian Internet where it is tightly controlled—separated not by an Iron Curtain, but a Silicon one. The Future Internet is deeply enmeshed in the dawning information contest between autocracies and democracies. It is the base layer—the foundation—on which communication takes place and the entry point into narrative and societal influence. How the next generation of Internet technologies are created, defined, governed, and ultimately used will have an outsized impact on this information contest—and the larger geopolitical contest—between democracy and authoritarianism.
    • ASD found:
      • The Chinese Communist Party (CCP) has a history of creating infrastructure dependence and using it for geopolitical leverage. As such, China’s global market dominance in Future Internet infrastructure carries unacceptable risks for democracies.
      • The contest to shape 6G standards is already underway, with China leading the charge internationally. As the United States ponders how it ended up on the back foot on 5G, China is moving ahead with new proposals that would increase authoritarian control and undermine fundamental freedoms.
      • The battle over the Future Internet is playing out in the Global South. As more developed nations eschew Chinese network equipment, democracies’ response has largely ignored this global build-out of networks and applications in the proving ground of the developing world that threaten both technological competitiveness and universal rights.
      • China is exporting “technology to anticipate crime”—a dystopian future police state. “Minority report”-style pre-criminal arrests decimate the practice of the rule of law centered in the presumption of innocence.
      • Personal Data Exfiltration: CCP entities see “Alternative Data” as “New Oil” for AI-driven applications in the Internet-of-Everything. These applications provide new and expanded avenues for mass data collection, as much as they depend on this data to succeed–giving China the means and the motivation to vacuum up the world’s data.
      • Data in, propaganda out: Future Internet technology presents opportunities to influence the information environment, including the development of information applications that simultaneously perform big data collection. Chinese companies are building information platforms into application technologies, reimagining both the public square and private locales as tools for propaganda.
      • Already victims of intellectual property theft by China, the United States and its democratic partners are ill-prepared to secure sensitive information as the Future Internet ecosystem explodes access points. This insecurity will continue to undermine technological competitiveness and national security and compound these effects in new ways.
      • China outnumbers the United States nearly two-to-one on participation in and leadership of critical international Future Internet standards-setting efforts. Technocratic standards bodies are becoming unlikely loci of great power technical competition, as Beijing uses leadership posts to shape the narrative and set the course for the next generation of Internet technologies to support China’s own technological leadership, governance norms, and market access.
      • The world’s oldest UN agency is being leveraged as a propaganda mouthpiece for the CCP’s AI and Future Internet agenda, whitewashing human rights abuses under a banner of “AI for Good.” The upshot is an effort to shape the UN Sustainable Development agenda to put economic development with authoritarian technology–not individual liberty—at their center.
      • A symbiotic relationship has developed between China’s Belt and Road Initiative and UN agencies involved in Future Internet and digital development. In this way, China leverages the United Nations enterprise to capture market dominance in next generation technologies.
  • A Dutch think tank has put together the “(best) practices of Asian countries and the United States in the field of digital connectivity” in the hopes of realizing European Commission President Ursula von der Leyen’s goal of making the next ten years “Europe’s Digital Decade.” The Clingendael Institute explained that the report “covers a wide range of topics related to digital regulation, the e-economy, and telecommunications infrastructure.” The Clingendael Institute asserted:
    • Central to the debate and any policy decision on digital connectivity are the trade-offs concerning privacy, business interests and national security. While all regulations are a combination of these three, the United States (US) has taken a path that prioritises the interests of businesses. This is manifested, for example, in the strong focus on free data flows, both personal and non-personal, to strengthen companies’ competitive advantage in collecting and using data to develop themselves. China’s approach, by contrast, strongly focuses on state security, wherein Chinese businesses are supported and leveraged to pre-empt threats to the country and, more specifically, to the Chinese Communist Party. This is evident from its strict data localisation requirements to prevent any data from being stored outside its borders and a mandatory security assessment for cross-border transfers. The European Union represents a third way, emphasising individuals’ privacy and a human-centred approach that puts people first, and includes a strong focus on ethics, including in data-protection regulations. This Clingendael Report aims to increase awareness and debate about the trade-offs of individual, state and business interests in all subsets of digital connectivity. This is needed to reach a more sustainable EU approach that will outlast the present decade. After all, economic competitiveness is required to secure Europe and to further its principled approach to digital connectivity in the long term. The analysis presented here covers a wide range of topics within digital connectivity’s three subsets: regulation; business; and telecommunications infrastructure. Aiming to contribute to improved European policy-making, this report discusses (best) practices of existing and rising digital powers in Asia and the United States. In every domain, potential avenues for cooperation with those countries are explored as ways forward for the EU.
    • Findings show that the EU and its member states are slowly but steadily moving from being mainly a regulatory power to also claiming their space as a player in the digitalised world. Cloud computing initiative GAIA-X is a key example, constituting a proactive alternative to American and Chinese Cloud providers that is strongly focused on uniting small European initiatives to create a strong and sustainable Cloud infrastructure. Such initiatives, including also the more recent Next Generation Internet (NGI), not only help defend and push European digital norms and standards, but also assist the global competitiveness of European companies and business models by facilitating the availability of large data-sets as well as scaling up. Next to such ‘EU only’ initiatives, working closely together with like-minded partners will benefit the EU and its member states as they seek to finetune and implement their digital strategies. The United States and Asian partners, particularly Japan, South Korea, India and Singapore, are the focus of attention here.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.


Image by David Peterson from Pixabay

Further Reading, Other Developments, and Coming Events (16 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The House Homeland Security Committee will hold a hearing titled “Worldwide Threats to the Homeland” on 17 September with the following witnesses:
    • Chad Wolf, Acting Secretary, Department of Homeland Security
    • Christopher Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center (NCTC)
  • On 17 September, the House Energy and Commerce Committee’s Communications & Technology Subcommittee will hold a hearing titled “Trump FCC: Four Years of Lost Opportunities.”
  • The House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee will hold a hearing titled “Interim Review of the National Security Commission on Artificial Intelligence Effort and Recommendations” on 17 September with these witnesses:
    • Dr. Eric Schmidt, Chairman, National Security Commission on Artificial Intelligence
    • HON Robert Work, Vice Chairman, National Security Commission on Artificial Intelligence
    • HON Mignon Clyburn, Commissioner, National Security Commission on Artificial Intelligence
    • Dr. José-Marie Griffiths, Commissioner, National Security Commission on Artificial Intelligence
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” The agency has released its agenda and explained:
    • The workshop will also feature four panel discussions that will focus on: case studies on data portability rights in the European Union, India, and California; case studies on financial and health portability regimes; reconciling the benefits and risks of data portability; and the material challenges and solutions to realizing data portability’s potential.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General, National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director, Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division, and Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency, Department of Homeland Security
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • The United States House of Representatives took up and passed two technology bills on 14 September. One of the bills, “Internet of Things (IoT) Cybersecurity Improvement Act of 2020” (H.R. 1668), was discussed in yesterday’s Technology Policy Update as part of an outlook on Internet of Things (IoT) legislation (see here for analysis). The House passed a revised version by voice vote, but its fate in the Senate may lie with the Senate Homeland Security & Governmental Affairs Committee, whose chair, Senator Ron Johnson (R-WI), has blocked a number of technology bills during his tenure to the chagrin of some House stakeholders. The House also passed the “AI in Government Act of 2019” (H.R.2575) that would establish an AI Center of Excellence within the General Services Administration that would
    • “(1) advise and promote the efforts of the Federal Government in developing innovative uses of artificial intelligence by the Federal Government to the benefit of the public; and
    • (2) improve cohesion and competency in the use of artificial intelligence.”
    • Also, this bill would direct the Office of Management and Budget (OMB) to “issue a memorandum to the head of each agency that shall—
      • inform the development of artificial intelligence governance approaches by those agencies regarding technologies and applications that—
        • are empowered or enabled by the use of artificial intelligence within that agency; and
        • advance the innovative use of artificial intelligence for the benefit of the public while upholding civil liberties, privacy, and civil rights;
      • consider ways to reduce barriers to the use of artificial intelligence in order to promote innovative application of those technologies for the benefit of the public, while protecting civil liberties, privacy, and civil rights;
      • establish best practices for identifying, assessing, and mitigating any bias on the basis of any classification protected under Federal nondiscrimination laws or other negative unintended consequence stemming from the use of artificial intelligence systems; and
      • provide a template of the required contents of the agency Governance Plans.”
    • The House Energy and Commerce Committee marked up and reported out more than 30 bills last week including:
      • The “Consumer Product Safety Inspection Enhancement Act” (H.R. 8134) that “would amend the Consumer Product Safety Act to enhance the Consumer Product Safety Commission’s (CPSC) ability to identify unsafe consumer products entering the United States, especially e-commerce shipments entering under the de minimis value exemption. Specifically, the bill would require the CPSC to enhance the targeting, surveillance, and screening of consumer products. The bill also would require electronic filing of certificates of compliance for all consumer products entering the United States.”
      • The bill directs the CPSC to: 1) examine a sampling of de minimis shipments and shipments coming from China; 2) detail plans and timelines to effectively address targeting and screening of de minimis shipments; 3) establish metrics by which to evaluate the effectiveness of the CPSC’s efforts in this regard; 4) assess projected technology, resources, and staffing necessary; and 5) submit a report to Congress regarding such efforts. The bill further directs the CPSC to hire at least 16 employees every year until staffing needs are met to help identify violative products at ports.
      • The “AI for Consumer Product Safety Act” (H.R. 8128) that “would direct the Consumer Product Safety Commission (CPSC) to establish a pilot program to explore the use of artificial intelligence for at least one of the following purposes: 1) tracking injury trends; 2) identifying consumer product hazards; 3) monitoring the retail marketplace for the sale of recalled consumer products; or 4) identifying unsafe imported consumer products.” The revised bill passed by the committee “changes the title of the bill to the “Consumer Safety Technology Act”, and adds the text based on the Blockchain Innovation Act (H.R. 8153) and the Digital Taxonomy Act (H.R. 2154)…[and] adds sections that direct the Department of Commerce (DOC), in consultation with the Federal Trade Commission (FTC), to conduct a study and submit to Congress a report on the state of blockchain technology in commerce, including its use to reduce fraud and increase security.” The revised bill “would also require the FTC to submit to Congress a report and recommendations on unfair or deceptive acts or practices relating to digital tokens.”
      • The “American Competitiveness Of a More Productive Emerging Tech Economy Act” or the “American COMPETE Act” (H.R. 8132) “directs the DOC and the FTC to study and report to Congress on the state of the artificial intelligence, quantum computing, blockchain, and the new and advanced materials industries in the U.S…[and] would also require the DOC to study and report to Congress on the state of the Internet of Things (IoT) and IoT manufacturing industries as well as the three-dimensional printing industry” involving “among other things: 1) listing industry sectors that develop and use each technology and public-private partnerships focused on promoting the adoption and use of each such technology; 2) establishing a list of federal agencies asserting jurisdiction over such industry sectors; and 3) assessing risks and trends in the marketplace and supply chain of each technology.”
      • The bill would direct the DOC to study and report on the effect of unmanned delivery services on U.S. businesses conducting interstate commerce. In addition to these report elements, the bill would require the DOC to examine safety risks and effects on traffic congestion and jobs of unmanned delivery services.
      • Finally, the bill would require the FTC to study and report to Congress on how artificial intelligence may be used to address online harms, including scams directed at senior citizens, disinformation or exploitative content, and content furthering illegal activity.
  • The National Institute of Standards and Technology (NIST) issued NIST Interagency or Internal Report 8272 “Impact Analysis Tool for Interdependent Cyber Supply Chain Risks” designed to help public and private sector entities better address complicated, complex supply chain risks. NIST stated “[t]his publication describes how to use the Cyber Supply Chain Risk Management (C-SCRM) Interdependency Tool that has been developed to help federal agencies identify and assess the potential impact of cybersecurity events in their interconnected supply chains.” NIST explained
    • More organizations are becoming aware of the importance of identifying cybersecurity risks associated with extensive, complicated supply chains. Several solutions have been developed to help manage supply chains; most focus on contract management or compliance. There is a need to provide organizations with a systematic and more usable way to evaluate the potential impacts of cyber supply chain risks relative to an organization’s risk appetite. This is especially important for organizations with complex supply chains and highly interdependent products and suppliers.
    • This publication describes one potential way to visualize and measure these impacts: a Cyber Supply Chain Risk Management (C-SCRM) Interdependency Tool (hereafter “Tool”), which is designed to provide a basic measurement of the potential impact of a cyber supply chain event. The Tool is not intended to measure the risk of an event, where risk is defined as a function of threat, vulnerability, likelihood, and impact. Research conducted by the authors of this publication found that, at the time of publication, existing cybersecurity risk tools and research focused on threats, vulnerabilities, and likelihood, but impact was frequently overlooked. Thus, this Tool is intended to bridge that gap and enable users and tool developers to create a more complete understanding of an organization’s risk by measuring impact in their specific environments.
    • The Tool also provides the user greater visibility over the supply chain and the relative importance of particular projects, products, and suppliers (hereafter referred to as “nodes”) compared to others. This can be determined by examining the metrics that contribute to a node’s importance, such as the amount of access a node has to the acquiring organization’s IT network, physical facilities, and data. By understanding which nodes are the most important in their organization’s supply chain, the user can begin to understand the potential impact a disruption of that node may cause on business operations. The user can then prioritize the completion of risk mitigating actions to reduce the impact a disruption would cause to the organization’s supply chain and overall business.
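The node-ranking idea NIST describes can be illustrated with a toy calculation. This is a hypothetical sketch, not NIST’s actual Tool or its metrics: the weights, metric names, and supplier data below are invented for illustration. Each supplier “node” receives an importance score from weighted access metrics (network, physical, and data access), and nodes are then ranked so mitigation effort targets the highest-impact potential disruptions first.

```python
# Hypothetical illustration only -- not NIST's C-SCRM Interdependency Tool.
# Each access metric is a 0-1 value describing how much access a supply
# chain node has to the acquiring organization's assets.

def node_importance(network_access: float,
                    physical_access: float,
                    data_access: float,
                    weights: tuple = (0.4, 0.2, 0.4)) -> float:
    """Combine access metrics into a single importance score (weights are illustrative)."""
    w_net, w_phys, w_data = weights
    return w_net * network_access + w_phys * physical_access + w_data * data_access

# Invented example nodes with differing levels of access.
suppliers = {
    "Supplier A": node_importance(0.9, 0.1, 0.8),  # broad network and data access
    "Supplier B": node_importance(0.2, 0.7, 0.3),  # mostly physical access
    "Supplier C": node_importance(0.5, 0.5, 0.5),  # moderate across the board
}

# Rank nodes so that risk-mitigating actions are prioritized for the nodes
# whose disruption would most affect business operations.
ranked = sorted(suppliers, key=suppliers.get, reverse=True)
print(ranked)
```

Here Supplier A ranks first because its heavy network and data access dominates the weighted score; in a real assessment the metrics and weights would come from the organization’s own environment, as the NIST report emphasizes.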
  • In a blog post, Microsoft released its findings on the escalating threats to political campaigns and figures during the run-up to the United States’ (U.S.) election. This warning also served as an advertisement for Microsoft’s security products. Be that as it may, these findings echo what U.S. security services have been saying for months. Microsoft stated
    • In recent weeks, Microsoft has detected cyberattacks targeting people and organizations involved in the upcoming presidential election, including unsuccessful attacks on people associated with both the Trump and Biden campaigns, as detailed below. We have and will continue to defend our democracy against these attacks through notifications of such activity to impacted customers, security features in our products and services, and legal and technical disruptions. The activity we are announcing today makes clear that foreign activity groups have stepped up their efforts targeting the 2020 election as had been anticipated, and is consistent with what the U.S. government and others have reported. We also report here on attacks against other institutions and enterprises worldwide that reflect similar adversary activity.
    • We have observed that:
      • Strontium, operating from Russia, has attacked more than 200 organizations including political campaigns, advocacy groups, parties and political consultants
      • Zirconium, operating from China, has attacked high-profile individuals associated with the election, including people associated with the Joe Biden for President campaign and prominent leaders in the international affairs community
      • Phosphorus, operating from Iran, has continued to attack the personal accounts of people associated with the Donald J. Trump for President campaign
    • The majority of these attacks were detected and stopped by security tools built into our products. We have directly notified those who were targeted or compromised so they can take action to protect themselves. We are sharing more about the details of these attacks today, and where we’ve named impacted customers, we’re doing so with their support.
    • What we’ve seen is consistent with previous attack patterns that not only target candidates and campaign staffers but also those they consult on key issues. These activities highlight the need for people and organizations involved in the political process to take advantage of free and low-cost security tools to protect themselves as we get closer to election day. At Microsoft, for example, we offer AccountGuard threat monitoring, Microsoft 365 for Campaigns and Election Security Advisors to help secure campaigns and their volunteers. More broadly, these attacks underscore the continued importance of work underway at the United Nations to protect cyberspace and initiatives like the Paris Call for Trust and Security in Cyberspace.
  • The European Data Protection Supervisor (EDPS) has reiterated and expanded upon his calls for caution, prudence, and adherence to European Union (EU) law and principles in the use of artificial intelligence, especially as the EU looks to revamp its approach to AI and data protection. In a blog post, EDPS Wojciech Wiewiórowski stated:
    • The expectations of the increasing use of AI and the related economic advantages for those who control the technologies, as well as its appetite for data, have given rise to fierce competition about technological leadership. In this competition, the EU strives to be a frontrunner while staying true to its own values and ideals.
    • AI comes with its own risks and is not an innocuous, magical tool, which will heal the world harmlessly. For example, the rapid adoption of AI by public administrations in hospitals, utilities and transport services, financial supervisors, and other areas of public interest is considered in the EC White Paper ‘essential’, but we believe that prudency is needed. AI, like any other technology, is a mere tool, and should be designed to serve humankind. Benefits, costs and risks should be considered by anyone adopting a technology, especially by public administrations who process great amounts of personal data.
    • The increase in adoption of AI has not been (yet?) accompanied by a proper assessment of what the impact on individuals and on our society as a whole will likely be. Think especially of live facial recognition (remote biometric identification in the EC White Paper). We support the idea of a moratorium on automated recognition in public spaces of human features in the EU, of faces but also and importantly of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals.
    • Let’s not rush AI, we have to get it straight so that it is fair and that it serves individuals and society at large.
    • The context in which the consultation for the Data Strategy was conducted gave a prominent place to the role of data in matters of public interest, including combating the virus. This is good and right as the GDPR was crafted so that the processing of personal data should serve humankind. There are existing conditions under which such “processing for the public good” could already take place, and without which the necessary trust of data subjects would not be possible.
    • However, there is a substantial persuasive power in the narratives nudging individuals to ‘volunteer’ their data to address highly moral goals. Concepts such as ‘Data altruism’, or ‘Data donation’ and their added value are not entirely clear and there is a need to better define and lay down their scope, and possible purposes, for instance, in the context of scientific research in the health sector. The fundamental right to the protection of personal data cannot be ‘waived’ by the individual concerned, be it through a ‘donation’ or through a ‘sale’ of personal data. The data controller is fully bound by the personal data rules and principles, such as purpose limitation even when processing data that have been ‘donated’ i.e. when consent to the processing had been given by the individual.

Further Reading

  • “Peter Thiel Met With The Racist Fringe As He Went All In On Trump” By Rosie Gray and Ryan Mac — BuzzFeed News. A fascinating article about one of the technology world’s more interesting figures. As part of his decision to ally himself with Donald Trump during the 2016 presidential campaign, Peter Thiel also met with avowed white supremacists. However, it appears that the alliance is no longer worthy of his financial assistance or his public support, as he was reportedly disturbed by the Administration’s response to the pandemic. Meanwhile, his company Palantir has flourished during the Trump Administration and may go public right before matters change under a Biden Administration.
  • “TikTok’s Proposed Deal Seeks to Mollify U.S. and China” By David McCabe, Ana Swanson and Erin Griffith — The New York Times. ByteDance is apparently trying to mollify both Washington and Beijing by bringing Oracle onboard as a “trusted technology partner,” an arrangement that may be acceptable to both nations under their export control and national security regimes. Oracle handling and safeguarding TikTok user data would seem to address the Trump Administration’s concerns, while not selling the company nor permitting Oracle to access its recommendation algorithm would seem to appease the People’s Republic of China (PRC). Moreover, United States (U.S.) investors would hold control over TikTok even though PRC investors would maintain their stakes. Such an arrangement may satisfy the Committee on Foreign Investment in the United States (CFIUS), which has ordered ByteDance to sell the app that is an integral part of TikTok. The wild card, as always, is where President Donald Trump ultimately comes out on the deal.
  • “Oracle’s courting of Trump may help it land TikTok’s business and coveted user data” By Jay Greene and Ellen Nakashima — The Washington Post. This piece dives into why Oracle, at first blush, seems like an unlikely suitor for TikTok, but its eroding business position vis-à-vis cloud companies like Amazon explains its desire to diversify. Also, Oracle’s role as a data broker makes all the user data available from TikTok very attractive.
  • “Chinese firm harvests social media posts, data of prominent Americans and military” By Gerry Shih — The Washington Post. Another view on Shenzhen Zhenhua Data Technology, the entity from the People’s Republic of China (PRC) exposed for collecting the personal data of more than 2.4 million westerners, many of whom hold positions of power and influence. This article quotes a number of experts who were allowed to examine what was leaked of the database and who are of the view that the PRC has very little in the way of actionable intelligence at this point. The country is leveraging publicly available big data from a variety of sources and may ultimately make something useful from these data.
  • “‘This is f—ing crazy’: Florida Latinos swamped by wild conspiracy theories” By Sabrina Rodriguez and Marc Caputo — Politico. A number of sources are spreading rumors about former Vice President Joe Biden and the Democrats generally in order to curb support among a key demographic the party will need to carry overwhelmingly to win Florida.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Alexander Sinn on Unsplash

Further Reading, Other Developments, and Coming Events (13 August)

Here are Further Reading, Other Developments, and Coming Events:

Coming Events

  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Other Developments

  • Senate Intelligence Committee Acting Chair Marco Rubio (R-FL) and Vice Chairman Mark Warner (D-VA) released a statement indicating the committee had voted to adopt the fifth and final volume of its investigation of the Russian Federation’s interference in the 2016 election. The committee had submitted the report to the Intelligence Community for vetting and has received it back with edits and redactions. The report could be released sometime over the next few weeks. Rubio and Warner stated “the Senate Intelligence Committee voted to adopt the classified version of the final volume of the Committee’s bipartisan Russia investigation. In the coming days, the Committee will work to incorporate any additional views, as well as work with the Intelligence Community to formalize a properly redacted, declassified, publicly releasable version of the Volume 5 report.” The Senate Intelligence Committee has released four previous reports.
  • The National Institute of Standards and Technology (NIST) is accepting comments until 11 September on draft Special Publication 800-53B, “Control Baselines for Information Systems and Organizations,” a guidance document that will serve a key role in the United States government’s efforts to secure and protect the networks and systems it operates and those run by federal contractors. NIST explained:
    • This publication establishes security and privacy control baselines for federal information systems and organizations and provides tailoring guidance for those baselines. The use of the security control baselines is mandatory, in accordance with OMB Circular A-130 [OMB A-130] and the provisions of the Federal Information Security Modernization Act [FISMA], which requires the implementation of a set of minimum controls to protect federal information and information systems. Whereas use of the privacy control baseline is not mandated by law or [OMB A-130], SP 800-53B, along with other supporting NIST publications, is designed to help organizations identify the security and privacy controls needed to manage risk and satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974 [PRIVACT], selected OMB policies (e.g., [OMB A-130]), and designated Federal Information Processing Standards (FIPS), among others.
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released an “Election Vulnerability Reporting Guide” to provide “election administrators with a step-by-step guide, list of resources, and a template for establishing a successful vulnerability disclosure program to address possible vulnerabilities in their state and local election systems…[and] [t]he six steps include:
    • Step 1: Identify Systems Where You Would Accept Security Testing, and those Off-Limits
    • Step 2: Draft an Easy-to-Read Vulnerability Disclosure Policy (See Appendix III)
    • Step 3: Establish a Way to Receive Reports/Conduct Follow-On Communication
    • Step 4: Assign Someone to Thank and Communicate with Researchers
    • Step 5: Assign Someone to Vet and Fix the Vulnerabilities
    • Step 6: Consider Sharing Information with Other Affected Parties
  • The United Kingdom’s Information Commissioner’s Office (ICO) has issued “Guidance on AI and data protection” that “clarifies how you can assess the risks to rights and freedoms that AI can pose from a data protection perspective; and the appropriate measures you can implement to mitigate them.” The ICO explained “[w]hile data protection and ‘AI ethics’ overlap, this guidance does not provide generic ethical or design principles for your use of AI.” The ICO stated “[i]t corresponds to data protection principles, and is structured as follows:
    • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
    • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
    • part three addresses data minimisation and security; and
    • part four covers compliance with individual rights, including rights related to automated decision-making.
  • 20 state attorneys general wrote Facebook Chief Executive Officer Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg “to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination.” They also asked “that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.” The attorneys general said that “[b]ased on our collective experience, we believe that Facebook should take additional actions including the following steps—many of which are highlighted in Facebook’s recent Civil Rights Audit—to strengthen its commitment to civil rights and fighting disinformation and discrimination:
    • Aggressively enforce Facebook policies against hate speech and organized hate organizations: Although Facebook has developed policies against hate speech and organizations that peddle it, we remain concerned that Facebook’s policies on Dangerous Individuals and Organizations, including but not limited to its policies on white nationalist and white supremacist content, are not enforced quickly and comprehensively enough. Content that violates Facebook’s own policies too often escapes removal just because it comes as coded language, rather than specific magic words. And even where Facebook takes steps to address a particular violation, it often fails to proactively address the follow-on actions by replacement or splinter groups that quickly emerge.
    • Allow public, third-party audits of hate content and enforcement: To gauge the ongoing progress of Facebook’s enforcement efforts, independent experts should be permitted access to the data necessary to conduct regular, transparent third-party audits of hate and hate-related misinformation on the platform, including any information made available to the Global Oversight Board. As part of this effort, Facebook should capture data on the prevalence of different forms of hate content on the platform, whether or not covered by Facebook’s own community standards, thus allowing the public to determine whether enforcement of anti-hate policies differs based on the type of hate content at issue.
    • Commit to an ongoing, independent analysis of Facebook’s content population scheme and the prompt development of best practices guidance: By funneling users toward particular types of content, Facebook’s content population scheme, including its algorithms, can push users into extremist online communities that feature divisive and inflammatory messages, often directed at particular groups. Although Facebook has conducted research and considered programs to reduce this risk, there is still no mandatory guidance for coders and other teams involved in content population. Facebook should commit to an ongoing, independent analysis of its content population scheme, including its algorithms, and also continuously implement mandatory protocols as best practices are identified to curb bias and prevent recommendations of hate content and groups.
    • Expand policies limiting inflammatory advertisements that vilify minority groups: Although Facebook currently prohibits ads that claim that certain people, because of their membership in a protected group, pose a threat to the physical safety of communities or the nation, its policies still allow attacks that characterize such groups as threats to national culture or values. The current prohibition should be expanded to include such ads.
  • New Zealand’s Ministry of Statistics “launched the Algorithm Charter for Aotearoa New Zealand” that “signals that [the nation’s agencies] are committed to being consistent, transparent and accountable in their use of algorithms.”
    • The Ministry explained “[t]he Algorithm Charter is part of a wider ecosystem and works together with existing tools, networks and research, including:
      • Principles for the Safe and Effective Use of Data and Analytics (Privacy Commissioner and Government Chief Data Steward, 2018)
      • Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation and Otago University, 2019)
      • Trustworthy AI in Aotearoa – AI Principles (AI Forum New Zealand, 2020)
      • Open Government Partnership, an international agreement to increase transparency.
      • Data Protection and Use Policy (Social Wellbeing Agency, 2020)
      • Privacy, Human Rights and Ethics Framework (Ministry of Social Development).
  • The European Union (EU) imposed its first cyber sanctions under its Framework for a Joint EU Diplomatic Response to Malicious Cyber Activities (aka the cyber diplomacy toolbox) against six hackers and three entities from the Russian Federation, the People’s Republic of China (PRC) and the Democratic People’s Republic of Korea for attacks against the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands, the malware attacks known as NotPetya and WannaCry, and Operation Cloud Hopper. The EU’s cyber sanctions follow sanctions the United States has placed on a number of people and entities from the same nations and also indictments the U.S. Department of Justice has announced over the years. The sanctions are part of the effort to levy costs on nations and actors that conduct cyber attacks. The EU explained:
    • The attempted cyber-attack was aimed at hacking into the Wi-Fi network of the OPCW, which, if successful, would have compromised the security of the network and the OPCW’s ongoing investigatory work. The Netherlands Defence Intelligence and Security Service (DISS) (Militaire Inlichtingen- en Veiligheidsdienst – MIVD) disrupted the attempted cyber-attack, thereby preventing serious damage to the OPCW.
    • “WannaCry” disrupted information systems around the world by targeting information systems with ransomware and blocking access to data. It affected information systems of companies in the Union, including information systems relating to services necessary for the maintenance of essential services and economic activities within Member States.
    • “NotPetya” or “EternalPetya” rendered data inaccessible in a number of companies in the Union, wider Europe and worldwide, by targeting computers with ransomware and blocking access to data, resulting amongst others in significant economic loss. The cyber-attack on a Ukrainian power grid resulted in parts of it being switched off during winter.
    • “Operation Cloud Hopper” has targeted information systems of multinational companies in six continents, including companies located in the Union, and gained unauthorised access to commercially sensitive data, resulting in significant economic loss.
  • The United States’ Federal Communications Commission (FCC) is asking for comments on the Department of Commerce’s National Telecommunications and Information Administration’s (NTIA) petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic. Comments are due by 2 September.
  • The Australian Competition & Consumer Commission (ACCC) released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks broke down between the Australian Treasury and the digital platforms.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses” according to the agency’s press release. In its initial filing, the ACCC claims that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services were unaware that their location information was still being collected and used by Google, for it was not readily apparent that Web & App Activity also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) “released core guidance documentation for the Trusted Internet Connections (TIC) program, developed to assist agencies in protecting modern information technology architectures and services.” CISA explained “In accordance with the Office of Management and Budget (OMB) Memorandum (M) 19-26: Update to the TIC Initiative, TIC 3.0 expands on the original initiative to drive security standards and leverage advances in technology to secure a wide spectrum of agency network architectures.” Specifically, CISA released three core guidance documents:
    • Program Guidebook (Volume 1) – Outlines the modernized TIC program and includes its historical context
    • Reference Architecture (Volume 2) – Defines the concepts of the program to guide and constrain the diverse implementations of the security capabilities
    • Security Capabilities Catalog (Volume 3) – Indexes security capabilities relevant to TIC
  • Senators Ron Wyden (D-OR), Bill Cassidy (R-LA) and ten other Members wrote the Federal Trade Commission (FTC) urging the agency “to investigate widespread privacy violations by companies in the advertising technology (adtech) industry that are selling private data about millions of Americans, collected without their knowledge or consent from their phones, computers, and smart TVs.” They asked the FTC “to use its authority to conduct broad industry probes under Section 6(b) of the FTC Act to determine whether adtech companies and their data broker partners have violated federal laws prohibiting unfair and deceptive business practices.” They argued “[t]he FTC should not proceed with its review of the Children’s Online Privacy Protection Act (COPPA) Rule before it has completed this investigation.”
  •  “100 U.S. women lawmakers and current and former legislators from around the world,” including Speaker of the House Nancy Pelosi (D-CA), sent a letter to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg urging the company “to take decisive action to protect women from rampant and increasing online attacks on their platform that have caused many women to avoid or abandon careers in politics and public service.” They noted “[j]ust a few days ago, a manipulated and widely shared video that depicted Speaker Pelosi slurring her speech was once again circulating on major social media platforms, gaining countless views before TikTok, Twitter, and YouTube all removed the footage…[and] [t]he video remains on Facebook and is labeled “partly false,” continuing to gain millions of views.” The current and former legislators “called on Facebook to enforce existing rules, including:
    • Quick removal of posts that threaten candidates with physical violence, sexual violence or death, and that glorify, incite or praise violence against women; disable the relevant accounts, and refer offenders to law enforcement.
    • Eliminate malicious hate speech targeting women, including violent, objectifying or dehumanizing speech, statements of inferiority, and derogatory sexual terms;
    • Remove accounts that repeatedly violate terms of service by threatening, harassing or doxing or that use false identities to attack women leaders and candidates; and
    • Remove manipulated images or videos misrepresenting women public figures.
  • The United States’ Departments of Commerce and Homeland Security released an update “highlighting more than 50 activities led by industry and government that demonstrate progress in the drive to counter botnet threats.” In May 2018, the agencies submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats” that identified a number of steps and prompted a follow-on report, “A Road Map Toward Resilience Against Botnets,” released in November 2018.
  • United States (U.S.) Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders released a joint statement explaining that “[t]he U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case.”
    • Maximillian Schrems filed a complaint against Facebook with Ireland’s Data Protection Commission (DPC) in 2013, alleging that the company’s transfer of his personal data violated his rights under European Union law because of the mass U.S. surveillance revealed by former National Security Agency (NSA) contractor Edward Snowden. Ultimately, this case resulted in a 2015 Court of Justice of the European Union (CJEU) ruling that invalidated the Safe Harbor agreement under which the personal data of EU residents was transferred to the US by commercial concerns. The EU and US executed a follow-on agreement, the EU-U.S. Privacy Shield, that was designed to address some of the problems the CJEU turned up, and the U.S. passed a law, the “Judicial Redress Act of 2015” (P.L. 114-126), to provide EU citizens a way to exercise their EU rights in US courts via the “Privacy Act of 1974.”
    • However, Schrems continued and soon sought to challenge the legality of the European Commission’s signing off on the Privacy Shield agreement, the adequacy decision issued in 2016, and also the use of standard contractual clauses (SCC) by companies for the transfer of personal data to the US. The CJEU struck down the adequacy decision, throwing into doubt many entities’ transfers out of the EU into the U.S. but upheld SCCs in a way that suggested EU data protection authorities (DPA) may need to review all such agreements to ensure they comply with EU law.
  • The European Commission (EC) announced “an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The EC voiced its concern “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC detailed its “preliminary competition concerns:
    • Following its first phase investigation, the Commission has concerns about the impact of the transaction on the supply of online search and display advertising services (the sale of advertising space on, respectively, the result page of an internet search engine or other internet pages), as well as on the supply of “ad tech” services (analytics and digital tools used to facilitate the programmatic sale and purchase of digital advertising). By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to Fitbit’s one.
    • The data collected via wrist-worn wearable devices appears, at this stage of the Commission’s review of the transaction, to be an important advantage in the online advertising markets. By increasing the data advantage of Google in the personalisation of the ads it serves via its search engine and displays on other internet pages, it would be more difficult for rivals to match Google’s online advertising services. Thus, the transaction would raise barriers to entry and expansion for Google’s competitors for these services, to the ultimate detriment of advertisers and publishers that would face higher prices and have less choice.
    • At this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The Commission will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed.
    • In addition, the Commission will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • In February, after the deal had been announced, the European Data Protection Board (EDPB) made clear its position that Google and Fitbit will need to scrupulously observe the General Data Protection Regulation’s privacy and data security requirements if the body is to sign off on the proposed $2.1 billion acquisition. At that time, Google had not yet formally notified European Union (EU) regulators of the proposed deal. The deal comes at a time when both EU and U.S. regulators are already investigating Google for alleged antitrust and anticompetitive practices, and the EDPB’s opinion could carry weight in this process.
  • The United States’ (U.S.) Department of Homeland Security released a Privacy Impact Assessment for the U.S. Border Patrol (USBP) Digital Forensics Programs that details how it may conduct searches of electronic devices at the U.S. border and ports of entry. DHS explained:
    • As part of USBP’s law enforcement duties, USBP may search and extract information from electronic devices, including: laptop computers; thumb drives; compact disks; digital versatile disks (DVDs); mobile phones; subscriber identity module (SIM) cards; digital cameras; vehicles; and other devices capable of storing electronic information.
    • Last year, a U.S. District Court held that U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement’s (ICE) current practices for searches of smartphones and computers at the U.S. border are unconstitutional and the agencies must have reasonable suspicion before conducting such a search. However, the Court declined the plaintiffs’ request that the information taken off of their devices be expunged by the agencies. This ruling follows a Department of Homeland Security Office of the Inspector General (OIG) report that found CBP “did not always conduct searches of electronic devices at U.S. ports of entry according to its Standard Operating Procedures” and asserted that “[t]hese deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit [CBP’s] ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.”
    • In terms of a legal backdrop, the United States Supreme Court has found that searches and seizures of electronic devices at borders and airports are subject to lesser legal standards than those conducted elsewhere in the U.S. under most circumstances. Generally, the government’s interest in securing the border against the flow of contraband and people not allowed to enter allow considerable leeway to the warrant requirements for many other types of searches. However, in recent years two federal appeals courts (the Fourth and Ninth Circuits) have held that searches of electronic devices require suspicion on the part of government agents while another appeals court (the Eleventh Circuit) held differently. Consequently, there is not a uniform legal standard for these searches.
  • The Inter-American Development Bank (IDB) and the Organization of American States (OAS) released their second assessment of cybersecurity across Latin America and the Caribbean that used the Cybersecurity Capacity Maturity Model for Nations (CMM) developed at University of Oxford’s Global Cyber Security Capacity Centre (GSCC). The IDB and OAS explained:
    • When the first edition of the report “Cybersecurity: Are We Ready in Latin America and the Caribbean?” was released in March 2016, the IDB and the OAS aimed to provide the countries of Latin America and the Caribbean (LAC) not only with a picture of the state of cybersecurity but also guidance about the next steps that should be pursued to strengthen national cybersecurity capacities. This was the first study of its kind, presenting the state of cybersecurity with a comprehensive vision and covering all LAC countries.
    • The great challenges of cybersecurity, like those of the internet itself, are of a global nature. Therefore, it is undeniable that the countries of LAC must continue to foster greater cooperation among themselves, while involving all relevant actors, as well as establishing a mechanism for monitoring, analysis, and impact assessment related to cybersecurity both nationally and regionally. More data in relation to cybersecurity would allow for the introduction of a culture of cyberrisk management that needs to be extended both in the public and private sectors. Countries must be prepared to adapt quickly to the dynamic environment around us and make decisions based on a constantly changing threat landscape. Our member states may manage these risks by understanding the impact on and the likelihood of cyberthreats to their citizens, organizations, and national critical infrastructure. Moving to the next level of maturity will require a comprehensive and sustainable cybersecurity policy, supported by the country’s political agenda, with allocation of financial resources and qualified human capital to carry it out.
    • The COVID-19 pandemic will pass, but events that will require intensive use of digital technologies so that the world can carry on will continue happening. The challenge of protecting our digital space will, therefore, continue to grow. It is the hope of the IDB and the OAS that this edition of the report will help LAC countries to have a better understanding of their current state of cybersecurity capacity and be useful in the design of the policy initiatives that will lead them to increase their level of cyberresilience.
  • The European Data Protection Supervisor (EDPS) issued an opinion on “the European Commission’s action plan for a comprehensive Union policy on preventing money laundering and terrorism financing (C(2020)2800 final), published on 7 May 2020.” The EDPS asserted:
    • While the EDPS acknowledges the importance of the fight against money laundering and terrorism financing as an objective of general interest, we call for the legislation to strike a balance between the interference with the fundamental rights of privacy and personal data protection and the measures that are necessary to effectively achieve the general interest goals on anti-money laundering and countering the financing of terrorism (AML/CFT) (the principle of proportionality).
    • The EDPS recommends that the Commission monitors the effective implementation of the existing AML/CFT framework while ensuring that the GDPR and the data protection framework are respected and complied with. This is particularly relevant for the works on the interconnection of central bank account mechanisms and beneficial ownership registers that should be largely inspired by the principles of data minimisation, accuracy and privacy-by-design and by default.

Further Reading

  • “China already has your data. Trump’s TikTok and WeChat bans can’t stop that.” By Aynne Kokas – The Washington Post. This article persuasively makes the case that even if a ban on TikTok and WeChat were to work, and there are substantive questions as to how a ban would work given how widely the former has been downloaded, the People’s Republic of China (PRC) is almost certainly acquiring massive reams of data on Americans through a variety of apps, platforms, and games. For example, Tencent, owner of WeChat, has a 40% stake in Epic Games, maker of Fortnite, a massively popular multiplayer game (if you have never heard of it, ask one of the children in your family). Moreover, a recent change to PRC law mandates that companies operating in the PRC share their databases for cybersecurity reviews, which may provide an opportunity, apart from hacking and exfiltrating data from United States entities, to access data. In summation, if the Trump Administration is serious about stopping the flow of data from the U.S. to the PRC, these executive orders will do very little.
  • “Big Tech Makes Inroads With the Biden Campaign” by David McCabe and Kenneth P. Vogel – The New York Times. Most likely long before former Vice President Joe Biden clinched the Democratic nomination, advisers volunteered to help plot out his policy positions, a process that intensified this year. Of course, this includes technology policy, and many of those volunteering for the campaign’s Innovation Policy Committee have worked or are working for large technology companies directly or as consultants or lobbyists. This piece details some of these people and their relationships and how the Biden campaign is managing possible conflicts of interest. Naturally, those on the left wing of the Democratic Party calling for tighter antitrust, competition, and privacy regulation are concerned that Biden might be pulled away from these positions despite his public statements arguing that the United States government needs to get tougher with some practices.
  • “A Bible Burning, a Russian News Agency and a Story Too Good to Check Out” By Matthew Rosenberg and Julian E. Barnes – The New York Times. The Russian Federation seems to be using a new tactic with some success for sowing discord in the United States that is the information equivalent of throwing fuel onto a fire. In this case, a Russian outlet manufactured a fake story amplifying an actual event, and it went viral after some prominent Republicans seized on it, in part because it fit their preferred world view of protestors. We will likely see more of this, and it is not confined to fake stories intended to appeal to the right. The same is happening with content meant for the left wing in the United States.
  • “Facebook cracks down on political content disguised as local news” by Sara Fischer – Axios. As part of its continuing effort to crack down on violations of its policies, Facebook will no longer allow groups with a political viewpoint to masquerade as news. The company and outside experts have identified a range of instances where groups propagating a viewpoint, as opposed to reporting, have used a Facebook exemption by pretending to be local news outlets.
  • “QAnon groups have millions of members on Facebook, documents show” By Ari Sen and Brandy Zadrozny – NBC News. It appears as if some Facebook employees are leaking the results of an internal investigation that identified more than 1 million users who are part of QAnon groups. Most likely these employees want the company to take a stronger stance on the conspiracy group QAnon like the company has with COVID-19 lies and misinformation.
  • And, since Senator Kamala Harris (D-CA) was named former Vice President Joe Biden’s (D-DE) vice presidential pick, this article has become even more relevant than when I highlighted it in late July: “New Emails Reveal Warm Relationship Between Kamala Harris And Big Tech” – HuffPost. Obtained via a Freedom of Information request, new emails from Senator Kamala Harris’ (D-CA) tenure as her state’s attorney general suggest she was willing to overlook the role Facebook, Google, and others played and still play in one of her signature issues: revenge porn. This article makes the case that Harris came down hard on a scammer running a revenge porn site but did not press the tech giants with any vigor to take down such material from their platforms. Consequently, the case is made that her selection as the vice presidential candidate would signal a go-easy approach on large companies even though many Democrats have been calling to break up these companies and vigorously enforce antitrust laws. Harris has largely not engaged on tech issues during her tenure in the Senate. To be fair, many of these companies are headquartered in California and pump billions of dollars into the state’s economy annually, putting Harris in a tricky position politically. Of course, such pieces should be taken with a grain of salt since the story may have been suggested or planted by one of Harris’ rivals for the vice presidential nomination or someone looking to settle a score.
  • Unwanted Truths: Inside Trump’s Battles With U.S. Intelligence Agencies” by Robert Draper – The New York Times. A deeply sourced article on the outright antipathy between President Donald Trump and Intelligence Community officials, particularly over the issue of how deeply Russia interfered in the election in 2016. A number of former officials have been fired or forced out because they refused to knuckle under to the White House’s desire to soften or massage conclusions of Russia’s past and current actions to undermine the 2020 election in order to favor Trump.
  • “Huawei says it’s running out of chips for its smartphones because of US sanctions” By Kim Lyons – The Verge and “Huawei: Smartphone chips running out under US sanctions” by Joe McDonald – The Associated Press. United States (U.S.) sanctions have started biting the Chinese technology company Huawei, which announced it will likely run out of processor chips for its smartphones. U.S. sanctions bar any company from selling high technology items like processors to Huawei, and this capability is not independently available in the People’s Republic of China (PRC) at present.
  • “Targeting WeChat, Trump Takes Aim at China’s Bridge to the World” By Paul Mozur and Raymond Zhong – The New York Times. This piece explains WeChat, the app the Trump Administration is trying to ban in the United States (U.S.) without any warning. It is like a combination of Facebook, WhatsApp, news app, and payment platform and is used by more than 1.2 billion people.
  • “This Tool Could Protect Your Photos From Facial Recognition” By Kashmir Hill – The New York Times. Researchers at the University of Chicago have found a method of subtly altering photos of people that appears to foil most facial recognition technologies. However, a number of experts interviewed said it is too late to stop companies like Clearview AI.
  • “I Tried to Live Without the Tech Giants. It Was Impossible.” By Kashmir Hill – The New York Times. This New York Times reporter tried living without the products of large technology companies, which involved some fairly obvious challenges and some that were not so obvious. Of course, it was hard for her to skip Facebook, Instagram, and the like, but cutting out Google and Amazon proved hardest and basically impossible because of the latter’s cloud presence and the former’s web presence. The fact that some of the companies cannot be avoided if one wants to be online likely lends weight to those making the case these companies are anti-competitive.
  • “To Head Off Regulators, Google Makes Certain Words Taboo” by Adrianne Jeffries – The Markup. Apparently, in what is a standard practice at large companies, employees at Google were coached to avoid using certain terms or phrases that antitrust regulators would take notice of such as: “market,” “barriers to entry,” and “network effects.” The Markup obtained a 16 August 2019 document titled “Five Rules of Thumb For Written Communications” that starts by asserting “[w]ords matter…[e]specially in antitrust laws” and goes on to advise Google’s employees:
    • We’re out to help users, not hurt competitors.
    • Our users should always be free to switch, and we don’t lock anyone in.
    • We’ve got lots of competitors, so don’t assume we control or dominate any market.
    • Don’t try and define a market or estimate our market share.
    • Assume every document you generate, including email, will be seen by regulators.
  • “Facebook Fired An Employee Who Collected Evidence Of Right-Wing Pages Getting Preferential Treatment” By Craig Silverman and Ryan Mac – BuzzFeed News. A Facebook engineer was fired after adducing proof in an internal communications system that the social media platform is more willing to reverse false and negative ratings on claims made by conservative outlets and personalities than on those of any other viewpoint. If this is true, it would be the opposite of the narrative spun by the Trump Administration and many Republicans in Congress. Moreover, Facebook’s incentives would seem to align with giving conservatives more preferential treatment: many of these websites advertise on Facebook; the company probably does not want to get crosswise with the Administration; sensational posts and content drive engagement, which increases user numbers and allows for higher ad rates; and it wants to appear fair and impartial.
  • “How Pro-Trump Forces Work the Refs in Silicon Valley” By Ben Smith – The New York Times. This piece traces the nearly four-decade-old effort of Republicans to sway mainstream media and now Silicon Valley to their viewpoint.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo credit: Gerd Altmann on Pixabay

Further Reading, Other Developments, and Coming Events (21 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • The Federal Trade Commission (FTC) will hold its fifth annual PrivacyCon on 21 July and has released its agenda.
  • On 22 July, the Senate Homeland Security & Governmental Affairs Committee will markup a number of bills and nominations, including:
    • The nomination of Derek Kan to the Office of Management and Budget’s Deputy Director
    • The “Federal Emergency Pandemic Response Act” (S.4204)
    • The “Securing Healthcare and Response Equipment Act of 2020” (S.4210)
    • The “National Response Framework Improvement Act of 2020” (S.4153)
    • The “National Infrastructure Simulation and Analysis Center Pandemic Modeling Act of 2020” (S.4157)
    • The “PPE Supply Chain Transparency Act of 2020” (S.4158)
    • The “REAL ID Act Modernization Act” (S.4133)
    • The “Safeguarding American Innovation Act” (S.3997)
    • The “Information Technology Modernization Centers of Excellence Program Act” (S.4200)
    • The “Telework for U.S. Innovation Act” (S.4318)
    • The “GAO Database Modernization Act” (S.____)
    • The “CFO Vision Act of 2020” (S.3287)
    • The “No Tik Tok on Government Devices Act” (S. 3455)
    • The “Cybersecurity Advisory Committee Authorization Act of 2020” (S. 4024)
  • On 23 July, the Senate Commerce, Science, and Transportation Committee’s Communications, Technology, Innovation, and the Internet Subcommittee will hold a hearing on “The State of U.S. Spectrum Policy” with the following witnesses:
    • Mr. Tom Power, Senior Vice President and General Counsel, CTIA
    • Mr. Mark Gibson, Director of Business Development, CommScope
    • Dr. Roslyn Layton, Visiting Researcher, Aalborg University
    • Mr. Michael Calabrese, Director, Wireless Future Project, Open Technology Institute at New America
  • On 27 July, the House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee will hold its sixth hearing on “Online Platforms and Market Power” titled “Examining the Dominance of Amazon, Apple, Facebook, and Google” that will reportedly have the heads of the four companies as witnesses.
  • On 6 August, the Federal Communications Commission (FCC) will hold an open meeting to likely consider the following items:
    • C-band Auction Procedures – The Commission will consider a Public Notice that would adopt procedures for the auction of new flexible-use overlay licenses in the 3.7–3.98 GHz band (Auction 107) for 5G, the Internet of Things, and other advanced wireless services. (AU Docket No. 20-25)
    • Radio Duplication Rules – The Commission will consider a Report and Order that would eliminate the radio duplication rule with regard to AM stations and retain the rule for FM stations. (MB Docket Nos. 19-310, 17-105)
    • Common Antenna Siting Rules – The Commission will consider a Report and Order that would eliminate the common antenna siting rules for FM and TV broadcaster applicants and licensees. (MB Docket Nos. 19-282, 17-105)
    • Telecommunications Relay Service – The Commission will consider a Report and Order to repeal certain TRS rules that are no longer needed in light of changes in technology and voice communications services. (CG Docket No. 03-123)
    • Inmate Calling Services – The Commission will consider a Report and Order on Remand and a Fourth Further Notice of Proposed Rulemaking that would respond to remands by the U.S. Court of Appeals for the District of Columbia Circuit and propose to comprehensively reform rates and charges for the inmate calling services within the Commission’s jurisdiction.  (WC Docket No. 12-375)

Other Developments

  • A United States court has denied a motion by an Israeli technology company to dismiss an American tech giant’s suit that the former infected its messaging system with malware for purposes of espionage and harassment. In October 2019, WhatsApp and Facebook filed suit against the Israeli security firm, NSO Group, alleging that in April 2019, it sent “malware to approximately 1,400 mobile phones and devices…designed to infect the Target Devices for the purpose of conducting surveillance of specific WhatsApp users.” This step was taken, Facebook and WhatsApp claim, in order to circumvent WhatsApp’s end-to-end encryption. The social media companies are suing “for injunctive relief and damages pursuant to the Computer Fraud and Abuse Act, 18 U.S.C. § 1030, and the California Comprehensive Computer Data Access and Fraud Act, California Penal Code § 502, and for breach of contract and trespass to chattels.” In the District Court’s ruling from last week, it rejected the NSO Group’s claims that it deserved sovereign immunity from the lawsuit because it was working for sovereign governments among others and will allow WhatsApp and Facebook to proceed with their suit.
  • The European Data Protection Supervisor (EDPS) published a report “on how EU institutions, bodies and agencies (EUIs) carry out Data Protection Impact Assessments (DPIAs) when processing information that presents a high risk to the rights and freedoms of natural persons” according to the EDPS’ press release. The EDPS detailed its lessons learned, suggestions on how EU institutions could execute better DPIAs, and additional guidance on how DPIAs should be performed in the future.
  • The Court of Justice of the European Union’s (CJEU) Advocate General Saugmandsgaard Øe rendered his opinion in a case concerning the possible liability of YouTube and Uploaded for users posting copyrighted materials without the consent of the owners. In a CJEU summary, Øe found that “as EU law currently stands, online platform operators, such as YouTube and Uploaded, are not directly liable for the illegal uploading of protected works by the users of those platforms.” Øe noted that “Directive 2019/790 on copyright and related rights in the Digital Single Market introduces, for online platform operators such as YouTube, a new liability regime specific to works illegally uploaded by the users of such platforms….which must be transposed by each Member State into its national law by 7 June 2021 at the latest, requires, inter alia, those operators to obtain an authorisation from the rightholders, for example by concluding a licensing agreement, for the works uploaded by users of their platforms.” The Advocate General’s opinions are not binding but serve to inform the CJEU as it decides cases, and it is not uncommon for the CJEU to incorporate the Advocate General’s findings in its decisions.
  • The United Kingdom Parliament’s House of Lords Select Committee on Democracy and Digital Technologies released its report regarding “a pandemic of ‘misinformation’ and ‘disinformation’…[that] [i]f allowed to flourish these counterfeit truths will result in the collapse of public trust, and without trust democracy as we know it will simply decline into irrelevance.” The committee explained the report “addresses a number of concerns, including the urgent case for reform of electoral law and our overwhelming need to become a digitally literate society” including “forty-five recommendations which, taken together, we believe could serve as a useful response to a whole series of concerns.”
  • Belgium’s data protection authority, the Autorité de protection des données, has fined Google €600,000 for violations related to the company’s failure to heed the right to be forgotten as enforced under the General Data Protection Regulation (GDPR).  
  • The National Institute of Standards and Technology (NIST) released two crosswalks undertaken by outside entities comparing the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management to the General Data Protection Regulation (GDPR) and to ISO/IEC 27701, a private-sector privacy standard:
    • The Enterprivacy Consulting Group’s crosswalk for the GDPR-Regulation 2016/679.
  • Senator Josh Hawley (R-MO) sent Twitter CEO Jack Dorsey a second letter regarding the Twitter hack and asserted:
    • [R]eports also indicate that screenshots of Twitter’s internal tools have been circulating within the hacking community. One such screenshot indicates that Twitter employs tools allowing it to append “Search Blacklist,” “Trends Blacklist,” “Bounced,” and “ReadOnly” flags to user accounts. Given your insistence in testimony to Congress that Twitter does not engage in politically biased “shadowbanning” and the public interest in Twitter’s moderation practices, it is notable that Twitter reportedly suspended user accounts sharing screenshots of this panel.
    • Hawley posed a series of questions seeking to root out a bias against conservative viewpoints on the platform, a frequently leveled charge.
  • The Ranking Members of the House Foreign Affairs Committee, House Energy and Commerce Committee, and House Financial Services Committee wrote President Donald Trump to “encourage you to consider utilizing your ability under existing authorities to sanction PRC-linked hackers” for targeting U.S. institutions and “attempting to identify and illicitly obtain valuable intellectual property (IP) and public health data related to vaccines, treatments, and testing from networks and personnel affiliated with COVID-19-related research.” In an unclassified public service announcement in May, the Federal Bureau of Investigation (FBI) and the United States Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) named the People’s Republic of China as a nation waging a cyber campaign against U.S. COVID-19 researchers. The agencies stated they “are issuing this announcement to raise awareness of the threat to COVID-19-related research.” Last week, the United Kingdom’s National Cyber Security Centre (NCSC), Canada’s Communications Security Establishment (CSE), the United States’ National Security Agency (NSA), and CISA issued a joint advisory on a Russian hacking organization whose efforts have “targeted various organisations involved in COVID-19 vaccine development in Canada, the United States and the United Kingdom, highly likely with the intention of stealing information and intellectual property relating to the development and testing of COVID-19 vaccines.”

Further Reading

  • Twitter’s security holes are now the nation’s problem” – Politico; “Twitter hack triggers investigations and lawmaker concerns” – The Washington Post; “Hackers Convinced Twitter Employee to Help Them Hijack Accounts” – Vice’s Motherboard; “Twitter Struggles to Unpack a Hack Within Its Walls” and “Hackers Tell the Story of the Twitter Attack From the Inside” – The New York Times. After last week’s hack that took over a number of high-profile people’s accounts (e.g. Barack Obama, Bill Gates, Elon Musk, etc.), policymakers in Washington are pressing Twitter for explanations and remediation to prevent any such future attacks, especially in the run-up to the 2020 election. Reportedly, a group of hackers looking to push a Bitcoin scam took over the accounts of famous people and then made it appear they were selling Bitcoin. Republicans and Democrats in the United States’ capital are alarmed that such a hack by another nation could throw the country and the world into chaos. One media outlet is reporting the hackers provided proof they bribed a Twitter employee with access to administrative credentials to pull off the hack. Another is reporting that a hacker got into Twitter’s Slack channel where the credentials were posted. In any event, the Federal Bureau of Investigation (FBI) has opened an inquiry. It is unclear whether the hackers accessed people’s DMs, and Senator Ron Wyden (D-OR) noted he secured a commitment from the company in 2018 to use encryption to secure DMs, which has not yet been implemented. The company will have to answer more tough questions at a time when it is in the crosshairs of the Trump Administration for alleged abuses of 47 U.S.C. 230 in stifling conservative viewpoints after the platform fact-checked the President and took down a range of accounts.
And, of course, working in the background is the company’s 2011 settlement with the Federal Trade Commission (FTC), in which the agency claimed Twitter violated the FTC Act by “engag[ing] in a number of practices that, taken together, failed to provide reasonable and appropriate security to: prevent unauthorized access to nonpublic user information and honor the privacy choices exercised by its users in designating certain tweets as nonpublic…[and by] fail[ing] to prevent unauthorized administrative control of the Twitter system.” If the agency investigates and finds similar misconduct, it could seek sizeable monetary damages in federal court.
  • F.T.C.’s Facebook Investigation May Stretch Past Election” – The New York Times. Even though media accounts say the United States Department of Justice will bring an antitrust action against Google possibly as early as this month, it now appears the Federal Trade Commission (FTC) will not be bringing a case against Facebook until next year. It appears the agency is weighing whether it should depose CEO Mark Zuckerberg and COO Sheryl Sandberg and has made additional rounds of document requests, all of which has reportedly slowed down the investigation. Of course, should the investigation stretch into next year, a President Joe Biden could designate a new chair of the agency, which could change the scope and tenor of the investigation.
  • New Emails Reveal Warm Relationship Between Kamala Harris And Big Tech” – HuffPost. Obtained via a Freedom of Information request, new emails from Senator Kamala Harris’ (D-CA) tenure as her state’s attorney general suggest she was willing to overlook the role Facebook, Google, and others played, and still play, in one of her signature issues: revenge porn. The article makes the case that Harris came down hard on a scammer running a revenge porn site but did not press the tech giants with any vigor to take down such material from their platforms. Consequently, the argument goes, if Harris is former Vice President Joe Biden’s vice presidential pick, this would signal a go-easy approach on large companies even though many Democrats have been calling to break up these companies and vigorously enforce antitrust laws. Harris has largely not engaged on tech issues during her tenure in the Senate. To be fair, many of these companies are headquartered in California and pump billions of dollars into the state’s economy annually, putting Harris in a tricky position politically. Of course, such pieces should be taken with a grain of salt since the story may have been suggested or planted by one of Harris’ rivals for the vice presidential nomination or someone looking to settle a score.
  • Inside Big Tech’s Years-Long Manipulation Of American Op-Ed Pages” – Big Technology from Alex Kantrowitz. To no great surprise, large technology companies have adopted a widely used tactic: getting someone sympathetic to “write” an op-ed for a local newspaper to show it is not just big companies pushing for a policy. In this case, it was, and likely still is, the argument against breaking up the tech giants or regulating them more closely. In one case, it is not clear the person who allegedly “wrote” the article even knew about it.
  • Trump campaign pushes Facebook ads bashing TikTok” – CNN. The White House is using a new means to argue TikTok poses a threat to Americans and national security: advertisements on Facebook by the Trump campaign. The ads repeat the same basic message that has been coming out of the White House and that TikTok has been denying: that the app collects sensitive user data and sends it to the People’s Republic of China (PRC). Another wrinkle TikTok pointed to is that Facebook is readying a competitor, Instagram Reels, set to be unveiled as early as this week.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Produtora Midtrack from Pexels

Technology Policy Update (10 April)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 here.

Here are the articles from this edition:

  • “Paper” Hearing on COVID-19 and Big Data
  • DOD Revises Cybersecurity Model For Contractors; Accreditation Body Holds Webinar
  • EC Calls For EU-Wide Approach on Big Data and COVID-19
  • EU’s Data Supervisor Calls For Limits On Using Data In Fighting COVID-19
  • EDPB Fast Tracks Privacy and Processing Guidance For COVID-19
  • Warner Asks OMB For Uniform Guidance On Contractors
  • OCR Announces HIPAA Enforcement Discretion
  • Executive Order Formalizes Review of Foreign Investment in Telecommunications
  • CISA Guides Agencies On Telework Best Practices and Security

U.S. and Other Governments Respond To Privacy and Data Implications of COVID-19

Federal agencies have continued to respond to the changing conditions presented by the increasing number of COVID-19 cases. However, while the U.S. government has not weighed in officially on the legality and appropriateness of using people’s location data from phones to combat the spread of the virus, European authorities have.

Last week, the Federal Trade Commission (FTC) sought to assure businesses and other regulated entities that the agency would look kindly on some activities that might otherwise be anti-competitive if the ultimate goal is to help consumers get by and survive COVID-19. Yet, the agency made clear that it would continue to police unfair and deceptive practices.

FTC Chair Joe Simons issued a statement explaining that “FTC staff in the Bureau of Consumer Protection remain hard at work protecting consumers from deceptive and unfair commercial practices” but “the FTC will remain flexible and reasonable in enforcing compliance requirements that may hinder the provision of important goods and services to consumers.” Simons added “[t]o be clear, by being flexible and reasonable, I am not suggesting that we will tolerate companies deceiving consumers, using tactics that violate well-established consumer protections, or taking unfair advantage of these uniquely challenging times…[and] [a]t all times, good faith efforts undertaken to provide needed goods and services to consumers will be taken into account in making enforcement decisions.” He stated “[t]he FTC is ready to assist businesses that may seek guidance about compliance obligations on consumer protection issues during this unprecedented time.”

On April 3, the FTC and the Federal Communications Commission (FCC) transmitted letters “to three companies providing Voice over Internet Protocol (VoIP) services, warning them that routing and transmitting illegal robocalls, including Coronavirus-related scam calls, is illegal and may lead to federal law enforcement against them” per the agencies’ press release.

The FTC and FCC noted “a separate letter to USTelecom – The Broadband Association (USTelecom), a trade association that represents U.S.-based telecommunications-related businesses…thanks USTelecom for identifying and mitigating fraudulent robocalls that are taking advantage of the Coronavirus national health crisis, and notes that the USTelecom Industry Traceback Group has helped identify various entities that appear to be responsible for originating or transmitting Coronavirus-related scam robocalls.” The agencies stated:

The letter further notifies USTelecom that if, after 48 hours of the release of the letter, any of the specified gateway or originating providers continue to route or transmit the specified originators’ robocalls on its network, the FCC will: 1) authorize other U.S. providers to block all calls coming from that gateway or originating provider; and 2) authorize other U.S. providers to take any other steps as needed to prevent further transmission of unlawful calls originating from the originator.

Last week, FTC staff sent “letters to nine Voice over Internet Protocol (VoIP) service providers and other companies warning them that ‘assisting and facilitating’ illegal telemarketing or robocalls related to the coronavirus or COVID-19 pandemic is against the law” according to the agency’s press release. The FTC argued that “[m]any of these calls prey upon consumers’ fear of the virus to perpetrate scams or sow disinformation.”

Earlier in March, according to the agencies’ press release, the FTC and Food and Drug Administration (FDA) “sent warning letters to seven companies allegedly selling unapproved products that may violate federal law by making deceptive or scientifically unsupported claims about their ability to treat coronavirus (COVID-19) [that]…are the first issued by the agencies alleging unapproved and/or unsupported claims that products can treat or prevent coronavirus: 1) Vital Silver, 2) Quinessence Aromatherapy Ltd., 3) N-ergetics, 4) GuruNanda, LLC, 5) Vivify Holistic Clinic, 6) Herbal Amy LLC, and 7) The Jim Bakker Show.” The agencies alleged “[t]he recipients are companies that advertise products—including teas, essential oils, and colloidal silver—as able to treat or prevent coronavirus…[but] [a]ccording to the FDA, however, there are no approved vaccines, drugs, or investigational products currently available to treat or prevent the virus.”

The FTC also joined the Department of Justice (DOJ) in a statement “to make clear to the public that there are many ways firms, including competitors, can engage in procompetitive collaboration that does not violate the antitrust laws.”

Internationally, agencies with data protection and privacy responsibilities have also moved to remind public and private sector entities of what latitude they have under national law to use personal data to fight COVID-19. The European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski responded to a request from the European Union’s Directorate‑General for Communications Networks, Content and Technology “on the monitoring of the spread of the COVID-19 outbreak,” presumably through the use of location data and metadata to track EU citizens to monitor health and compliance. One of the EDPS’ primary duties is to enforce data protection laws on EU agencies.

Wiewiórowski explained:

  • Firstly, let me underline that data protection rules currently in force in Europe are flexible enough to allow for various measures taken in the fight against pandemics. I am aware of the discussions taking place in some Member States with telecommunications providers with the objective of using such data to track the spread of the COVID-19 outbreak.
  • I share and support your call for an urgent establishment of a coordinated European approach to handle the emergency in the most efficient, effective and compliant way possible.
  • There is a clear need to act at the European level now.

Wiewiórowski stated that “[o]n the basis of the information provided in your letter and in absence of a more specific data model, please find below some elements for your consideration:

  • Data anonymization
    • It is clear from your letter that you intend to use only anonymous data to map movements of people with the objective of ensuring the stability of the internal market and coordinating crisis response. Effectively anonymised data fall outside of the scope of data protection rules.
    • At the same time, effective anonymisation requires more than simply removing obvious identifiers such as phone numbers and IMEI numbers. In your letter, you also mention that data would be aggregated, which can provide an additional safeguard.
    • I understand that the Health Security Committee established by Decision (EU) 1082/2013 you make explicit reference to would be the relevant forum for exchanges with the Member States in this case. The Commission should ensure that the data model would enable it to respond to the needs of the users of these analyses. Moreover, the Commission should clearly define the dataset it wants to obtain and ensure transparency towards the public, to avoid any possible misunderstandings. I would appreciate if you could share with me a copy of the data model, once defined, for information.
  • Data security and data access
    • As mentioned above, to the extent the data obtained by the Commission would be anonymous, it falls outside the scope of data protection rules. Nonetheless, information security obligations under Commission Decision 2017/46 still apply, as do confidentiality obligations under the Staff Regulations for any Commission staff processing the information. Should the Commission rely on third parties to process the information, these third parties have to apply equivalent security measures and be bound by strict confidentiality obligations and prohibitions on further use as well. I would also like to stress the importance of applying adequate measures to ensure the secure transmission of data from the telecom providers. It would also be preferable to limit access to the data to authorised experts in spatial epidemiology, data protection and data science.
  • Data retention
    • I also welcome that the data obtained from mobile operators would be deleted as soon as the current emergency comes to an end. It should be also clear that these special services are deployed because of this specific crisis and are of temporary character. The EDPS often stresses that such developments usually do not contain the possibility to step back when the emergency is gone. I would like to stress that such solution should be still recognised as extraordinary.
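The two safeguards Wiewiórowski describes, stripping direct identifiers such as phone numbers and IMEIs and then aggregating, can be illustrated with a short Python sketch. This is purely illustrative and not drawn from the EDPS letter: the field names, the toy records, and the small-group suppression threshold of 5 are all assumptions added to show why his caveat that anonymisation "requires more than simply removing obvious identifiers" matters.

```python
from collections import Counter

def aggregate_mobility(records, k_threshold=5):
    """Strip direct identifiers and return per-region counts.

    Regions with fewer than k_threshold records are suppressed, since
    an aggregate over very few people can still re-identify individuals,
    which is why removing identifiers alone is not effective anonymisation.
    """
    counts = Counter(r["region"] for r in records)
    return {region: n for region, n in counts.items() if n >= k_threshold}

# Toy data: the phone and IMEI fields never leave this processing step.
records = (
    [{"phone": f"+32-000-{i:04d}", "imei": str(i), "region": "Brussels"}
     for i in range(7)]
    + [{"phone": "+32-000-9999", "imei": "9999", "region": "Ghent"}]
)

print(aggregate_mobility(records))  # Ghent, with a single record, is suppressed
```

Note that even this sketch only reduces risk; whether the output counts as "effectively anonymised" depends on the granularity of the regions and time windows chosen.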

Wiewiórowski added that he wanted “to recall the importance of full transparency to the public on the purpose and procedure of the measures to be enacted…[and] I would also encourage you to keep your Data Protection Officer involved throughout the entire process to provide assurance that the data processed had indeed been effectively anonymised.” Wiewiórowski stressed that “should the Commission feel compelled at any point in the future to change the envisaged modalities for processing, a new consultation of the EDPS would be necessary…[and] [t]he EDPS is ready not only to consult the plans but also to actively involve its resources in the process of development of products and services that may have significant value to the public.”

The Office of the Privacy Commissioner of Canada (OPC) issued guidance “to help organizations subject to federal privacy laws understand their privacy-related obligations during the COVID-19 outbreak” according to the agency’s press release. OPC explained that “[d]uring a public health crisis, privacy laws still apply, but they are not a barrier to appropriate information sharing.” OPC stated that “[t]he new document provides general guidance on applying the Privacy Act, which covers the personal information-handling practices of federal government departments and agencies, and the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s federal private-sector privacy law, in the context of the current outbreak.” OPC added that “[a]ll organizations must continue to operate with lawful authority and exercise good judgment…[and] [g]overnment institutions will need to apply the principles of necessity and proportionality, whether in applying existing measures or in deciding on new actions to address the current crisis.” OPC declared it “will continue to protect the privacy of Canadians, while adopting a flexible and contextual approach in its application of the law.”

On April 1, the Office of the Australian Information Commissioner (OAIC) issued a press release announcing “privacy guidance for agencies and private sector employers to help keep workplaces safe and handle personal information appropriately as part of the COVID-19 response.” This includes:

  • Using and disclosing individuals’ personal information, including sensitive health information, on a ‘need-to-know’ basis
  • Only collecting, using or disclosing the minimum amount of personal information reasonably necessary to prevent or manage COVID-19
  • Advising staff about how their personal information will be handled in responding to any potential or confirmed COVID-19 cases in the workplace
  • Taking reasonable steps to keep personal information secure, including where employees are working remotely.

OAIC asserted it and “state and territory privacy regulators have convened a National COVID-19 Privacy Team to respond to proposals with national implications.”