Further Reading, Other Developments, and Coming Events (19 January 2021)

Further Reading

  • “Hong Kong telecoms provider blocks website for first time, citing security law” — Reuters; “A Hong Kong Website Gets Blocked, Raising Censorship Fears” By Paul Mozur and Aaron Krolik — The New York Times. The Hong Kong Broadband Network (HKBN) blocked access to a website about the 2019 protests against the People’s Republic of China (PRC) (called HKChronicles) under a recently enacted security law critics had warned would lead to exactly this sort of outcome. Allegedly, the Hong Kong police had invoked the National Security Law for the first time, and other telecommunications companies have followed suit.
  • “Biden to counter China tech by urging investment in US: adviser” By Yifan Yu — Nikkei Asia. President-elect Joe Biden’s head of the National Economic Council said at a public event that the Biden Administration would focus less on tariffs and other similar instruments to counter the People’s Republic of China (PRC). Instead, the incoming President would try to foster investment in United States companies and technologies to fend off the PRC’s growing strength in a number of crucial fields. Also, a Biden Administration would work more with traditional U.S. allies to contest policies from Beijing.
  • “Revealed: walkie-talkie app Zello hosted far-right groups who stormed Capitol” By Micah Loewinger and Hampton Stall — The Guardian. Some of the rioters and insurrectionists who attacked the United States Capitol on 6 January were using another, lesser known communications app, Zello, to coordinate their actions. The app has since taken down a number of right-wing and extremist groups that had flourished for months if not years on the platform. It remains to be seen how smaller platforms will be scrutinized under a Biden Presidency. Zello has reportedly been aware that these groups were using its platform and opted not to police their conduct.
  • “They Used to Post Selfies. Now They’re Trying to Reverse the Election.” By Stuart A. Thompson and Charlie Warzel — The New York Times. The three people who amassed considerable extremist followings seem each to be part believer and part opportunist. A fascinating series of profiles about the three.
  • “Telegram tries, and fails, to remove extremist content” By Mark Scott — Politico. Platforms other than Facebook and Twitter are struggling to moderate right-wing and extremist content that violates their policies and terms of service.

Other Developments

  • The Biden-Harris transition team announced that a statutorily established science advisor will now be a member of the Cabinet and named its nominees for this and other positions. The Office of Science and Technology Policy (OSTP) was created by executive order in the Ford Administration and then codified by Congress. However, the OSTP Director has not been a member of the Cabinet alongside the Senate-confirmed Secretaries and others. President-elect Joe Biden has decided to elevate the OSTP Director to the Cabinet, likely to signal the importance of science and technology in his Administration. The current OSTP has exercised unusual influence in the Trump Administration under the leadership of OSTP Associate Director Michael Kratsios and shaped policy in a number of realms, like artificial intelligence and national security.
    • In the press release, the transition team explained:
      • Dr. Eric Lander will be nominated as Director of the OSTP and serve as the Presidential Science Advisor. The president-elect is elevating the role of science within the White House, including by designating the Presidential Science Advisor as a member of the Cabinet for the first time in history. One of the country’s leading scientists, Dr. Lander was a principal leader of the Human Genome Project and has been a pioneer in the field of genomic medicine. He is the founding director of the Broad Institute of MIT and Harvard, one of the nation’s leading research institutes. During the Obama-Biden administration, he served as external Co-Chair of the President’s Council of Advisors on Science and Technology. Dr. Lander will be the first life scientist to serve as Presidential Science Advisor.
      • Dr. Alondra Nelson will serve as OSTP Deputy Director for Science and Society. A distinguished scholar of science, technology, social inequality, and race, Dr. Nelson is president of the Social Science Research Council, an independent, nonprofit organization linking social science research to practice and policy. She is also a professor at the Institute for Advanced Study, one of the nation’s most distinguished research institutes, located in Princeton, NJ.
      • Dr. Frances H. Arnold and Dr. Maria Zuber will serve as the external Co-Chairs of the President’s Council of Advisors on Science and Technology (PCAST). An expert in protein engineering, Dr. Arnold is the first American woman to win the Nobel Prize in Chemistry. Dr. Zuber, an expert in geophysics and planetary science, is the first woman to lead a NASA spacecraft mission and has chaired the National Science Board. They are the first women to serve as co-chairs of PCAST.
      • Dr. Francis Collins will continue serving in his role as Director of the National Institutes of Health.
      • Kei Koizumi will serve as OSTP Chief of Staff and is one of the nation’s leading experts on the federal science budget.
      • Narda Jones, who will serve as OSTP Legislative Affairs Director, was Senior Technology Policy Advisor and Counsel for the Democratic staff of the U.S. Senate Committee on Commerce, Science and Transportation.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) issued a report on supply chain security by a public-private advisory body, which represents one of the U.S. government’s lines of effort to better secure technology and electronics that emanate from the People’s Republic of China (PRC). CISA’s National Risk Management Center co-chairs the Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force along with the Information Technology Sector Coordinating Council and the Communications Sector Coordinating Council. The ICT SCRM Task Force published its Year 2 Report that “builds upon” its Interim Report and asserted:
    • Over the past year, the Task Force has expanded upon its first-year progress to advance meaningful partnership around supply chain risk management. Specifically, the Task Force:
      • Developed reference material to support overcoming legal obstacles to information sharing
      • Updated the Threat Evaluation Report, which evaluates threats to suppliers, with additional scenarios and mitigation measures for the corresponding threat scenarios
      • Produced a report and case studies providing in-depth descriptions of control categories and information regarding when and how to use a Qualified List to manage supply chain risks
      • Developed a template for SCRM compliance assessments and internal evaluations of alignment to industry standards
      • Analyzed the current and potential impacts from the COVID-19 pandemic, and developed a system map to visualize ICT supply chain routes and identify chokepoints
      • Surveyed supply chain related programs and initiatives that provide opportunities for potential Task Force engagement
    • Congress established an entity to address and help police supply chain risk at the end of 2018 in the “Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure Technology Act” (SECURE Act) (P.L. 115-390). The Federal Acquisition Security Council (FASC) has a number of responsibilities, including:
      • developing an information sharing process for agencies to circulate decisions throughout the federal government made to exclude entities determined to be IT supply chain risks
      • establishing a process by which entities determined to be IT supply chain risks may be excluded from procurement government-wide (exclusion orders) or suspect IT must be removed from government systems (removal orders)
      • creating an exception process under which IT from an entity subject to a removal or exclusion order may be used if warranted by national interest or national security
      • issuing recommendations for agencies on excluding entities and IT from the IT supply chain and “consent for a contractor to subcontract” and mitigation steps entities would need to take in order for the Council to rescind a removal or exclusion order
      • In September 2020, the FASC released an interim regulation that took effect upon being published that “implement[s] the requirements of the laws that govern the operation of the FASC, the sharing of supply chain risk information, and the exercise of its authorities to recommend issuance of removal and exclusion orders to address supply chain security risks…”
  • The Australian government has released its bill to remake how platforms like Facebook, Google, and others may use news media content, including provisions for payment. The “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020” “establishes a mandatory code of conduct to help support the sustainability of the Australian news media sector by addressing bargaining power imbalances between digital platforms and Australian news businesses.” The agency charged with developing the legislation, the Australian Competition and Consumer Commission (ACCC), has tussled with Google in particular over what this law would look like, with the technology giant threatening to withdraw from Australia altogether. The ACCC had determined in its July 2019 Digital Platforms Inquiry:
    • that there is a bargaining power imbalance between digital platforms and news media businesses so that news media businesses are not able to negotiate for a share of the revenue generated by the digital platforms and to which the news content created by the news media businesses contributes. Government intervention is necessary because of the public benefit provided by the production and dissemination of news, and the importance of a strong independent media in a well-functioning democracy.
    • In an Explanatory Memorandum, it is explained:
      • The Bill establishes a mandatory code of conduct to address bargaining power imbalances between digital platform services and Australian news businesses…by setting out six main elements:
        • bargaining–which require the responsible digital platform corporations and registered news business corporations that have indicated an intention to bargain, to do so in good faith;
        • compulsory arbitration–where parties cannot come to a negotiated agreement about remuneration relating to the making available of covered news content on designated digital platform services, an arbitral panel will select between two final offers made by the bargaining parties;
        • general requirements–which, among other things, require responsible digital platform corporations to provide registered news business corporations with advance notification of planned changes to an algorithm or internal practice that will have a significant effect on covered news content;
        • non-differentiation requirements–responsible digital platform corporations must not differentiate between the news businesses participating in the Code, or between participants and non-participants, because of matters that arise in relation to their participation or non-participation in the Code;
        • contracting out–the Bill recognises that a digital platform corporation may reach a commercial bargain with a news business outside the Code about remuneration or other matters. It provides that parties who notify the ACCC of such agreements would not need to comply with the general requirements, bargaining and compulsory arbitration rules (as set out in the agreement); and
        • standard offers–digital platform corporations may make standard offers to news businesses, which are intended to reduce the time and cost associated with negotiations, particularly for smaller news businesses. If the parties notify the ACCC of an agreed standard offer, those parties do not need to comply with bargaining and compulsory arbitration (as set out in the agreement);
  • The Federal Trade Commission (FTC) has reached a settlement with a mobile advertising company over “allegations that it failed to provide in-game rewards users were promised for completing advertising offers.” The FTC unanimously agreed to the proposed settlement with Tapjoy, Inc. that bars the company “from misleading users about the rewards they can earn and must monitor its third-party advertiser partners to ensure they do what is necessary to enable Tapjoy to deliver promised rewards to consumers.” The FTC drafted a 20-year settlement that will obligate Tapjoy, Inc. to refrain from certain practices that violate the FTC Act; in this case, that includes not making false claims about the rewards people can get if they take or do not take some action in an online game. Tapjoy, Inc. will also need to submit compliance reports, keep records, and make materials available to the FTC upon demand. Any failure to meet the terms of the settlement could prompt the FTC to seek redress in federal court, including civil penalties of more than $43,000 per violation.
    • In the complaint, the FTC outlined Tapjoy, Inc.’s illegal conduct:
      • Tapjoy operates an advertising platform within mobile gaming applications (“apps”). On the platform, Tapjoy promotes offers of in-app rewards (e.g., virtual currency) to consumers who complete an action, such as taking a survey or otherwise engaging with third-party advertising. Often, these consumers must divulge personal information or spend money. In many instances, Tapjoy never issues the promised reward to consumers who complete an action as instructed, or only issues the currency after a substantial delay. Consumers who attempt to contact Tapjoy to complain about missing rewards find it difficult to do so, and many consumers who complete an action as instructed and are able to submit a complaint nevertheless do not receive the promised reward.  Tapjoy has received hundreds of thousands of complaints concerning its failure to issue promised rewards to consumers. Tapjoy nevertheless has withheld rewards from consumers who have completed all required actions.
    • In its press release, the FTC highlighted the salient terms of the settlement:
      • As part of the proposed settlement, Tapjoy is prohibited from misrepresenting the rewards it offers consumers and the terms under which they are offered. In addition, the company must clearly and conspicuously display the terms under which consumers can receive such rewards and must specify that the third-party advertisers it works with determine if a reward should be issued. Tapjoy also will be required to monitor its advertisers to ensure they are following through on promised rewards, investigate complaints from consumers who say they did not receive their rewards, and discipline advertisers who deceive consumers.
    • FTC Commissioners Rohit Chopra and Rebecca Kelly Slaughter issued a joint statement, and in their summary section, they asserted:
      • The explosive growth of mobile gaming has led to mounting concerns about harmful practices, including unlawful surveillance, dark patterns, and facilitation of fraud.
      • Tapjoy’s failure to properly police its mobile gaming advertising platform cheated developers and gamers out of promised compensation and rewards.
      • The Commission must closely scrutinize today’s gaming gatekeepers, including app stores and advertising middlemen, to prevent harm to developers and gamers.
    • On the last point, Chopra and Kelly Slaughter argued:
      • We should all be concerned that gatekeepers can harm developers and squelch innovation. The clearest example is rent extraction: Apple and Google charge mobile app developers on their platforms up to 30 percent of sales, and even bar developers from trying to avoid this tax through offering alternative payment systems. While larger gaming companies are pursuing legal action against these practices, developers and small businesses risk severe retaliation for speaking up, including outright suspension from app stores – an effective death sentence.
      • This market structure also has cascading effects on gamers and consumers. Under heavy taxation by Apple and Google, developers have been forced to adopt alternative monetization models that rely on surveillance, manipulation, and other harmful practices.
  • The United Kingdom’s (UK) High Court ruled against the use of general warrants for online surveillance by the UK’s security agencies (MI5, MI6, and the Government Communications Headquarters (GCHQ)). Privacy International (PI), a British advocacy organization, had brought the suit after Edward Snowden revealed the scope of the United States National Security Agency’s (NSA) surveillance activities, including bulk collection of information, a significant portion of which required hacking. PI sued in a special tribunal formed to resolve claims against British security agencies, where the government asserted general warrants would suffice for purposes of mass hacking. PI disagreed and argued this was counter to 250 years of established law in the UK requiring that warrants be based on reasonable suspicion, specific in what is being sought, and proportionate. The High Court agreed with PI.
    • In its statement after the ruling, PI asserted:
      • Because general warrants are by definition not targeted (and could therefore apply to hundreds, thousands or even millions of people) they violate individuals’ right not to have their property searched without lawful authority, and are therefore illegal.
      • The adaptation of these 250-year-old principles to modern government hacking and property interference is of great significance. The Court signals that fundamental constitutional principles still need to be applied in the context of surveillance and that the government cannot circumvent traditional protections afforded by the common law.
  • In Indiana, the attorney general is calling on the governor “to adopt a safe harbor rule I proposed that would incentivize companies to take strong data protection measures, which will reduce the scale and frequency of cyberattacks in Indiana.” Attorney General Curtis Hill urged Governor Eric J. Holcomb to allow a change in the state’s data security regulations to be made effective.
    • The proposed rule provides:
      • Procedures adopted under IC 24-4.9-3-3.5(c) are presumed reasonable if the procedures comply with this section, including one (1) of the following applicable standards:
        • (1) A covered entity implements and maintains a cybersecurity program that complies with the National Institute of Standards and Technology (NIST) cybersecurity framework and follows the most recent version of one (1) of the following standards:
          • (A) NIST Special Publication 800-171.
          • (B) NIST SP 800-53.
          • (C) The Federal Risk and Authorization Management Program (FedRAMP) security assessment framework.
          • (D) International Organization for Standardization/International Electrotechnical Commission 27000 family – information security management systems.
        • (2) A covered entity is regulated by the federal or state government and complies with one (1) of the following standards as it applies to the covered entity:
          • (A) The federal USA Patriot Act (P.L. 107-56).
          • (B) Executive Order 13224.
          • (C) The federal Driver’s Privacy Protection Act (18 U.S.C. 2721 et seq.).
          • (D) The federal Fair Credit Reporting Act (15 U.S.C. 1681 et seq.).
          • (E) The federal Health Insurance Portability and Accountability Act (HIPAA) (P.L. 104-191).
        • (3) A covered entity complies with the current version of the payment card industry data security standard in place at the time of the breach of security of data, as published by the Payment Card Industry Security Standard Council.
      • The regulations further provide that if a data base owner can show “its data security plan was reasonably designed, implemented, and executed to prevent the breach of security of data” then it “will not be subject to a civil action from the office of the attorney general arising from the breach of security of data.”
  • The Tech Transparency Project (TTP) is claiming that Apple “has removed apps in China at the government’s request” the majority of which “involve activities like illegal gambling and porn.” However, TTP is asserting that its analysis “suggests Apple is proactively blocking scores of other apps that are politically sensitive for Beijing.”

Coming Events

  • On 19 January, the Senate Intelligence Committee will hold a hearing on the nomination of Avril Haines to be the Director of National Intelligence.
  • The Senate Homeland Security and Governmental Affairs Committee will hold a hearing on the nomination of Alejandro N. Mayorkas to be Secretary of Homeland Security on 19 January.
  • On 19 January, the Senate Armed Services Committee will hold a hearing on the nomination of former General Lloyd Austin III to be Secretary of Defense.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (11 January 2021)

Further Reading

  • “Why the Russian hack is so significant, and why it’s close to a worst-case scenario” By Kevin Collier — NBC News. This article quotes experts who paint a very ugly picture for the United States (U.S.) in trying to recover from the Russian Federation’s hack. Firstly, the Russians are very good at what they do and likely built multiple backdoors in systems they would want to ensure they have access to after using SolarWinds’ update system to gain initial entry. Secondly, broadly speaking, at present, U.S. agencies and companies have two very unpalatable options: spend months hunting through their systems for any such backdoors or other issues or rebuild their systems from scratch. The ramifications of this hack will continue to be felt well into the Biden Administration.
  • “The storming of Capitol Hill was organized on social media.” By Sheera Frenkel — The New York Times. As the repercussions of the riot and apparently attempted insurrection continue to be felt, one aspect that has received attention and will continue to receive attention is the role social media platforms played. Platforms used predominantly by right wing and extremist groups like Gab and Parler were used extensively to plan and execute the attack. This fact and the ongoing content moderation issues at larger platforms will surely inform the Section 230 and privacy legislation debates expected to occur this year and into the future.
  • “Comcast data cap blasted by lawmakers as it expands into 12 more states” By Jon Brodkin — Ars Technica. Comcast has extended its 1.2TB cap on household broadband usage to other states, and lawmakers in Massachusetts have written the company, claiming this will hurt low-income families working and schooling children at home. Comcast claims this affects only a small class of subscribers, so-called “super users.” In retrospect, such a move always seemed likely, as data is now the most valuable commodity.
  • “Finnish lawmakers’ emails hacked in suspected espionage incident” By Shannon Vavra — CyberScoop. Another legislature of a democratic nation has been hacked, and given the recent hacks of Norway’s Parliament and Germany’s Bundestag by the Russians, it may well turn out they were behind this hack as well, which, according to Finland’s National Bureau of Investigation, sought to “obtain information either to benefit a foreign state or to harm Finland.”
  • “Facebook Forced Its Employees To Stop Discussing Trump’s Coup Attempt” By Ryan Mac — BuzzFeed News. Reportedly, Facebook shut down internal dialogue about the misgivings voiced by employees about its response to the lies in President Donald Trump’s video and the platform’s role in creating the conditions that caused Trump supporters to storm the United States (U.S.) Capitol. Internally and externally, Facebook equivocated on whether it would go so far as Twitter in taking down Trump’s video and content.
  • “WhatsApp gives users an ultimatum: Share data with Facebook or stop using the app” By Dan Goodin — Ars Technica. Very likely in response to coming changes to the Apple iOS that will allow for greater control of privacy, Facebook is giving WhatsApp users a choice: accept our new terms of service that allows personal data to be shared with and used by Facebook or have your account permanently deleted.
  • “Insecure wheels: Police turn to car data to destroy suspects’ alibis” By Olivia Solon — NBC News. Like any other computerized, connected device, cars are increasingly a source that law enforcement (and likely intelligence agencies) use to investigate crimes. If you sync your phone via USB or Bluetooth, most modern cars will access your phone and store all sorts of personal data that can later be retrieved. But other systems in cars can tell investigators where the car was, how heavy it was (i.e., how many occupants), when doors opened, and so on. And there are no specific federal or state laws in the United States mandating protection of these data.
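The Comcast data-cap item above turns on a bit of arithmetic: how quickly does a household working and schooling from home approach 1.2TB? A rough sketch in Python illustrates the concern; the per-hour figures below are hypothetical rules of thumb I am assuming for illustration, not numbers from Comcast or the article.

```python
# Back-of-the-envelope estimate of monthly household data usage
# against a 1.2 TB (1,200 GB) cap. Per-hour figures are assumed
# rules of thumb, not measured values.

CAP_GB = 1200  # the cap expressed in gigabytes

daily_usage_gb = {
    "video calls (2 people x 4 h at ~1.5 GB/h)": 2 * 4 * 1.5,
    "HD streaming (4 h at ~3 GB/h)": 4 * 3.0,
    "cloud backup, updates, browsing (flat estimate)": 2.0,
}

daily_gb = sum(daily_usage_gb.values())   # 26 GB/day under these assumptions
monthly_gb = daily_gb * 30                # ~780 GB over a 30-day month

print(f"Estimated monthly usage: {monthly_gb:.0f} GB "
      f"({monthly_gb / CAP_GB:.0%} of the cap)")
```

Under these assumptions a single remote-work household already sits around two-thirds of the cap; swap the HD streaming for 4K (often estimated near 7 GB/h) and the cap is easily exceeded, which is the substance of the lawmakers’ complaint.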

Other Developments

  • The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Office of the Director of National Intelligence (ODNI), and the National Security Agency (NSA) issued a joint statement, finally naming the Russian Federation as the likely perpetrator of the massive SolarWinds hack. However, the agencies qualified the language, claiming:
    • This work indicates that an Advanced Persistent Threat (APT) actor, likely Russian in origin, is responsible for most or all of the recently discovered, ongoing cyber compromises of both government and non-governmental networks. At this time, we believe this was, and continues to be, an intelligence gathering effort.
      • Why the language is not more definitive is not clear. Perhaps the agencies are merely exercising caution about who is blamed for the attack. Perhaps the agencies do not want to anger a White House and President averse to reports of Russian hacking for fear it will be associated with the hacking during the 2016 election that aided the Trump Campaign.
      • However, it is noteworthy the agencies are stating their belief the hacking was related to “intelligence gathering,” suggesting the purpose of the incursions was not to destroy data or launch an attack. Presumably, such an assertion is meant to allay concerns that the Russian Federation intends to attack the United States (U.S.) as it did in Ukraine and Georgia in the last decade.
    • The Cyber Unified Coordination Group (UCG) convened per Presidential Policy Directive (PPD) 41 (which technically is the FBI, CISA, and the ODNI but not the NSA) asserted its belief that
      • of the approximately 18,000 affected public and private sector customers of SolarWinds’ Orion products, a much smaller number has been compromised by follow-on activity on their systems. We have so far identified fewer than 10 U.S. government agencies that fall into this category, and are working to identify the nongovernment entities who also may be impacted.
      • These findings are, of course, preliminary, and there may be incentives for the agencies to be less than forthcoming about what they know of the scope and impact of the hacking.
  • Federal Communications Commission (FCC) Chair Ajit Pai has said he will not proceed with a rulemaking to curtail 47 USC 230 (Section 230) in response to a petition the National Telecommunications and Information Administration (NTIA) filed at the direction of President Donald Trump. Pai remarked “I do not intend to move forward with the notice of proposed rule-making at the FCC” because “in part, because given the results of the election, there’s simply not sufficient time to complete the administrative steps necessary in order to resolve the rule-making.” Pai cautioned Congress and the Biden Administration “to study and deliberate on [reforming Section 230] very seriously,” especially “the immunity provision.”  
    • In October, Pai had announced the FCC would proceed with a notice and comment rulemaking based on the NTIA’s petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic.
  • A House subcommittee released its most recent assessment of federal information technology (IT) acquisition and cybersecurity. The House Oversight Committee’s Government Operations Subcommittee released its 11th biannual scorecard under the “Federal Information Technology Acquisition Reform Act” (FITARA). The subcommittee stressed this “marks the first time in the Scorecard’s history that all 24 agencies included in the law have received A’s in a single category” and noted it is “the first time that a category will be retired.” Even though this assessment is labeled the FITARA Scorecard, it is actually a compilation of different metrics borne of other pieces of legislation and executive branch programs.
    • Additionally, 19 of the 24 agencies reviewed received A’s on the Data Center Optimization Initiative (DCOI).
    • However, four agencies received F’s on agency Chief Information Officer (CIO) authority enhancements, measures aiming to fulfill one of the main purposes of FITARA: empowering agency CIOs as a means of better controlling and managing IT acquisition and usage. It has been an ongoing struggle to get agency compliance with the letter and spirit of federal law and directives to do just this.
    • Five agencies got F’s and two agencies got D’s for failing to hit the schedule for transitioning off of the “the expiring Networx, Washington Interagency Telecommunications System (WITS) 3, and Regional Local Service Agreement (LSA) contracts” to the General Services Administration’s $50 billion Enterprise Infrastructure Solutions (EIS). The GSA explained this program in a recent letter:
      • After March 31, 2020, GSA will disconnect agencies, in phases, to meet the September 30, 2022 milestone for 100% completion of transition. The first phase will include agencies that have been “non-responsive” to transition outreach from GSA. Future phases will be based on each agency’s status at that time and the individual circumstances impacting that agency’s transition progress, such as protests or pending contract modifications. The Agency Transition Sponsor will receive a notification before any services are disconnected, and there will be an opportunity for appeal.
  • A bipartisan quartet of United States Senators urged the Trump Administration in a letter to omit language in a trade agreement with the United Kingdom (UK) that mirrors the liability protection in 47 U.S.C. 230 (Section 230). Senators Rob Portman (R-OH), Mark R. Warner (D-VA), Richard Blumenthal (D-CT), and Charles E. Grassley (R-IA) argued to U.S. Trade Representative Ambassador Robert Lighthizer that a “safe harbor” like the one provided to technology companies for hosting or moderating third party content is outdated, not needed in a free trade agreement, contrary to the will of both the Congress and UK Parliament, and likely to be changed legislatively in the near future. However, left unsaid in the letter, is the fact that Democrats and Republicans generally do not agree on how precisely to change Section 230. There may be consensus that change is needed, but what that change looks like is still a matter much in dispute.
    • Stakeholders in Congress were upset that the Trump Administration included language modeled on Section 230 in the United States-Mexico-Canada Agreement (USMCA), the modification of the North American Free Trade Agreement (NAFTA). For example, House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and then Ranking Member Greg Walden (R-OR) wrote Lighthizer, calling it “inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing” in Congress.
  • The Trump White House issued a new United States (U.S.) government strategy for advanced computing to replace the 2019 strategy. The “PIONEERING THE FUTURE ADVANCED COMPUTING ECOSYSTEM: A STRATEGIC PLAN” “envisions a future advanced computing ecosystem that provides the foundation for continuing American leadership in science and engineering, economic competitiveness, and national security.” The Administration asserted:
    • It develops a whole-of-nation approach based on input from government, academia, nonprofits, and industry sectors, and builds on the objectives and recommendations of the 2019 National Strategic Computing Initiative Update: Pioneering the Future of Computing. This strategic plan also identifies agency roles and responsibilities and describes essential operational and coordination structures necessary to support and implement its objectives. The plan outlines the following strategic objectives:
      • Utilize the future advanced computing ecosystem as a strategic resource spanning government, academia, nonprofits, and industry.
      • Establish an innovative, trusted, verified, usable, and sustainable software and data ecosystem.
      • Support foundational, applied, and translational research and development to drive the future of advanced computing and its applications.
      • Expand the diverse, capable, and flexible workforce that is critically needed to build and sustain the advanced computing ecosystem.
  • A federal court threw out a significant portion of a suit Apple brought against a security company, Corellium, which offers technology allowing security researchers to virtualize iOS in order to undertake research. The United States District Court for the Southern District of Florida summarized the case:
    • On August 15, 2019, Apple filed this lawsuit alleging that Corellium infringed Apple’s copyrights in iOS and circumvented its security measures in violation of the federal Digital Millennium Copyright Act (“DMCA”). Corellium denies that it has violated the DMCA or Apple’s copyrights. Corellium further argues that even if it used Apple’s copyrighted work, such use constitutes “fair use” and, therefore, is legally permissible.
    • The court found “that Corellium’s use of iOS constitutes fair use” but did not reach the same conclusion on the DMCA claim, thus allowing Apple to proceed with that portion of the suit.
  • The Trump Administration issued a plan on how cloud computing could be marshalled to help federally funded artificial intelligence (AI) research and development (R&D). A select committee made four key recommendations that “should accelerate the use of cloud resources for AI R&D: (1) launch and support pilot projects to identify and explore the advantages and challenges associated with the use of commercial clouds in conducting federally funded AI research; (2) improve education and training opportunities to help researchers better leverage cloud resources for AI R&D; (3) catalog best practices in identity management and single-sign-on strategies to enable more effective use of the variety of commercial cloud resources for AI R&D; and (4) establish and publish best practices for the seamless use of different cloud platforms for AI R&D. Each recommendation, if adopted, should accelerate the use of cloud resources for AI R&D.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting. The agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (4 January 2021)

Further Reading

  • “Microsoft Says Russian Hackers Viewed Some of Its Source Code” By Nicole Perlroth — The New York Times. The Sluzhba vneshney razvedki Rossiyskoy Federatsii’s (SVR) hack keeps growing and growing, with Microsoft admitting its source code was viewed through an employee account. It may be that authorized Microsoft resellers were one of the vectors by which the SVR accessed SolarWinds, FireEye, and ultimately a number of United States (U.S.) government agencies. Expect more revelations to come about the scope and breadth of entities and systems the SVR compromised.
  • “In 2020, we reached peak Internet. Here’s what worked — and what flopped.” By Geoffrey Fowler — The Washington Post. The newspaper’s tech columnist reviews the technology used during the pandemic and what is likely to stay with us when life returns to some semblance of normal.
  • “Facebook Says It’s Standing Up Against Apple For Small Businesses. Some Of Its Employees Don’t Believe It.” By Craig Silverman and Ryan Mac — BuzzFeed News. Again, two of the best-sourced journalists covering Facebook have exposed employee dissent within the social media and advertising giant, this time over the company’s advertising blitz positioning it as the champion of the small businesses that allegedly stand to be hurt when Apple rolls out iOS 14, which will allow users to block the type of tracking across apps and the internet Facebook thrives on. The company’s PR campaign stands in contrast to anecdotal stories about errors that harmed and impeded small companies using Facebook to advertise and sell products and services to customers.
  • “SolarWinds hack spotlights a thorny legal problem: Who to blame for espionage?” By Tim Starks — cyberscoop. This piece previews the possible and likely inevitable litigation to follow from the SolarWinds hack, including possible securities action on the basis of fishy dumps of stock by executives, breach of contract, and negligence for failing to patch and address vulnerabilities in a timely fashion. Federal and state regulators will probably get on the field, too. But this will likely take years to play out; Home Depot, for example, did not settle claims arising from its 2014 breach with state attorneys general until November 2020.
  • “The Tech Policies the Trump Administration Leaves Behind” By Aaron Boyd — Nextgov. A look back at the good, the bad, and the ugly of the Trump Administration’s technology policies, some of which will live on in the Biden Administration.

Other Developments

  • In response to the SolarWinds hack, the Federal Bureau of Investigation (FBI), the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), and the Office of the Director of National Intelligence (ODNI) issued a joint statement indicating that the process established pursuant to Presidential Policy Directive (PPD) 41, an Obama Administration policy, has been activated and a Cyber Unified Coordination Group (UCG) has been formed “to coordinate a whole-of-government response to this significant cyber incident.” The agencies explained “[t]he UCG is intended to unify the individual efforts of these agencies as they focus on their separate responsibilities.”
    • In PPD-41 it is explained that a UCG “shall serve as the primary method for coordinating between and among Federal agencies in response to a significant cyber incident as well as for integrating private sector partners into incident response efforts, as appropriate.” Moreover, “[t]he Cyber UCG is intended to result in unity of effort and not to alter agency authorities or leadership, oversight, or command responsibilities.”
  • Following the completion of its “in-depth” investigation, the European Commission (EC) cleared Google’s acquisition of Fitbit with certain conditions, removing a significant hurdle for the American multinational in buying the wearable fitness tracker company. In its press release, the EC explained that after its investigation, “the Commission had concerns that the transaction, as initially notified, would have harmed competition in several markets.” To address and allay concerns, Google bound itself for ten years to a set of commitments that can be unilaterally extended by the EC and will be enforced, in part, by the appointment of a trustee to oversee compliance.
    • The EC was particularly concerned about:
      • Advertising: By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to that of Fitbit. By increasing the already vast amount of data that Google could use for the personalisation of ads, it would be more difficult for rivals to match Google’s services in the markets for online search advertising, online display advertising, and the entire “ad tech” ecosystem. The transaction would therefore raise barriers to entry and expansion for Google’s competitors for these services to the detriment of advertisers, who would ultimately face higher prices and have less choice.
      • Access to Web Application Programming Interface (‘API’) in the market for digital healthcare: A number of players in this market currently access health and fitness data provided by Fitbit through a Web API, in order to provide services to Fitbit users and obtain their data in return. The Commission found that following the transaction, Google might restrict competitors’ access to the Fitbit Web API. Such a strategy would come especially at the detriment of start-ups in the nascent European digital healthcare space.
      • Wrist-worn wearable devices: The Commission is concerned that following the transaction, Google could put competing manufacturers of wrist-worn wearable devices at a disadvantage by degrading their interoperability with Android smartphones.
    • As noted, Google made a number of commitments to address competition concerns:
      • Ads Commitment:
        • Google will not use for Google Ads the health and wellness data collected from wrist-worn wearable devices and other Fitbit devices of users in the EEA, including search advertising, display advertising, and advertising intermediation products. This refers also to data collected via sensors (including GPS) as well as manually inserted data.
        • Google will maintain a technical separation of the relevant Fitbit’s user data. The data will be stored in a “data silo” which will be separate from any other Google data that is used for advertising.
        • Google will ensure that European Economic Area (‘EEA’) users will have an effective choice to grant or deny the use of health and wellness data stored in their Google Account or Fitbit Account by other Google services (such as Google Search, Google Maps, Google Assistant, and YouTube).
      • Web API Access Commitment:
        • Google will maintain access to users’ health and fitness data to software applications through the Fitbit Web API, without charging for access and subject to user consent.
      • Android APIs Commitment:
        • Google will continue to license for free to Android original equipment manufacturers (OEMs) those public APIs covering all current core functionalities that wrist-worn devices need to interoperate with an Android smartphone. Such core functionalities include but are not limited to, connecting via Bluetooth to an Android smartphone, accessing the smartphone’s camera or its GPS. To ensure that this commitment is future-proof, any improvements of those functionalities and relevant updates are also covered.
        • It is not possible for Google to circumvent the Android API commitment by duplicating the core interoperability APIs outside the Android Open Source Project (AOSP). This is because, according to the commitments, Google has to keep the functionalities afforded by the core interoperability APIs, including any improvements related to the functionalities, in open-source code in the future. Any improvements to the functionalities of these core interoperability APIs (including if ever they were made available to Fitbit via a private API) also need to be developed in AOSP and offered in open-source code to Fitbit’s competitors.
        • To ensure that wearable device OEMs have also access to future functionalities, Google will grant these OEMs access to all Android APIs that it will make available to Android smartphone app developers including those APIs that are part of Google Mobile Services (GMS), a collection of proprietary Google apps that is not a part of the Android Open Source Project.
        • Google also will not circumvent the Android API commitment by degrading users’ experience with third party wrist-worn devices through the display of warnings, error messages or permission requests in a discriminatory way or by imposing on wrist-worn device OEMs discriminatory conditions on the access of their companion app to the Google Play Store.
  • The United States (U.S.) Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) has proposed a major rewrite of the regulations governing medical privacy in the U.S. As the U.S. lacks a unified privacy regime, the proposed changes would affect only those entities in the medical sector subject to the regime, which is admittedly many such entities. Nevertheless, it is almost certain the Biden Administration will pause this rulemaking and quite possibly withdraw it should it conflict with the new White House’s policy goals.
    • HHS issued a notice of proposed rulemaking “to modify the Standards for the Privacy of Individually Identifiable Health Information (Privacy Rule) under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act).”
      • HHS continued:
        • The Privacy Rule is one of several rules, collectively known as the HIPAA Rules, that protect the privacy and security of individuals’ medical records and other protected health information (PHI), i.e., individually identifiable health information maintained or transmitted by or on behalf of HIPAA covered entities (i.e., health care providers who conduct covered health care transactions electronically, health plans, and health care clearinghouses).
        • The proposals in this NPRM support the Department’s Regulatory Sprint to Coordinated Care (Regulatory Sprint), described in detail below. Specifically, the proposals in this NPRM would amend provisions of the Privacy Rule that could present barriers to coordinated care and case management –or impose other regulatory burdens without sufficiently compensating for, or offsetting, such burdens through privacy protections. These regulatory barriers may impede the transformation of the health care system from a system that pays for procedures and services to a system of value-based health care that pays for quality care.
    • In a press release, OCR asserted:
      • The proposed changes to the HIPAA Privacy Rule include strengthening individuals’ rights to access their own health information, including electronic information; improving information sharing for care coordination and case management for individuals; facilitating greater family and caregiver involvement in the care of individuals experiencing emergencies or health crises; enhancing flexibilities for disclosures in emergency or threatening circumstances, such as the Opioid and COVID-19 public health emergencies; and reducing administrative burdens on HIPAA covered health care providers and health plans, while continuing to protect individuals’ health information privacy interests.
  • The Federal Trade Commission (FTC) has used its powers to compel selected regulated entities to provide requested information, asking that “nine social media and video streaming companies…provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.” The FTC is using its Section 6(b) authority to compel the information from Amazon.com, Inc., ByteDance Ltd., which operates the short video service TikTok, Discord Inc., Facebook, Inc., Reddit, Inc., Snap Inc., Twitter, Inc., WhatsApp Inc., and YouTube LLC. Failure to respond can result in the FTC fining a non-compliant entity.
    • The FTC claimed in its press release it “is seeking information specifically related to:
      • how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
      • how they determine which ads and other content are shown to consumers;
      • whether they apply algorithms or data analytics to personal information;
      • how they measure, promote, and research user engagement; and
      • how their practices affect children and teens.
    • The FTC explained in its sample order:
      • The Commission is seeking information concerning the privacy policies, procedures, and practices of Social Media and Video Streaming Service providers, including the method and manner in which they collect, use, store, and disclose Personal Information about consumers and their devices. The Special Report will assist the Commission in conducting a study of such policies, practices, and procedures.
  • The United States (U.S.) Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) supplemented its Emergency Directive 21-01 to federal civilian agencies in response to the Sluzhba vneshney razvedki Rossiyskoy Federatsii’s (SVR) hack via SolarWinds. In an 18 December update, CISA explained:
    • This section provides additional guidance on the implementation of CISA Emergency Directive (ED) 21-01, to include an update on affected versions, guidance for agencies using third-party service providers, and additional clarity on required actions.
    • In a 30 December update, CISA stated:
      • Specifically, all federal agencies operating versions of the SolarWinds Orion platform other than those identified as “affected versions” below are required to use at least SolarWinds Orion Platform version 2020.2.1HF2. The National Security Agency (NSA) has examined this version and verified that it eliminates the previously identified malicious code. Given the number and nature of disclosed and undisclosed vulnerabilities in SolarWinds Orion, all instances that remain connected to federal networks must be updated to 2020.2.1 HF2 by COB December 31, 2020. CISA will follow up with additional supplemental guidance, to include further clarifications and hardening requirements.
  • Australia’s Attorney-General’s Department published an unclassified version of the four volumes of the “Report of the Comprehensive Review of the Legal Framework of the National Intelligence Community,” an “examination of the legislative framework underpinning the National Intelligence Community (NIC)…the first and largest since the Hope Royal Commissions considered the Australian Intelligence Community (AIC) in the 1970s and 1980s.” Ultimately, the authors of the report concluded:
    • We do not consider the introduction of a common legislative framework, in the form of a single Act governing all or some NIC agencies, to be a practical, pragmatic or proportionate reform. It would be unlikely that the intended benefits of streamlining and simplifying NIC legislation could be achieved due to the diversity of NIC agency functions—from intelligence to law enforcement, regulatory and policy—and the need to maintain differences in powers, immunities and authorising frameworks. The Review estimates that reform of this scale would cost over $200 million and take up to 10 years to complete. This would be an impractical and disproportionate undertaking for no substantial gain. In our view, the significant costs and risks of moving to a single, consolidated Act clearly outweigh the limited potential benefits.
    • While not recommending a common legislative framework for the entire NIC, some areas of NIC legislation would benefit from simplification and modernisation. We recommend the repeal of the TIA Act, Surveillance Devices Act 2004 (SD Act) and parts of the Australian Security Intelligence Organisation Act 1979 (ASIO Act), and their replacement with a single new Act governing the use of electronic surveillance powers—telecommunications interception, covert access to stored communications, computers and telecommunications data, and the use of optical, listening and tracking devices—under Commonwealth law.
  • The National Institute of Standards and Technology (NIST) released additional materials to supplement a major rewrite of a foundational security guidance document. NIST explained “[n]ew supplemental materials for NIST Special Publication (SP) 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations, are available for download to support the December 10, 2020 errata release of SP 800-53 and SP 800-53B, Control Baselines for Information Systems and Organizations.” These supplemental materials include:
    • A comparison of the NIST SP 800-53 Revision 5 controls and control enhancements to Revision 4. The spreadsheet describes the changes to each control and control enhancement, provides a brief summary of the changes, and includes an assessment of the significance of the changes.  Note that this comparison was authored by The MITRE Corporation for the Director of National Intelligence (DNI) and is being shared with permission by DNI.
    • Mapping of the Appendix J Privacy Controls (Revision 4) to Revision 5. The spreadsheet supports organizations using the privacy controls in Appendix J of SP 800-53 Revision 4 that are transitioning to the integrated control catalog in Revision 5.
    • Mappings between NIST SP 800-53 and other frameworks and standards. The mappings provide organizations a general indication of SP 800-53 control coverage with respect to other frameworks and standards. When leveraging the mappings, it is important to consider the intended scope of each publication and how each publication is used; organizations should not assume equivalency based solely on the mapping tables because mappings are not always one-to-one and there is a degree of subjectivity in the mapping analysis.
  • Via a final rule, the Department of Defense (DOD) codified “the National Industrial Security Program Operating Manual (NISPOM) in regulation…[that] establishes requirements for the protection of classified information disclosed to or developed by contractors, licensees, grantees, or certificate holders (hereinafter referred to as contractors) to prevent unauthorized disclosure.” The DOD stated “[i]n addition to adding the NISPOM to the Code of Federal Regulations (CFR), this rule incorporates the requirements of Security Executive Agent Directive (SEAD) 3, “Reporting Requirements for Personnel with Access to Classified Information or Who Hold a Sensitive Position.” The DOD stated “SEAD 3 requires reporting by all contractor cleared personnel who have been granted eligibility for access to classified information.”
    • The DOD added “[t]his NISPOM rule provides for a single nation-wide implementation plan which will, with this rule, include SEAD 3 reporting by all contractor cleared personnel to report specific activities that may adversely impact their continued national security eligibility, such as reporting of foreign travel and foreign contacts.”
    • The DOD explained “NISP Cognizant Security Agencies (CSAs) shall conduct an analysis of such reported activities to determine whether they pose a potential threat to national security and take appropriate action.”
    • The DOD added that “the rule also implements the provisions of Section 842 of Public Law 115-232, which removes the requirement for a covered National Technology and Industrial Base (NTIB) entity operating under a special security agreement pursuant to the NISP to obtain a national interest determination as a condition for access to proscribed information.”
  • An advisory committee housed at the United States (U.S.) Department of Homeland Security (DHS) is calling for the White House to quickly “operationalize intelligence in a classified space with senior executives and cyber experts from most critical entities in the energy, financial services, and communications sectors working directly with intelligence analysts and other government staff.” In their report, the President’s National Infrastructure Advisory Council (NIAC) proposed the creation of a Critical Infrastructure Command Center (CICC) to “provid[e] real-time collaboration between government and industry…[and] take direct action and provide tactical solutions to mitigate, remediate, and deter threats.” NIAC urged the President to “direct relevant federal agencies to support the private sector in executing the concept, including identifying the required government staff…[and] work with Congress to ensure the appropriate authorities are established to allow the CICC to fully realize its operational functionality.” NIAC recommended “near-term actions to implement the CICC concept:
    • 1. The President should direct the relevant federal agencies to support the private sector in rapidly standing up the CICC concept with the energy, financial services, and communications sectors:
      • a. Within 90 days the private sector will identify the executives who will lead execution of the CICC concept and establish governing criteria (including membership, staffing and rotation, and other logistics).
      • b. Within 120 days the CICC sector executives will identify and assign the necessary CICC staff from the private sector.
      • c. Within 90 days an appropriate venue to house the operational component will be identified and the necessary agreements put in place.
    • 2. The President should direct the Intelligence Community and other relevant government agencies to identify and co-locate the required government staff counterparts to enable the direct coordination required by the CICC. This staff should be pulled from the IC, SSAs, and law enforcement.
    • 3. The President, working with Congress, should establish the appropriate authorities and mission for federal agencies to directly share intelligence with critical infrastructure companies, along with any other authorities required for the CICC concept to be fully successful (identified in Appendix A).
    • 4. Once the CICC concept is fully operational (within 180 days), the responsible executives should deliver a report to the NSC and the NIAC demonstrating how the distinct capabilities of the CICC have been achieved and the impact of the capabilities to date. The report should identify remaining gaps in resources, direction, or authorities.

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting. The agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.


Image by opsa from Pixabay

EC Finally Unveils Digital Services Act and Digital Markets Act

The EU releases its proposals to remake digital markets.

The European Commission (EC) has released its draft proposals to remake how the European Union (EU) regulates digital markets and digital services, the latest in the bloc’s attempts to rein in what it sees as harms and abuses to people and competition in Europe and the world. At the earliest, these proposals would take effect in 2022; they are sure to be vigorously opposed by large United States (U.S.) multinationals like Google and Facebook and will also likely face more restrained pushback from the U.S. government.

The Digital Markets Act would allow the EU to designate providers of certain core platform services as gatekeepers, either on the basis of quantitative metrics or on a case-by-case basis. Once a company is deemed a gatekeeper, it would be subject to much greater regulation by the EU, and violations of the new act could result in fines of up to 10% of worldwide revenue.

In its press release, the EC asserted:

European values are at the heart of both proposals. The new rules will better protect consumers and their fundamental rights online, and will lead to fairer and more open digital markets for everyone. A modern rulebook across the single market will foster innovation, growth and competitiveness and will provide users with new, better and reliable online services. It will also support the scaling up of smaller platforms, small and medium-sized enterprises, and start-ups, providing them with easy access to customers across the whole single market while lowering compliance costs. Furthermore, the new rules will prohibit unfair conditions imposed by online platforms that have become or are expected to become gatekeepers to the single market. The two proposals are at the core of the Commission’s ambition to make this Europe’s Digital Decade.

In the Digital Markets Act, the EC explained the problem with large platforms dominating certain digital markets. The EC discussed the harm to people and medium and small businesses as some large companies control certain markets and use their size and dominance to extract unfair prices for inferior services and products. The EC listed the core platform services that might be regulated:

  • online intermediation services (incl. for example marketplaces, app stores and online intermediation services in other sectors like mobility, transport or energy)
  • online search engines,
  • social networking
  • video sharing platform services,
  • number-independent interpersonal electronic communication services,
  • operating systems,
  • cloud services and
  • advertising services, including advertising networks, advertising exchanges and any other advertising intermediation services, where these advertising services are being related to one or more of the other core platform services mentioned above.

Clearly, a number of major American firms and their services could easily be considered “core platform services,” including Amazon, Apple, Google, Facebook, Instagram, YouTube, WhatsApp, Microsoft, and others. Whether they would be deemed gatekeepers hinges on whether they meet the quantitative metrics the EU will put in place. The designation would be a rebuttable presumption: a firm that meets the standards may present evidence to the contrary and argue it is not a gatekeeper.

The EC detailed the quantitative metrics in Article 3. A company may qualify if it meets all three of the following criteria subject to further metrics:

A provider of core platform services shall be designated as gatekeeper if:

(a) it has a significant impact on the internal market;

(b) it operates a core platform service which serves as an important gateway for business users to reach end users; and

(c) it enjoys an entrenched and durable position in its operations or it is foreseeable that it will enjoy such a position in the near future.

The other metrics include annual turnover of at least €6.5 billion in each of the last three financial years, or a €65 billion market capitalization in the last financial year, plus the provision of core platform services in at least three member states, to show a “significant impact on the internal market.” For the second category listed above, a company would need to provide a core platform service to more than 45 million monthly active end users in the EU and more than 10,000 yearly active business users in the EU. And, for the last category, passing the 45 million user and 10,000 business user thresholds in each of the last three years would suffice. The act reads:

A provider of core platform services shall be presumed to satisfy:

(a) the requirement in paragraph 1 point (a) where the undertaking to which it belongs achieves an annual EEA turnover equal to or above EUR 6.5 billion in the last three financial years, or where the average market capitalisation or the equivalent fair market value of the undertaking to which it belongs amounted to at least EUR 65 billion in the last financial year, and it provides a core platform service in at least three Member States;

(b) the requirement in paragraph 1 point (b) where it provides a core platform service that has more than 45 million monthly active end users established or located in the Union and more than 10,000 yearly active business users established in the Union in the last financial year; for the purpose of the first subparagraph, monthly active end users shall refer to the average number of monthly active end users throughout the largest part of the last financial year;

(c) the requirement in paragraph 1 point (c) where the thresholds in point (b) were met in each of the last three financial years.
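The Article 3 presumptions quoted above can be read as a simple three-part test. The sketch below encodes them in Python purely for illustration; the dataclass fields, constants, and function names are my own labels, not terms from the proposal, and the real assessment would involve legal judgment well beyond these numbers.

```python
# Illustrative sketch of the draft Digital Markets Act's Article 3
# quantitative presumptions. Names and structure are assumptions made
# for clarity; this is not a legal test.
from dataclasses import dataclass
from typing import List

TURNOVER_EUR = 6.5e9          # annual EEA turnover, each of last 3 years
MARKET_CAP_EUR = 65e9         # market capitalisation, last financial year
MIN_MEMBER_STATES = 3
MONTHLY_END_USERS = 45_000_000
YEARLY_BUSINESS_USERS = 10_000

@dataclass
class Provider:
    annual_eea_turnover_eur: List[float]  # last three financial years
    market_cap_eur: float                 # last financial year
    member_states_served: int
    monthly_end_users: List[int]          # last three financial years
    yearly_business_users: List[int]      # last three financial years

def significant_impact(p: Provider) -> bool:
    # point (a): turnover in each of the last three years OR market cap,
    # plus a core platform service in at least three member states
    financial = (all(t >= TURNOVER_EUR for t in p.annual_eea_turnover_eur)
                 or p.market_cap_eur >= MARKET_CAP_EUR)
    return financial and p.member_states_served >= MIN_MEMBER_STATES

def important_gateway(p: Provider) -> bool:
    # point (b): user thresholds met in the last financial year
    return (p.monthly_end_users[-1] > MONTHLY_END_USERS
            and p.yearly_business_users[-1] > YEARLY_BUSINESS_USERS)

def entrenched_position(p: Provider) -> bool:
    # point (c): the point (b) thresholds met in each of the last three years
    return (all(u > MONTHLY_END_USERS for u in p.monthly_end_users)
            and all(b > YEARLY_BUSINESS_USERS for b in p.yearly_business_users))

def presumed_gatekeeper(p: Provider) -> bool:
    # Meeting all three only creates a rebuttable presumption; the
    # provider may still argue it is not a gatekeeper.
    return (significant_impact(p) and important_gateway(p)
            and entrenched_position(p))
```

Note that points (b) and (c) use the same thresholds over different windows, which is why the sketch keeps three years of user figures.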

The EU would also be able to label a provider of core platform services a gatekeeper on a case-by-case basis:

Provision should also be made for the assessment of the gatekeeper role of providers of core platform services which do not satisfy all of the quantitative thresholds, in light of the overall objective requirements that they have a significant impact on the internal market, act as an important gateway for business users to reach end users and benefit from a durable and entrenched position in their operations or it is foreseeable that it will do so in the near future.

It bears noting that a company could be found to be a gatekeeper if it is merely foreseeable that it will satisfy these criteria soon. This flexibility could allow the EU to track companies and flag them as gatekeepers before they in fact achieve the sort of market dominance this regulation is intended to stop.

Among the relevant excerpts from the “Reasons for and objectives of the proposal” section of the act are:

  • Large platforms have emerged benefitting from characteristics of the sector such as strong network effects, often embedded in their own platform ecosystems, and these platforms represent key structuring elements of today’s digital economy, intermediating the majority of transactions between end users and business users. Many of these undertakings are also comprehensively tracking and profiling end users. A few large platforms increasingly act as gateways or gatekeepers between business users and end users and enjoy an entrenched and durable position, often as a result of the creation of conglomerate ecosystems around their core platform services, which reinforces existing entry barriers.
  • As such, these gatekeepers have a major impact on, have substantial control over the access to, and are entrenched in digital markets, leading to significant dependencies of many business users on these gatekeepers, which leads, in certain cases, to unfair behaviour vis-à-vis these business users. It also leads to negative effects on the contestability of the core platform services concerned. Regulatory initiatives by Member States cannot fully address these effects; without action at EU level, they could lead to a fragmentation of the Internal Market.
  • Unfair practices and lack of contestability lead to inefficient outcomes in the digital sector in terms of higher prices, lower quality, as well as less choice and innovation to the detriment of European consumers. Addressing these problems is of utmost importance in view of the size of the digital economy (estimated at between 4.5% to 15.5% of global GDP in 2019 with a growing trend) and the important role of online platforms in digital markets with its societal and economic implications.
  • Weak contestability and unfair practices in the digital sector are more frequent and pronounced in certain digital services than others. This is the case in particular for widespread and commonly used digital services and infrastructures that mostly directly intermediate between business users and end users.
  • The enforcement experience under EU competition rules, numerous expert reports and studies and the results of the OPC show that there are a number of digital services that have the following features: (i) highly concentrated multi-sided platform services, where usually one or very few large digital platforms set the commercial conditions with considerable autonomy; (ii) a few large digital platforms act as gateways for business users to reach their customers and vice-versa; and (iii) gatekeeper power of these large digital platforms is often misused by means of unfair behaviour vis-à-vis economically dependent business users and customers.
  • The proposal is therefore further limited to a number of ‘core platform services’ where the identified problems are most evident and prominent and where the presence of a limited number of large online platforms that serve as gateways for business users and end users has led or is likely to lead to weak contestability of these services and of the markets in which these intervene. These core platform services include: (i) online intermediation services (incl. for example marketplaces, app stores and online intermediation services in other sectors like mobility, transport or energy) (ii) online search engines, (iii) social networking (iv) video sharing platform services, (v) number-independent interpersonal electronic communication services, (vi) operating systems, (vii) cloud services and (viii) advertising services, including advertising networks, advertising exchanges and any other advertising intermediation services, where these advertising services are being related to one or more of the other core platform services mentioned above.
  • The fact that a digital service qualifies as a core platform service does not mean that issues of contestability and unfair practices arise in relation to every provider of these core platform services. Rather, these concerns appear to be particularly strong when the core platform service is operated by a gatekeeper. Providers of core platform services can be deemed to be gatekeepers if they: (i) have a significant impact on the internal market, (ii) operate one or more important gateways to customers and (iii) enjoy or are expected to enjoy an entrenched and durable position in their operations.
  • Such gatekeeper status can be determined either with reference to clearly circumscribed and appropriate quantitative metrics, which can serve as rebuttable presumptions to determine the status of specific providers as a gatekeeper, or based on a case-by-case qualitative assessment by means of a market investigation.

The Digital Services Act would add new regulation on top of Directive 2000/31/EC (aka the e-Commerce Directive) by “[b]uilding on the key principles set out in the e-Commerce Directive, which remain valid today.” This new scheme “seeks to ensure the best conditions for the provision of innovative digital services in the internal market, to contribute to online safety and the protection of fundamental rights, and to set a robust and durable governance structure for the effective supervision of providers of intermediary services.”

The Digital Services Act is focused mostly on the information and misinformation pervading the online world and the harms they wreak on EU citizens. However, the EC is also seeking to balance fundamental EU rights in more tightly regulating online platforms. Like the Digital Markets Act, this regulation would focus on the largest online content, product, and services providers, which, as a practical matter, would likely be Facebook, Amazon, Google, Spotify, and a handful of other companies. Once 10% or more of the EU’s population uses a company’s offerings, the requirements of the Digital Services Act would be triggered.
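The 10% trigger is a moving target: the proposal pegs the “very large online platform” threshold (currently stated as roughly 45 million recipients) to the Union’s population and has the Commission adjust the figure as the population changes. A minimal sketch of that adjustment, with the function name and rounding rule as my own assumptions since the proposal does not specify a formula:

```python
# Hedged sketch: recompute the "very large online platform" threshold
# as 10% of the EU population. The rounding rule is an assumption;
# the proposal only says the Commission will adjust the number.
def vlop_threshold(eu_population: int, share: float = 0.10) -> int:
    return round(eu_population * share)
```

With an EU population of about 450 million, this yields the 45 million recipient figure the proposal cites.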

Additionally, the Digital Services Act unites two online issues not usually considered together in the United States (U.S.): harmful online content and harmful online products. Even though it seems logical to consider these offerings in tandem, the U.S. clearly bifurcates how these two issues are regulated, to the extent they are regulated at all, at the federal and state levels.

The Digital Services Act “will introduce a series of new, harmonised EU-wide obligations for digital services, carefully graduated on the basis of those services’ size and impact, such as:

  • Rules for the removal of illegal goods, services or content online;
  • Safeguards for users whose content has been erroneously deleted by platforms;
  • New obligations for very large platforms to take risk-based action to prevent abuse of their systems;
  • Wide-ranging transparency measures, including on online advertising and on the algorithms used to recommend content to users;
  • New powers to scrutinize how platforms work, including by facilitating access by researchers to key platform data;
  • New rules on traceability of business users in online market places, to help track down sellers of illegal goods or services;
  • An innovative cooperation process among public authorities to ensure effective enforcement across the single market.”

The EC explained:

new and innovative business models and services, such as online social networks and marketplaces, have allowed business users and consumers to impart and access information and engage in transactions in novel ways. A majority of Union citizens now uses those services on a daily basis. However, the digital transformation and increased use of those services has also resulted in new risks and challenges, both for individual users and for society as a whole.

The EC spelled out what the Digital Services Act would do:

This Regulation lays down harmonised rules on the provision of intermediary services in the internal market. In particular, it establishes:

(a) a framework for the conditional exemption from liability of providers of intermediary services;

(b) rules on specific due diligence obligations tailored to certain specific categories of providers of intermediary services;

(c) rules on the implementation and enforcement of this Regulation, including as regards the cooperation of and coordination between the competent authorities.

The EC explained the purpose of the act:

  • this proposal seeks to ensure the best conditions for the provision of innovative digital services in the internal market, to contribute to online safety and the protection of fundamental rights, and to set a robust and durable governance structure for the effective supervision of providers of intermediary services.
  • The proposal defines clear responsibilities and accountability for providers of intermediary services, and in particular online platforms, such as social media and marketplaces. By setting out clear due-diligence obligations for certain intermediary services, including notice-and-action procedures for illegal content and the possibility to challenge the platforms’ content moderation decisions, the proposal seeks to improve users’ safety online across the entire Union and improve the protection of their fundamental rights. Furthermore, an obligation for certain online platforms to receive, store and partially verify and publish information on traders using their services will ensure a safer and more transparent online environment for consumers.
  • Recognising the particular impact of very large online platforms on our economy and society, the proposal sets a higher standard of transparency and accountability on how the providers of such platforms moderate content, on advertising and on algorithmic processes. It sets obligations to assess the risks their systems pose to develop appropriate risk management tools to protect the integrity of their services against the use of manipulative techniques.

The EC summarized how the act will work:

  • The operational threshold for service providers in scope of these obligations includes those online platforms with a significant reach in the Union, currently estimated to be amounting to more than 45 million recipients of the service. This threshold is proportionate to the risks brought by the reach of the platforms in the Union; where the Union’s population changes by a certain percentage, the Commission will adjust the number of recipients considered for the threshold, so that it consistently corresponds to 10 % of the Union’s population. Additionally, the Digital Services Act will set out a co-regulatory backstop, including building on existing voluntary initiatives.
  • This proposal should constitute the appropriate basis for the development of robust technologies to prevent the reappearance of illegal information, accompanied with the highest safeguards to avoid that lawful content is taken down erroneously; such tools could be developed on the basis of voluntary agreements between all parties concerned and should be encouraged by Member States; it is in the interest of all parties involved in the provision of intermediary services to adopt and implement such procedures; the provisions of this Regulation relating to liability should not preclude the development and effective operation, by the different interested parties, of technical systems of protection and identification and of automated recognition made possible by digital technology within the limits laid down by Regulation 2016/679.
  • Union citizens and others are exposed to ever-increasing risks and harms online – from the spread of illegal content and activities, to limitations to express themselves and other societal harms. The envisaged policy measures in this legislative proposal will substantially improve this situation by providing a modern, future-proof governance framework, effectively safeguarding the rights and legitimate interests of all parties involved, most of all Union citizens. The proposal introduces important safeguards to allow citizens to freely express themselves, while enhancing user agency in the online environment, as well as the exercise of other fundamental rights such as the right to an effective remedy, non-discrimination, rights of the child as well as the protection of personal data and privacy online.
  • The proposed Regulation will mitigate risks of erroneous or unjustified blocking speech, address the chilling effects on speech, stimulate the freedom to receive information and hold opinions, as well as reinforce users’ redress possibilities. Specific groups or persons may be vulnerable or disadvantaged in their use of online services because of their gender, race or ethnic origin, religion or belief, disability, age or sexual orientation. They can be disproportionately affected by restrictions and removal measures following from (unconscious or conscious) biases potentially embedded in the notification systems by users and third parties, as well as replicated in automated content moderation tools used by platforms. The proposal will mitigate discriminatory risks, particularly for those groups or persons and will contribute to the protection of the rights of the child and the right to human dignity online. The proposal will only require removal of illegal content and will impose mandatory safeguards when users’ information is removed, including the provision of explanatory information to the user, complaint mechanisms supported by the service providers as well as external out-of-court dispute resolution mechanism. Furthermore, it will ensure EU citizens are also protected when using services provided by providers not established in the Union but active on the internal market, since those providers are covered too.
  • With regard to service providers’ freedom to conduct a business, the costs incurred on businesses are offset by reducing fragmentation across the internal market. The proposal introduces safeguards to alleviate the burden on service providers, including measures against repeated unjustified notices and prior vetting of trusted flaggers by public authorities. Furthermore, certain obligations are targeted to very large online platforms, where the most serious risks often occur and which have the capacity to absorb the additional burden.
  • The proposed legislation will preserve the prohibition of general monitoring obligations of the e-Commerce Directive, which in itself is crucial to the required fair balance of fundamental rights in the online world. The new Regulation prohibits general monitoring obligations, as they could disproportionately limit users’ freedom of expression and freedom to receive information, and could burden service providers excessively and thus unduly interfere with their freedom to conduct a business. The prohibition also limits incentives for online surveillance and has positive implications for the protection of personal data and privacy.


Image by Sambeet D from Pixabay

Further Reading, Other Developments, and Coming Events (15 December)

Further Reading

  • DHS, State and NIH join list of federal agencies — now five — hacked in major Russian cyberespionage campaign” By Ellen Nakashima and Craig Timberg — The Washington Post; “Scope of Russian Hack Becomes Clear: Multiple U.S. Agencies Were Hit” By David E. Sanger, Nicole Perlroth and Eric Schmitt — The New York Times; The list of United States (U.S.) government agencies breached by Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, has grown. Now the Departments of Homeland Security, Defense, and State and the National Institutes of Health report they have been breached. It is unclear if Fortune 500 companies in the U.S. and elsewhere and U.S. nuclear laboratories were also breached in this huge, sophisticated espionage exploit. It appears the Russians were selective and careful, and these hackers may have only accessed information held on U.S. government systems. And yet, the Trump Administration continues to issue equivocal statements neither denying nor acknowledging the hack, leaving the public to depend on quotes from anonymous officials. Perhaps admitting the Russians hacked U.S. government systems would throw light on Russian interference four years ago, and the President is loath to even contemplate that attack. In contrast, President Donald Trump has made all sorts of wild, untrue claims about vote totals being hacked despite no evidence supporting his assertions. It appears that the declaration of mission accomplished by some agencies of the Trump Administration over no Russian hacking of or interference with the 2020 election will be overshadowed by what may prove the most damaging hack of U.S. government systems ever.
  • Revealed: China suspected of spying on Americans via Caribbean phone networks” By Stephanie Kirchgaessner — The Guardian. This story depends on one source, so take it for what it is worth, but allegedly the People’s Republic of China (PRC) is using vulnerabilities in mobile communications networks to hack into the phones of Americans travelling in the Caribbean. If so, the PRC may be exploiting the same Signaling System 7 (SS7) weaknesses an Israeli firm, Circles, is using to sell access to phones, at least according to a report published recently by the University of Toronto’s Citizen Lab.
  • The Cartel Project | Revealed: The Israelis Making Millions Selling Cyberweapons to Latin America” By Amitai Ziv — Haaretz. Speaking of Israeli companies, the NSO Group among others are actively selling offensive cyber and surveillance capabilities to Central American nations often through practices that may be corrupt.
  • U.S. Schools Are Buying Phone-Hacking Tech That the FBI Uses to Investigate Terrorists” By Tom McKay and Dhruv Mehrotra — Gizmodo. Tools from Israeli firm Cellebrite and its competitors are being used in school systems across the United States (U.S.) to access communications on students’ phones. U.S. Supreme Court case law gives schools very wide discretion for searches, and the Fourth Amendment is largely null and void on school grounds.
  • ‘It’s Hard to Prove’: Why Antitrust Suits Against Facebook Face Hurdles” By Mike Isaac and Cecilia Kang — The New York Times. The development of antitrust law over the last few decades may have laid an uphill path for the Federal Trade Commission (FTC) and state attorneys general in securing a breakup of Facebook, something that has not happened on a large scale since the historic splintering of AT&T in the early 1980s.
  • Exclusive: Israeli Surveillance Companies Are Siphoning Masses Of Location Data From Smartphone Apps” By Thomas Brewster — Forbes. Turns out Israeli firms are using a feature (or what many would call a bug) in the online advertising system that allows those looking to buy ads to get close to real-time location data from application developers looking to sell advertising space. By putting out a shingle as a Demand Side Platform, it is possible to access reams of location data, and two Israeli companies are doing just that and offering the service of locating and tracking people using this quirk in online advertising. And this is not just companies in Israel. There is a company under scrutiny in the United States (U.S.) that may have used these practices and then provided location data to federal agencies.

Other Developments

  • The Government Accountability Office (GAO) evaluated the United States (U.S.) Department of Defense’s (DOD) electromagnetic spectrum (EMS) operations and found gaps in the DOD’s efforts to maintain EMS superiority over the Russian Federation and the People’s Republic of China (PRC). The GAO concluded:
    • Studies have shown that adversaries of the United States, such as China and Russia, are developing capabilities and strategies that could affect DOD superiority in the information environment, including the EMS. DOD has also reported that loss of EMS superiority could result in the department losing control of the battlefield, as its Electromagnetic Spectrum Operations (EMSO) supports many warfighting functions across all domains. DOD recognizes the importance of EMSO to military operations in actual conflicts and in operations short of open conflict that involve the broad information environment. However, gaps we identified in DOD’s ability to develop and implement EMS-related strategies have impeded progress in meeting DOD’s goals. By addressing gaps we found in five areas—(1) the processes and procedures to integrate EMSO throughout the department, (2) governance reforms to correct diffuse organization, (3) responsibility by an official with appropriate authority, (4) a strategy implementation plan, and (5) activities that monitor and assess the department’s progress in implementing the strategy—DOD can capitalize on progress that it has already made and better support ensuring EMS superiority.
    • The GAO recommended:
      • The Secretary of Defense should ensure that the Vice Chairman of the Joint Chiefs of Staff, as Senior Designated Official of the Electromagnetic Spectrum Operations Cross-Functional Team (CFT), identifies the procedures and processes necessary to provide for integrated defense-wide strategy, planning, and budgeting with respect to joint electromagnetic spectrum operations, as required by the FY19 NDAA. (Recommendation 1)
      • The Secretary of Defense should ensure that the Vice Chairman of the Joint Chiefs of Staff as Senior Designated Official of the CFT proposes EMS governance, management, organizational, and operational reforms to the Secretary. (Recommendation 2)
      • The Secretary of Defense should assign clear responsibility to a senior official with authority and resources necessary to compel action for the long-term implementation of the 2020 strategy in time to oversee the execution of the 2020 strategy implementation plan. (Recommendation 3)
      • The Secretary of Defense should ensure that the designated senior official for long-term strategy implementation issues an actionable implementation plan within 180 days following issuance of the 2020 strategy. (Recommendation 4)
      • The Secretary of Defense should ensure that the designated senior official for long-term strategy implementation creates oversight processes that would facilitate the department’s implementation of the 2020 strategy. (Recommendation 5)
  • A forerunner to Apple’s App Store has sued the company, claiming it has monopolized applications on its operating system to the detriment of other parties and done the same with respect to its payment system. The company behind Cydia argues that it conceived of and created the first application store for the iPhone, offering a range of programs Apple did not. Cydia claims that once Apple understood how lucrative an app store would be, it blocked Cydia and established its own store as the exclusive means through which programs can be installed and used on iOS. Furthermore, this has enabled Apple to take a 30% cut of all in-application purchases, allegedly a $50 billion annual market. This is the second high-profile suit this year against Apple. Epic Games, the maker of the popular game Fortnite, sued Apple earlier this year on many of the same grounds after Epic began allowing players to buy directly from it at a 30% discount. Apple responded by removing the game from the App Store, which has blocked players from downloading updated versions. That litigation has just begun. In its complaint, Cydia asserts:
    • Historically, distribution of apps for a specific operating system (“OS”) occurred in a separate and robustly competitive market. Apple, however, began coercing users to utilize no other iOS app distribution service but the App Store, coupling it closer and closer to the iPhone itself in order to crowd out all competition. But Apple did not come up with this idea initially—it only saw the economic promise that iOS app distribution represented after others, like [Cydia], demonstrated that value with their own iOS app distribution products/services. Faced with this realization, Apple then decided to take that separate market (as well as the additional iOS app payment processing market described herein) for itself.
    • Cydia became hugely popular by offering a marketplace to find and obtain third party iOS applications that greatly expanded the capabilities of the stock iPhone, including games, productivity applications, and audio/visual applications such as a video recorder (whereas the original iPhone only allowed still camera photos). Apple subsequently took many of these early third party applications’ innovations, incorporating them into the iPhone directly or through apps.
    • But far worse than simply copying others’ innovations, Apple also recognized that it could reap enormous profits if it cornered this fledgling market for iOS app distribution, because that would give Apple complete power over iOS apps, regardless of the developer. Apple therefore initiated a campaign to eliminate competition for iOS app distribution altogether. That campaign has been successful and continues to this day. Apple did (and continues to do) so by, inter alia, tying the App Store app to iPhone purchases by preinstalling it on all iOS devices and then requiring it as the default method to obtain iOS apps, regardless of user preference for other alternatives; technologically locking down the iPhone to prevent App Store competitors like Cydia from even operating on the device; and imposing contractual terms on users that coerce and prevent them from using App Store competitors. Apple has also mandated that iOS app developers use it as their sole option for app payment processing (such as in-app purchases), thus preventing other competitors, such as Cydia, from offering the same service to those developers.
    • Through these and other anticompetitive acts, Apple has wrongfully acquired and maintained monopoly power in the market (or aftermarket) for iOS app distribution, and in the market (or aftermarket) for iOS app payment processing. Apple has frozen Cydia and all other competitors out of both markets, depriving them of the ability to compete with the App Store and to offer developers and consumers better prices, better service, and more choice. This anticompetitive conduct has unsurprisingly generated massive profits and unprecedented market capitalization for Apple, as well as incredible market power.
  • California is asking to join the antitrust suit against Google filed by the United States Department of Justice (DOJ) and eleven state attorneys general. This antitrust action centers on Google’s practices of making Google the default search engine on Android devices and paying browsers and other technology entities to make Google the default search engine. However, a number of states that had initially joined the joint state investigation of Google have opted not to join this action and will instead continue to investigate, signaling a much broader case than the one filed in the United States District Court for the District of Columbia. In any event, if the suit does proceed, and a change in Administration could result in a swift change in course, it may take years to resolve. Of course, given the legion of leaks from the DOJ and state attorneys general’s offices about the pressure U.S. Attorney General William Barr placed on staff and attorneys to bring a case before the election, there is criticism that rushing the case may result in a weaker, less comprehensive action that Google may ultimately fend off.
    • And there is likely to be another lawsuit against Google filed by other state attorneys general. A number of attorneys general who had originally joined the effort led by Texas Attorney General Ken Paxton in investigating Google released a statement at the time the DOJ suit was filed, indicating their investigation would continue and presaging a different, possibly broader lawsuit that might also address Google’s role in other markets. The attorneys general of New York, Colorado, Iowa, Nebraska, North Carolina, Tennessee, and Utah did not join the case that was filed but may soon file a related but parallel case. They stated:
      • Over the last year, both the U.S. DOJ and state attorneys general have conducted separate but parallel investigations into Google’s anticompetitive market behavior. We appreciate the strong bipartisan cooperation among the states and the good working relationship with the DOJ on these serious issues. This is a historic time for both federal and state antitrust authorities, as we work to protect competition and innovation in our technology markets. We plan to conclude parts of our investigation of Google in the coming weeks. If we decide to file a complaint, we would file a motion to consolidate our case with the DOJ’s. We would then litigate the consolidated case cooperatively, much as we did in the Microsoft case.
  • France’s Commission nationale de l’informatique et des libertés (CNIL) imposed multimillion-euro fines on Google and Amazon for placing cookies on users’ devices. CNIL fined Google a total of €100 million and Amazon €35 million because its investigations of both entities determined that “when a user visited [their] website, cookies were automatically placed on his or her computer, without any action required on his or her part…[and] [s]everal of these cookies were used for advertising purposes.”
    • CNIL explained the decision against Google:
      • [CNIL] noticed three breaches of Article 82 of the French Data Protection Act:
      • Deposit of cookies without obtaining the prior consent of the user
        • When a user visited the website google.fr, several cookies used for advertising purposes were automatically placed on his or her computer, without any action required on his or her part.
        • Since this type of cookies can only be placed after the user has expressed his or her consent, the restricted committee considered that the companies had not complied with the requirement provided for in Article 82 of the French Data Protection Act regarding the collection of prior consent before placing cookies that are not essential to the service.
      • Lack of information provided to the users of the search engine google.fr
        • When a user visited the page google.fr, an information banner displayed at the bottom of the page, with the following note “Privacy reminder from Google”, in front of which were two buttons: “Remind me later” and “Access now”.
        • This banner did not provide the user with any information regarding cookies that had however already been placed on his or her computer when arriving on the site. The information was also not provided when he or she clicked on the button “Access now”.
        • Therefore, the restricted committee considered that the information provided by the companies did not enable the users living in France either to be previously and clearly informed regarding the deposit of cookies on their computer or, therefore, to be informed of the purposes of these cookies and the available means enabling to refuse them.
      • Partial failure of the « opposition » mechanism
        • When a user deactivated the ad personalization on the Google search by using the available mechanism from the button “Access now”, one of the advertising cookies was still stored on his or her computer and kept reading information aimed at the server to which it is attached.
        • Therefore, the restricted committee considered that the “opposition” mechanism set up by the companies was partially defective, breaching Article 82 of the French Data Protection Act.
    • CNIL explained the case against Amazon:
      • [CNIL] noticed two breaches of Article 82 of the French Data Protection Act:
      • Deposit of cookies without obtaining the prior consent of the user
        • The restricted committee noted that when a user visited one of the pages of the website amazon.fr, a large number of cookies used for advertising purposes was automatically placed on his or her computer, before any action required on his or her part. Yet, the restricted committee recalled that this type of cookies, which are not essential to the service, can only be placed after the user has expressed his or her consent. It considered that the deposit of cookies at the same time as arriving on the site was a practice which, by its nature, was incompatible with a prior consent.
      • Lack of information provided to the users of the website amazon.fr
        • First, the restricted committee noted that, in the case of a user visiting the website amazon.fr, the information provided was neither clear, nor complete.
        • It considered that the information banner displayed by the company, which was “By using this website, you accept our use of cookies allowing to offer and improve our services. Read More.”, only contained a general and approximate information regarding the purposes of all the cookies placed. In particular, it considered that, by reading the banner, the user could not understand that cookies placed on his or her computer were mainly used to display personalized ads. It also noted that the banner did not explain to the user that it could refuse these cookies and how to do it.
        • Then, the restricted committee noticed that the company’s failure to comply with its obligation was even more obvious regarding the case of users that visited the website amazon.fr after they had clicked on an advertisement published on another website. It underlined that in this case, the same cookies were placed but no information was provided to the users about that.
  • Senator Amy Klobuchar (D-MN) wrote to the Secretary of Health and Human Services (HHS) to express “serious concerns regarding recent reports on the data collection practices of Amazon’s health-tracking bracelet (Halo) and to request information on the actions [HHS] is taking to ensure users’ health data is secure.” Klobuchar stated:
    • The Halo is a fitness tracker that users wear on their wrists. The tracker’s smartphone application (app) provides users with a wide-ranging analysis of their health by tracking a range of biological metrics including heartbeat patterns, exercise habits, sleep patterns, and skin temperature. The fitness tracker also enters into uncharted territory by collecting body photos and voice recordings and transmitting this data for analysis. To calculate the user’s body fat percentage, the Halo requires users to take scans of their body using a smartphone app. These photos are then temporarily sent to Amazon’s servers for analysis while the app returns a three-dimensional image of the user’s body, allowing the user to adjust the image to see what they would look like with different percentages of body fat. The Halo also offers a tone analysis feature that examines the nuances of a user’s voice to indicate how the user sounds to others. To accomplish this task, the device has built-in microphones that listen and records a user’s voice by taking periodic samples of speech throughout the day if users opt-in to the feature.
    • Recent reports have raised concerns about the Halo’s access to this extensive personal and private health information. Among publicly available consumer health devices, the Halo appears to collect an unprecedented level of personal information. This raises questions about the extent to which the tracker’s transmission of biological data may reveal private information regarding the user’s health conditions and how this information can be used. Last year, a study by BMJ (formerly the British Medical Journal) found that 79 percent of health apps studied by researchers were found to share user data in a manner that failed to provide transparency about the data being shared. The study concluded that health app developers routinely share consumer data with third-parties and that little transparency exists around such data sharing.
    • Klobuchar asked the Secretary of Health and Human Services Alex Azar II to “respond to the following questions:
      • What actions is HHS taking to ensure that fitness trackers like Halo safeguard users’ private health information?
      • What authority does HHS have to ensure the security and privacy of consumer data collected and analyzed by health tracking devices like Amazon’s Halo?
      • Are additional regulations required to help strengthen privacy and security protections for consumers’ personal health data given the rise of health tracking devices? Why or why not?
      • Please describe in detail what additional authority or resources that the HHS could use to help ensure the security and protection of consumer health data obtained through health tracking devices like the Halo.

Coming Events

  • On 15 December, the Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “The Role of Private Agreements and Existing Technology in Curbing Online Piracy” with these witnesses:
    • Panel I
      • Ms. Ruth Vitale, Chief Executive Officer, CreativeFuture
      • Mr. Probir Mehta, Head of Global Intellectual Property and Trade Policy, Facebook, Inc.
      • Mr. Mitch Glazier, Chairman and CEO, Recording Industry Association of America
      • Mr. Joshua Lamel, Executive Director, Re:Create
    • Panel II
      • Ms. Katherine Oyama, Global Director of Business Public Policy, YouTube
      • Mr. Keith Kupferschmid, Chief Executive Officer, Copyright Alliance
      • Mr. Noah Becker, President and Co-Founder, AdRev
      • Mr. Dean S. Marks, Executive Director and Legal Counsel, Coalition for Online Accountability
  • The Senate Armed Services Committee’s Cybersecurity Subcommittee will hold a closed briefing on Department of Defense Cyber Operations on 15 December with these witnesses:
    • Mr. Thomas C. Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy, Office of the Under Secretary of Defense for Policy
    • Mr. Jeffrey R. Jones, Vice Director, Command, Control, Communications and Computers/Cyber, Joint Staff, J-6
    • Ms. Katherine E. Arrington, Chief Information Security Officer for the Assistant Secretary of Defense for Acquisition, Office of the Under Secretary of Defense for Acquisition and Sustainment
    • Rear Admiral Jeffrey Czerewko, United States Navy, Deputy Director, Global Operations, J39, J3, Joint Staff
  • The Senate Banking, Housing, and Urban Affairs Committee’s Economic Policy Subcommittee will conduct a hearing titled “US-China: Winning the Economic Competition, Part II” on 16 December with these witnesses:
    • The Honorable Will Hurd, Member, United States House of Representatives;
    • Derek Scissors, Resident Scholar, American Enterprise Institute;
    • Melanie M. Hart, Ph.D., Senior Fellow and Director for China Policy, Center for American Progress; and
    • Roy Houseman, Legislative Director, United Steelworkers (USW).
  • On 17 December the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force will convene for a virtual event, “Partnership in Action: Driving Supply Chain Security.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Naya Shaw from Pexels

Further Reading, Other Developments, and Coming Events (14 December)

Further Reading

  • “Russian Hackers Broke Into Federal Agencies, U.S. Officials Suspect” By David Sanger — The New York Times; “Russian government hackers are behind a broad espionage campaign that has compromised U.S. agencies, including Treasury and Commerce” By Ellen Nakashima and Craig Timberg — The Washington Post; “Suspected Russian hackers spied on U.S. Treasury emails – sources” By Chris Bing — Reuters. Apparently, Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, exploited a vulnerability in SolarWinds’ update system used by many United States (U.S.) government systems, Fortune 500 companies, and the U.S.’ ten largest telecommunications companies. Reportedly, APT29 (aka Cozy Bear) has had free rein in the email systems of the Departments of the Treasury and Commerce, among other possible victims. The hackers may have also accessed a range of other entities around the world through the same SolarWinds software. Moreover, these penetrations may be related to the recently announced theft of hacking tools from FireEye, a private firm that used them to test clients’ systems.
  • “Hackers steal Pfizer/BioNTech COVID-19 vaccine data in Europe, companies say” By Jack Stubbs — Reuters. The European Union’s (EU) agency that oversees and approves medications has been hacked, and documents related to one of the new COVID-19 vaccines may have been stolen. The European Medicines Agency (EMA) was apparently penetrated, and materials related to Pfizer and BioNTech’s vaccine were exfiltrated. The scope of the theft is not yet known, but this is the latest of many attempts to hack the entities conducting research on the virus and potential vaccines.
  • “The AI Girlfriend Seducing China’s Lonely Men” By Zhang Wanqing — Sixth Tone. A chat bot powered by artificial intelligence that some men in the People’s Republic of China (PRC) are using extensively raises all sorts of ethical and privacy issues. Lonely people have turned to this AI technology and have confided their deepest feelings, which are stored by the company. It seems only a matter of time until these data are mined for commercial value or hacked. Also, the chatbot has run afoul of the PRC’s censorship policies. Finally, is this a preview of the world to come, much like the 2013 film, Her, in which humans have relationships with AI beings?
  • “YouTube will now remove videos disputing Joe Biden’s election victory” By Makena Kelly — The Verge. The Google subsidiary announced that because the safe harbor deadline has been reached and a sufficient number of states have certified President-elect Joe Biden, the platform will begin taking down misleading election videos. This change in policy may have come about, in part, because of pressure from Democrats in Congress about what they see as Google’s lackluster efforts to find and remove lies, misinformation, and disinformation about the 2020 election.
  • “Lots of people are gunning for Google. Meet the man who might have the best shot.” By Emily Birnbaum — Protocol. Colorado Attorney General Phil Weiser may be uniquely qualified to lead state attorneys general on a second antitrust and anti-competition action against Google given his background as a law professor steeped in antitrust and his service in the Department of Justice and White House during the Obama Administration.

Other Developments

  • Cybersecurity firm FireEye revealed it was “attacked by a highly sophisticated threat actor, one whose discipline, operational security, and techniques lead us to believe it was a state-sponsored attack,” according to CEO Kevin Mandia. This hacking may be related to the vast penetration of United States (U.S.) government systems revealed over the weekend. Mandia stated FireEye has “found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security…[that] mimic the behavior of many cyber threat actors and enable FireEye to provide essential diagnostic security services to our customers.” Mandia claimed none of these tools were zero-day exploits. FireEye is “proactively releasing methods and means to detect the use of our stolen Red Team tools…[and] out of an abundance of caution, we have developed more than 300 countermeasures for our customers, and the community at large, to use in order to minimize the potential impact of the theft of these tools.”
    • Mandia added:
      • Consistent with a nation-state cyber-espionage effort, the attacker primarily sought information related to certain government customers. While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems. If we discover that customer information was taken, we will contact them directly.
      • Based on my 25 years in cyber security and responding to incidents, I’ve concluded we are witnessing an attack by a nation with top-tier offensive capabilities. This attack is different from the tens of thousands of incidents we have responded to throughout the years. The attackers tailored their world-class capabilities specifically to target and attack FireEye. They are highly trained in operational security and executed with discipline and focus. They operated clandestinely, using methods that counter security tools and forensic examination. They used a novel combination of techniques not witnessed by us or our partners in the past.
      • We are actively investigating in coordination with the Federal Bureau of Investigation and other key partners, including Microsoft. Their initial analysis supports our conclusion that this was the work of a highly sophisticated state-sponsored attacker utilizing novel techniques.    
  • The United States’ (U.S.) Department of Justice (DOJ) filed suit against Facebook for “tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the permanent labor certification (PERM) process.” The DOJ is asking for an injunction to stop Facebook from engaging in the alleged conduct, civil penalties, and damages for workers harmed by this conduct.
    • The DOJ contended:
      • The department’s lawsuit alleges that beginning no later than Jan. 1, 2018 and lasting until at least Sept. 18, 2019, Facebook employed tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the PERM process. Rather than conducting a genuine search for qualified and available U.S. workers for permanent positions sought by these temporary visa holders, Facebook reserved the positions for temporary visa holders because of their immigration status, according to the complaint. The complaint also alleges that Facebook sought to channel jobs to temporary visa holders at the expense of U.S. workers by failing to advertise those vacancies on its careers website, requiring applicants to apply by physical mail only, and refusing to consider any U.S. workers who applied for those positions. In contrast, Facebook’s usual hiring process relies on recruitment methods designed to encourage applications by advertising positions on its careers website, accepting electronic applications, and not pre-selecting candidates to be hired based on a candidate’s immigration status, according to the lawsuit.
      • In its investigation, the department determined that Facebook’s ineffective recruitment methods dissuaded U.S. workers from applying to its PERM positions. The department concluded that, during the relevant period, Facebook received zero or one U.S. worker applicants for 99.7 percent of its PERM positions, while comparable positions at Facebook that were advertised on its careers website during a similar time period typically attracted 100 or more applicants each. These U.S. workers were denied an opportunity to be considered for the jobs Facebook sought to channel to temporary visa holders, according to the lawsuit. 
      • Not only do Facebook’s alleged practices discriminate against U.S. workers, they have adverse consequences on temporary visa holders by creating an employment relationship that is not on equal terms. An employer that engages in the practices alleged in the lawsuit against Facebook can expect more temporary visa holders to apply for positions and increased retention post-hire. Such temporary visa holders often have limited job mobility and thus are likely to remain with their company until they can adjust status, which for some can be decades.
      • The United States’ complaint seeks civil penalties, back pay on behalf of U.S. workers denied employment at Facebook due to the alleged discrimination in favor of temporary visa holders, and other relief to ensure Facebook stops the alleged violations in the future. According to the lawsuit, and based on the department’s nearly two-year investigation, Facebook’s discrimination against U.S. workers was intentional, widespread, and in violation of a provision of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b(a)(1), that the Department of Justice’s Civil Rights Division enforces. 
  • A trio of consumer protection regulators reached an agreement with Apple to add “a new section to each app’s product page in its App Store, containing key information about the data the app collects and an accessible summary of the most important information from the privacy policy.” The United Kingdom’s (UK) Competition and Markets Authority (CMA), the Netherlands Authority for Consumers and Markets, and the Norwegian Consumer Authority led the effort, which is part of “ongoing work from the International Consumer Protection and Enforcement Network (ICPEN), involving 27 of its consumer authority members across the world.” The three agencies explained:
    • Consumer protection authorities, including the CMA, became concerned that people were not being given clear information on how their personal data would be used before choosing an app, including on whether the app developer would share their personal data with a third party. Without this information, consumers are unable to compare and choose apps based on how they use personal data.
  • Australia’s Council of Financial Regulators (CFR) has released a Cyber Operational Resilience Intelligence-led Exercises (CORIE) framework “to test and demonstrate the cyber maturity and resilience of institutions within the Australian financial services industry.”


Photo by stein egil liland from Pexels

Task Force Calls For Enhanced Digital Regulation in UK

The UK may soon reform its competition and consumer laws vis-à-vis digital markets.

A United Kingdom (UK) task force has recommended that Prime Minister Boris Johnson and his Conservative government remake digital regulation in the UK, especially with respect to competition policy. The task force has returned an extensive set of recommendations requiring legislation, increased coordination, and a new focus for existing regulators. The timeline for such action is not clear, and Downing Street would have to agree before anything happens. However, the UK’s new regulatory scheme, alongside the European Union’s ongoing efforts to revamp its regulatory approach to large technology firms, will likely affect United States (U.S.) multinationals such as Facebook and Google. It may also serve as a template for the U.S. to remake its regulation of digital competition.

The United Kingdom’s Competition & Markets Authority (CMA) led an effort, the Digital Markets Taskforce, that also included the Office of Communications (Ofcom) and the Information Commissioner’s Office (ICO). The Task Force follows the 2019 “Unlocking digital competition, Report of the Digital Competition Expert Panel”, an effort led by former Obama Administration Council of Economic Advisers Chair Jason Furman, and the more recent July 2020 “Online platforms and digital advertising market study.” In 2019, the CMA issued its “Digital Markets Strategy”, which “sets out five strategic aims, and seven priority focus areas.”

The Task Force acknowledged its efforts in the UK were not unique. It referenced similar inquiries and plans to reform other nations’ regulation of digital markets in the U.S., the EU, Germany, Japan, and Australia.

The Task Force summarized its findings:

The accumulation and strengthening of market power by a small number of digital firms has the potential to cause significant harm to consumers and businesses that rely on them, to innovative competitors and to the economy and society more widely:

  • A poor deal for consumers and businesses who rely on them. These firms can exploit their powerful positions. For consumers this can mean they get a worse deal than they would in a more competitive market, for example having less protection or control of their data. For businesses this can mean they are, for example, charged higher listing fees or higher prices for advertising online. These higher prices for businesses can then feed through into higher prices for consumers for a wide range of products and services across the economy.
  • Innovative competitors face an unfair disadvantage. A powerful digital firm can extend its strong position in one market into other markets, ultimately giving itself an unfair advantage over its rivals. This means innovative competitors, even if they have a good idea, are likely to find it much harder to compete and grow their businesses. This can result in long-term harmful effects on innovation and the dynamism of UK markets.
  • A less vibrant digital economy. If powerful digital firms act to unfairly disadvantage their innovative competitors, these innovative firms will find it harder to enter and expand in new markets, meaning the ‘unicorns’ of tomorrow that will support jobs and the future digital economy will not emerge.

The Task Force calls for the establishment of a new Digital Markets Unit (DMU) that would be particularly focused on policing potential harm before it occurs. Thus, the Task Force is calling for a regulator that is proactive and nimble enough to address risks to competition and consumers before any harm happens. The DMU would oversee a new “Strategic Market Status” (SMS) regime, and the Task Force is recommending that the government and Parliament revisit and refresh consumer and competition laws. The Task Force stated that the “government should put in place a regulatory framework for the most powerful digital firms, alongside strengthening existing competition and consumer laws…[and] [i]n considering the design of this regulatory framework we have sought to strike the right balance between the following key principles:

  • Evidence driven and effective – regulation must be effective, and that means ensuring it is evidence based, but also that it can react swiftly enough to prevent and address harms. The activities undertaken by the most powerful digital firms are diverse and a ‘one size fits all’ approach could have damaging results.
  • Proportionate and targeted – regulation must be proportionate and targeted at addressing a particular problem, minimising the risk of any possible unintended consequences.
  • Open, transparent and accountable – across all its work the DMU should operate in an open and transparent manner. In reaching decisions it should consult a wide range of parties. It should clearly articulate why it has reached decisions and be held accountable for them.
  • Proactive and forward-looking – the DMU should be focused on preventing harm from occurring, rather than enforcing ex post. It should seek to understand how digital markets might evolve, the risks this poses to competition and innovation, and act proactively to assess and manage those risks.
  • Coherent – the DMU should seek to promote coherence with other regulatory regimes both domestically and internationally, in particular by working through the Digital Regulation Cooperation Forum which is already working to deliver a step change in coordination and cooperation between regulators in digital markets.

The Task Force provided more detail on the new SMS scheme:

The entry point to the SMS regime is an assessment of whether a firm has ‘strategic market status’. This should be an evidence-based economic assessment as to whether a firm has substantial, entrenched market power in at least one digital activity, providing the firm with a strategic position (meaning the effects of its market power are likely to be particularly widespread and/or significant). It is focused on assessing the very factors which may give rise to harm, and which motivate the need for regulatory intervention.

Those firms that are designated with SMS should be subject to the following three pillars of the regime:

  • An enforceable code of conduct that sets out clearly how an SMS firm is expected to behave in relation to the activity motivating its SMS designation. The aim of the code is to manage the effects of market power, for example by preventing practices which exploit consumers and businesses or exclude innovative competitors.
  • Pro-competitive interventions like personal data mobility, interoperability and data access which can be used to address the factors which are the source of an SMS firm’s market power in a particular activity. These interventions seek to drive longer-term dynamic changes in these activities, opening up opportunities for greater competition and innovation.
  • SMS merger rules to ensure closer scrutiny of transactions involving SMS firms, given the particular risks and potential consumer harm arising from these transactions.

The SMS regime should be an ex ante regime, focused on proactively preventing harm. Fostering a compliance culture within SMS firms will be crucial to its overall success. However, a key part of fostering compliance is credible deterrence and the DMU will need to be able to take tough action where harm does occur, requiring firms to change their behaviour, and with the ability to impose substantial penalties. The ability to take tough action sits alongside enabling resolution through a participative approach, whereby the DMU seeks to engage constructively with all affected parties to achieve fast and effective results.

The Task Force sketched its preferred timeline, under which Parliament would enact its recommendations in 2021 at the earliest:

We believe the case for an ex ante regime in digital markets has been made. We therefore welcome the government’s response to the CMA’s online platforms and digital advertising market study, and its commitment to establishing a DMU from April 2021 within the CMA. We also welcome government’s commitment to consult on proposals for a new pro-competition regime in early 2021 and to legislate to put the DMU on a statutory footing when parliamentary time allows. We urge government to move quickly in taking this legislation forward. As government rightly acknowledges, similar action is being pursued across the globe and there is a clear opportunity for the UK to lead the way in championing a modern pro-competition, pro-innovation regime.

The Task Force summarized its recommendations to the government:

A Digital Markets Unit

Recommendation 1: The government should set up a DMU which should seek to further the interests of consumers and citizens in digital markets, by promoting competition and innovation.

  • Recommendation 1a: The DMU should be a centre of expertise and knowledge in relation to competition in digital markets.
  • Recommendation 1b: The DMU should be proactive, seeking to foster compliance with regulatory requirements and taking swift action to prevent harm from occurring.

A pro-competition regime for the most powerful digital firms

Recommendation 2: The government should establish a pro-competition framework, to be overseen by the DMU, to pursue measures in relation to SMS firms which further the interests of consumers and citizens, by promoting competition and innovation.

Recommendation 3: The government should provide the DMU with the power to designate a firm with SMS.

  • Recommendation 3a: SMS should require a finding that the firm has substantial, entrenched market power in at least one digital activity, providing the firm with a strategic position.
  • Recommendation 3b: The DMU should set out in formal guidance its prioritisation rules for designation assessments. These should include the firm’s revenue (globally and within the UK), the activity undertaken by the firm and a consideration of whether a sector regulator is better placed to address the issues of concern.
  • Recommendation 3c: The designation process should be open and transparent with a consultation on the provisional decision and the assessment completed within a statutory deadline.
  • Recommendation 3d: A firm’s SMS designation should be set for a fixed period before being reviewed.
  • Recommendation 3e: When a firm meets the SMS test, the associated remedies should apply only to a subset of the firm’s activities, whilst the status should apply to the firm as a whole.

Recommendation 4: The government should establish the SMS regime such that when the SMS test is met, the DMU can establish an enforceable code of conduct for the firm in relation to its designated activities to prevent it from taking advantage of its power and position.

  • Recommendation 4a: A code should comprise high-level objectives supported by principles and guidance.
  • Recommendation 4b: The objectives of the code should be set out in legislation, with the remainder of the content of each code to be determined by the DMU, tailored to the activity, conduct and harms it is intended to address.
  • Recommendation 4c: The DMU should ensure the code addresses the concerns about the effect of the power and position of SMS firms when dealing with publishers, as identified by the Cairncross Review.
  • Recommendation 4d: The code of conduct should always apply to the activity or activities which are the focus of the SMS designation.
  • Recommendation 4e: The DMU should consult on and establish a code as part of the designation assessment. The DMU should be able to vary the code outside the designation review cycle.

Recommendation 5: SMS firms should have a legal obligation to ensure their conduct is compliant with the requirements of the code at all times and put in place measures to foster compliance.

Recommendation 6: The government should establish the SMS regime such that the DMU can impose pro-competitive interventions on an SMS firm to drive dynamic change as well as to address harms related to the designated activities.

  • Recommendation 6a: With the exception of ownership separation, the DMU should not be limited in the types of remedies it is able to apply.
  • Recommendation 6b: The DMU should be able to implement PCIs anywhere within an SMS firm in order to address a concern related to its substantial entrenched market power and strategic position in a designated activity.
  • Recommendation 6c: In implementing a PCI the DMU should demonstrate that it is an effective and proportionate remedy to an adverse effect on competition or consumers. A PCI investigation should be completed within a fixed statutory deadline.
  • Recommendation 6d: PCIs should be implemented for a limited duration and should be regularly reviewed.

Recommendation 7: The government should establish the SMS regime such that the DMU can undertake monitoring in relation to the conduct of SMS firms and has a range of tools available to resolve concerns.

  • Recommendation 7a: Where appropriate, the DMU should seek to resolve concerns using a participative approach, engaging with parties to deliver fast and effective resolution.
  • Recommendation 7b: The DMU should be able to open formal investigations into breaches of the code and where a breach is found, require an SMS firm to change its behaviour. These investigations should be completed within a fixed statutory deadline.
  • Recommendation 7c: The DMU should be able to impose substantial penalties for breaches of the code and for breaches of code and PCI orders.
  • Recommendation 7d: The DMU should be able to take action quickly on an interim basis where it suspects the code has been breached.
  • Recommendation 7e: The DMU should be able to undertake scoping assessments where it is concerned there is an adverse effect on competition or consumers in relation to a designated activity. The outcome of such assessments could include a code breach investigation, a pro-competitive intervention investigation, or variation to a code principle or guidance.

Recommendation 8: The government should establish the SMS regime such that the DMU can draw information from a wide range of sources, including by using formal information gathering powers, to gather the evidence it needs to inform its work.

Recommendation 9: The government should ensure the DMU’s decisions are made in an open and transparent manner and that it is held accountable for them.

  • Recommendation 9a: The DMU’s decisions should allow for appropriate internal scrutiny.
  • Recommendation 9b: The DMU should consult on its decisions.
  • Recommendation 9c: The DMU’s decisions should be timely, with statutory deadlines used to set expectations and deliver speedy outcomes.
  • Recommendation 9d: The DMU’s decisions should be judicially reviewable on ordinary judicial review principles and the appeals process should deliver robust outcomes at pace.

Recommendation 10: The government should establish the SMS regime such that SMS firms are subject to additional merger control requirements.

Recommendation 11: The government should establish the SMS merger control regime such that SMS firms are required to report all transactions to the CMA. In addition, transactions that meet clear-cut thresholds should be subject to mandatory notification, with completion prohibited prior to clearance. Competition concerns should be assessed using the existing substantive test but a lower and more cautious standard of proof.

A modern competition and consumer regime for digital markets

Recommendation 12: The government should provide the DMU with a duty to monitor digital markets to enable it to build a detailed understanding of how digital businesses operate, and to provide the basis for swifter action to drive competition and innovation and prevent harm.

Recommendation 13: The government should strengthen competition and consumer protection laws and processes to ensure they are better adapted for the digital age.

  • Recommendation 13a: The government should pursue significant reforms to the markets regime to ensure it can be most effectively utilised to promote competition and innovation across digital markets, for example by pursuing measures like data mobility and interoperability.
  • Recommendation 13b: The government should strengthen powers to tackle unlawful or illegal activity or content on digital platforms which could result in economic detriment to consumers and businesses.
  • Recommendation 13c: The government should take action to strengthen powers to enable effective consumer choice in digital markets, including by addressing instances where choice architecture leads to consumer harm.
  • Recommendation 13d: The government should provide for stronger enforcement of the Platform to Business Regulation.

A coherent regulatory landscape

Recommendation 14: The government should ensure the DMU is able to work closely with other regulators with responsibility for digital markets, in particular Ofcom, the ICO and the FCA.

  • Recommendation 14a: The DMU should be able to share information with other regulators and seek reciprocal arrangements.
  • Recommendation 14b: The government should consider, in consultation with Ofcom and the FCA, empowering these agencies with joint powers with the DMU in relation to the SMS regime, with the DMU being the primary authority.

Recommendation 15: The government should enable the DMU to work closely with regulators in other jurisdictions to promote a coherent regulatory landscape.

  • Recommendation 15a: The DMU should be able to share information with regulators in other jurisdictions and should seek reciprocal arrangements.
  • Recommendation 15b: The DMU should explore establishing a network of international competition and consumer agencies to facilitate better monitoring and action in relation to the conduct of SMS firms.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Free-Photos from Pixabay

Further Reading, Other Developments, and Coming Events (8 December)

Further Reading

  • “Facebook failed to put fact-check labels on 60% of the most viral posts containing Georgia election misinformation that its own fact-checkers had debunked, a new report says” By Tyler Sonnemaker — Business Insider. Despite its vows to improve its handling of untrue and false content, the platform is not consistently labeling or taking down such material related to the runoffs for the Georgia Senate seats. The group behind this finding argues this is because Facebook does not want to. What is left unsaid is that engagement drives revenue, and so Facebook’s incentive is not to police all violations but to take down just enough to be able to say it is doing something.
  • “Federal Labor Agency Says Google Wrongly Fired 2 Employees” By Kate Conger and Noam Scheiber — The New York Times. The National Labor Relations Board (NLRB) has reportedly sided with two employees Google fired for activities that are traditionally considered labor organizing. The two engineers had been dismissed for allegedly violating the company’s data security practices when they researched the company’s retention of a union-busting firm and sought to alert others about organizing. Even though Google is vowing to fight the action, which has not been finalized, it may well settle given the view of Big Tech in Washington these days. This action could also foretell how a Biden Administration NLRB may look at the labor practices of these companies.
  • “U.S. states plan to sue Facebook next week: sources” By Diane Bartz — Reuters. We could see state and federal antitrust suits against Facebook this week. One investigation led by New York Attorney General Tish James could include 40 states, although the grounds for the alleged violations have not been leaked at this point. The suits may target Facebook’s acquisition of potential rivals Instagram and WhatsApp, which has allowed it to dominate the social messaging market. The Federal Trade Commission (FTC) may also file suit, and, again, the grounds are unknown. The European Commission (EC) is also investigating Facebook for possible violations of European Union (EU) antitrust law over the company’s use of the personal data it holds and its operation of its online marketplace.
  • “The Children of Pornhub” By Nicholas Kristof — The New York Times. This column comprehensively traces the reprehensible recent history of Mindgeek, the Canadian conglomerate that owns Pornhub, where one can find reams of child and non-consensual pornography. Why Ottawa has not cracked down on this firm is a mystery. The passage and implementation of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164), which narrowed the liability shield under 47 USC 230, has forced the company to remove content, a significant change from its indifference before the change in law. Kristof suggests some easy, common sense changes Mindgeek could implement to combat the presence of this illegal material, but it seems like the company will do just enough to say it is acting without seriously reforming its platform. Why would it? There is too much money to be made. Additionally, those fighting against this sort of material have been pressuring payment platforms to stop doing business with Mindgeek. PayPal has forsworn any interaction, and due to pressure Visa and Mastercard are “reviewing” their relationships with Mindgeek and Pornhub. In a statement to a different news outlet, Pornhub claimed it is “unequivocally committed to combating child sexual abuse material (CSAM), and has instituted a comprehensive, industry-leading trust and safety policy to identify and eradicate illegal material from our community.” The company further claimed “[a]ny assertion that we allow CSAM is irresponsible and flagrantly untrue….[w]e have zero tolerance for CSAM.”
  • “Amazon and Apple Are Powering a Shift Away From Intel’s Chips” By Don Clark — The New York Times. Two tech giants have chosen newer, faster, cheaper chips, signaling a possible industry shift away from Intel, the firm that has been a significant player for decades. Intel will not go quietly, of course, and a key variable is whether must-have software and applications are rewritten to accommodate the new chips, which are based on designs from the British firm Arm.

Other Developments

  • The Government Accountability Office (GAO) and the National Academy of Medicine (NAM) have released a joint report on artificial intelligence in healthcare, consisting of GAO’s Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care and NAM’s Special Publication: Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. GAO’s report “discusses three topics: (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges to the development and adoption of these tools, and (3) policy options to maximize benefits and mitigate challenges to the use of AI tools to augment patient care.” NAM’s “paper aims to provide an analysis of: 1) current technologies and future applications of AI in HSOHC, 2) the logistical steps and challenges involved in integrating AI-HSOHC applications into existing provider workflows, and 3) the ethical and legal considerations of such AI tools, followed by a brief proposal of potential key initiatives to guide the development and adoption of AI in health settings outside the hospital and clinic (HSOHC).”
    • The GAO “identified five categories of clinical applications where AI tools have shown promise to augment patient care: predicting health trajectories, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management.” The GAO “also identified three categories of administrative applications where AI tools have shown promise to reduce provider burden and increase the efficiency of patient care: recording digital clinical notes, optimizing operational processes, and automating laborious tasks.” The GAO stated:
      • This technology assessment also identifies challenges that hinder the adoption and impact of AI tools to augment patient care, according to stakeholders, experts, and the literature. Difficulties accessing sufficient high-quality data may hamper innovation in this space. Further, some available data may be biased, which can reduce the effectiveness and accuracy of the tools for some people. Addressing bias can be difficult because the electronic health data do not currently represent the general population. It can also be challenging to scale tools up to multiple locations and integrate them into new settings because of differences in institutions and the patient populations they serve. The limited transparency of AI tools used in health care can make it difficult for providers, regulators, and others to determine whether an AI tool is safe and effective. A greater dispersion of data across providers and institutions can make securing patient data difficult. Finally, one expert described how existing case law does not specifically address AI tools, which can make providers and patients reticent to adopt them. Some of these challenges are similar to those identified previously by GAO in its first publication in this series, such as the lack of high-quality, structured data, and others are more specific to patient care, such as liability concerns.
    • The GAO “described six policy options:”
      • Collaboration. Policymakers could encourage interdisciplinary collaboration between developers and health care providers. This could result in AI tools that are easier to implement and use within an existing workflow.
      • Data Access. Policymakers could develop or expand high-quality data access mechanisms. This could help developers address bias concerns by ensuring data are representative, transparent, and equitable.
      • Best Practices. Policymakers could encourage relevant stakeholders and experts to establish best practices (such as standards) for development, implementation, and use of AI technologies. This could help with deployment and scalability of AI tools by providing guidance on data, interoperability, bias, and formatting issues.
      • Interdisciplinary Education. Policymakers could create opportunities for more workers to develop interdisciplinary skills. This could allow providers to use AI tools more effectively, and could be accomplished through a variety of methods, including changing medical curricula or grants.
      • Oversight Clarity. Policymakers could collaborate with relevant stakeholders to clarify appropriate oversight mechanisms. Predictable oversight could help ensure that AI tools remain safe and effective after deployment and throughout their lifecycle.
      • Status Quo. Policymakers could allow current efforts to proceed without intervention.
    • NAM claimed
      • Numerous AI-powered health applications designed for personal use have been shown to improve patient outcomes, building predictions based on large volumes of granular, real-time, and individualized behavioral and medical data. For instance, some forms of telehealth, a technology that has been critical during the COVID-19 pandemic, benefit considerably from AI software focused on natural language processing, which enables efficient triaging of patients based on urgency and type of illness. Beyond patient-provider communication, AI algorithms relevant to diabetic and cardiac care have demonstrated remarkable efficacy in helping patients manage their blood glucose levels in their day-to-day lives and in detecting cases of atrial fibrillation. AI tools that monitor and synthesize longitudinal patient behaviors are also particularly useful in psychiatric care, where the exact timing of interventions is often critical. For example, smartphone-embedded sensors that track location and proximity of individuals can alert clinicians of possible substance use, prompting immediate intervention. On the population health level, these individual indicators of activity and health can be combined with environmental- and system-level data to generate predictive insight into local and global health trends. The most salient example of this may be the earliest warnings of the COVID-19 outbreak, issued in December 2019 by two private AI technology firms.
      • Successful implementation and widespread adoption of AI applications in HSOHC requires careful consideration of several key issues related to personal data, algorithm development, and health care insurance and payment. Chief among them are data interoperability, standardization, privacy, ameliorating systemic biases in algorithms, reimbursement of AI-assisted services, quality improvement, and integration of AI tools into provider workflows. Overcoming these challenges and optimizing the impact of AI tools on clinical outcomes will involve engaging diverse stakeholders, deliberately designing AI tools and interfaces, rigorously evaluating clinical and economic utility, and diffusing and scaling algorithms across different health settings. In addition to these potential logistical and technical hurdles, it is imperative to consider the legal and ethical issues surrounding AI, particularly as it relates to the fair and humanistic deployment of AI applications in HSOHC. Important legal considerations include the appropriate designation of accountability and liability of medical errors resulting from AI-assisted decisions for ensuring the safety of patients and consumers. Key ethical challenges include upholding the privacy of patients and their data—particularly with regard to non-HIPAA covered entities involved in the development of algorithms—building unbiased AI algorithms based on high-quality data from representative populations, and ensuring equitable access to AI technologies across diverse communities.
  • The National Institute of Standards and Technology (NIST) published a “new study of face recognition technology created after the onset of the COVID-19 pandemic [that] shows that some software developers have made demonstrable progress at recognizing masked faces.” In Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face Recognition Accuracy with Face Masks Using Post-COVID-19 Algorithms (NISTIR 8331), NIST stated the “report augments its predecessor with results for more recent algorithms provided to NIST after mid-March 2020.” NIST said that “[w]hile we do not have information on whether or not a particular algorithm was designed with face coverings in mind, the results show evidence that a number of developers have adapted their algorithms to support face recognition on subjects potentially wearing face masks.” NIST stated that
    • The following results represent observations on algorithms provided to NIST both before and after the COVID-19 pandemic to date. We do not have information on whether or not a particular algorithm was designed with face coverings in mind. The results documented capture a snapshot of algorithms submitted to the FRVT 1:1 in face recognition on subjects potentially wearing face masks.
      • False rejection performance: All algorithms submitted after the pandemic continue to give increased false non-match rates (FNMR) when the probes are masked. While a few pre-pandemic algorithms still remain within the most accurate on masked photos, some developers have submitted algorithms after the pandemic showing significantly improved accuracy and are now among the most accurate in our test.
      • Evolution of algorithms on face masks: We observe that a number of algorithms submitted since mid-March 2020 show notable reductions in error rates with face masks over their pre-pandemic predecessors. When comparing error rates for unmasked versus masked faces, the median FNMR across algorithms submitted since mid-March 2020 has been reduced by around 25% from the median pre-pandemic results. The figure below presents examples of developer evolution on both masked and unmasked datasets. For some developers, false rejection rates in their algorithms submitted since mid-March 2020 decreased by as much as a factor of 10 over their pre-pandemic algorithms, which is evidence that some providers are adapting their algorithms to handle face masks. However, in the best cases, when comparing results for unmasked images to masked images, false rejection rates have increased from 0.3%-0.5% (unmasked) to 2.4%-5% (masked).
      • False acceptance performance: As most systems are configured with a fixed threshold, it is necessary to report both false negative and false positive rates for each group at that threshold. When comparing a masked probe to an unmasked enrollment photo, in most cases, false match rates (FMR) are reduced by masks. The effect is generally modest with reductions in FMR usually being smaller than a factor of two. This property is valuable in that masked probes do not impart adverse false match security consequences for verification.
      • Mask-agnostic face recognition: All 1:1 verification algorithms submitted to the FRVT test since the start of the pandemic are evaluated on both masked and unmasked datasets. The test is designed this way to mimic operational reality: some images will have masks, some will not (especially enrollment samples from a database or ID card). And to the extent that the use of protective masks will exist for some time, our test will continue to evaluate algorithmic capability on verifying all combinations of masked and unmasked faces.
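NIST’s two headline metrics can be illustrated with a short calculation. The sketch below uses invented toy similarity scores and an arbitrary threshold of 0.5 (none of this is NIST data or code); it computes FNMR and FMR at a single fixed operating point, mirroring how a 1:1 verification system is evaluated:

```python
# Sketch: false non-match rate (FNMR) and false match rate (FMR) at a
# fixed decision threshold, as reported in NIST's FRVT 1:1 evaluations.
# The score lists and threshold below are toy values for illustration.

def fnmr(genuine_scores, threshold):
    """Fraction of genuine (same-person) comparisons wrongly rejected."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def fmr(impostor_scores, threshold):
    """Fraction of impostor (different-person) comparisons wrongly accepted."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Toy similarity scores in [0, 1]: ten same-person pairs, ten impostor pairs.
genuine = [0.91, 0.88, 0.95, 0.42, 0.97, 0.85, 0.90, 0.93, 0.38, 0.96]
impostor = [0.10, 0.22, 0.31, 0.05, 0.44, 0.12, 0.28, 0.19, 0.35, 0.08]

threshold = 0.5
print(f"FNMR: {fnmr(genuine, threshold):.2f}")   # FNMR: 0.20 (2 of 10 rejected)
print(f"FMR:  {fmr(impostor, threshold):.2f}")   # FMR:  0.00 (0 of 10 accepted)
```

Because deployed systems fix the threshold (typically to hit a target FMR), NIST reports both rates at the same operating point; masking the probe image tends to push genuine scores down, raising FNMR, while also slightly lowering impostor scores, which is why FMR modestly falls.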
  • The government in London has issued a progress report on its current cybersecurity strategy that has another year to run. The Paymaster General assessed how well the United Kingdom (UK) has implemented the National Cyber Security Strategy 2016 to 2021 and pointed to goals yet to be achieved. This assessment comes in the shadow of the end of the UK’s Brexit transition period with the European Union (EU) and Prime Minister Boris Johnson’s plans to increase the UK’s role in select defense issues, including cyber operations. The Paymaster General stated:
    • The global landscape has changed significantly since the publication of the National Cyber Security Strategy Progress Report in May 2019. We have seen unprecedented levels of disruption to our way of life that few would have predicted. The COVID-19 pandemic has increased our reliance on digital technologies – for our personal communications with friends and family and our ability to work remotely, as well as for businesses and government to continue to operate effectively, including in support of the national response.
    • These new ways of living and working highlight the importance of cyber security, which is also underlined by wider trends. An ever greater reliance on digital networks and systems, more rapid advances in new technologies, a wider range of threats, and increasing international competition on underlying technologies and standards in cyberspace, emphasise the need for good cyber security practices for individuals, businesses and government.
    • Although the scale and international nature of these changes present challenges, there are also opportunities. With the UK’s departure from the European Union in January 2020, we can define and strengthen Britain’s place in the world as a global leader in cyber security, as an independent, sovereign nation.
    • The sustained, strategic investment and whole of society approach delivered so far through the National Cyber Security Strategy has ensured we are well placed to respond to this changing environment and seize new opportunities.
    • The Paymaster General asserted:
      • [The] report has highlighted growing risks, some accelerated by the COVID-19 pandemic, and longer-term trends that will shape the environment over the next decade:
      • Ever greater reliance on digital networks and systems as daily life moves online, bringing huge benefits but also creating new systemic and individual risks.
      • Rapid technological change and greater global competition, challenging our ability to shape the technologies that will underpin our future security and prosperity.
      • A wider range of adversaries as criminals gain easier access to commoditised attack capabilities and cyber techniques form a growing part of states’ toolsets.
      • Competing visions for the future of the internet and the risk of fragmentation, making consensus on norms and ethics in cyberspace harder to achieve.
      • In February 2020 the Prime Minister announced the Integrated Review of Security, Defence, Development and Foreign Policy. This will define the government’s ambition for the UK’s role in the world and the long-term strategic aims of our national security and foreign policy. It will set out the way in which the UK will be a problem-solving and burden-sharing nation, and a strong direction for recovery from COVID-19, at home and overseas.
      • This will help to shape our national approach and priorities on cyber security beyond 2021. Cyber security is a key element of our international, defence and security posture, as well as a driving force for our economic prosperity.
  • The University of Toronto’s Citizen Lab published a report on an Israeli surveillance firm that uses “[o]ne of the widest-used—but least appreciated” means of surveilling people (i.e., “leveraging of weaknesses in the global mobile telecommunications infrastructure to monitor and intercept phone calls and traffic”). Citizen Lab explained that an affiliate of the NSO Group, “Circles is known for selling systems to exploit Signaling System 7 (SS7) vulnerabilities, and claims to sell this technology exclusively to nation-states.” Citizen Lab noted that “[u]nlike NSO Group’s Pegasus spyware, the SS7 mechanism by which Circles’ product reportedly operates does not have an obvious signature on a target’s phone, such as the telltale targeting SMS bearing a malicious link that is sometimes present on a phone targeted with Pegasus.” Citizen Lab found that
    • Circles is a surveillance firm that reportedly exploits weaknesses in the global mobile phone system to snoop on calls, texts, and the location of phones around the globe. Circles is affiliated with NSO Group, which develops the oft-abused Pegasus spyware.
    • Circles, whose products work without hacking the phone itself, says they sell only to nation-states. According to leaked documents, Circles customers can purchase a system that they connect to their local telecommunications companies’ infrastructure, or can use a separate system called the “Circles Cloud,” which interconnects with telecommunications companies around the world.
    • According to the U.S. Department of Homeland Security, all U.S. wireless networks are vulnerable to the types of weaknesses reportedly exploited by Circles. A majority of networks around the globe are similarly vulnerable.
    • Using Internet scanning, we found a unique signature associated with the hostnames of Check Point firewalls used in Circles deployments. This scanning enabled us to identify Circles deployments in at least 25 countries.
    • We determine that the governments of the following countries are likely Circles customers: Australia, Belgium, Botswana, Chile, Denmark, Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala, Honduras, Indonesia, Israel, Kenya, Malaysia, Mexico, Morocco, Nigeria, Peru, Serbia, Thailand, the United Arab Emirates (UAE), Vietnam, Zambia, and Zimbabwe.
    • Some of the specific government branches we identify with varying degrees of confidence as being Circles customers have a history of leveraging digital technology for human rights abuses. In a few specific cases, we were able to attribute the deployment to a particular customer, such as the Internal Security Operations Command (ISOC) of the Royal Thai Army, which has allegedly tortured detainees.
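    • The scanning technique Citizen Lab describes (matching a unique signature in the hostnames of Check Point firewalls found through Internet-wide scans) can be sketched in a few lines. The signature pattern, hostnames, and domain below are invented placeholders, not Citizen Lab’s actual signature:

```python
import re

# Hypothetical signature regex: stands in for the hostname pattern Citizen Lab
# matched in Check Point firewall deployments; the real pattern is not reproduced here.
SIGNATURE = re.compile(r"^client-[a-z]{2}-\d+\.example-deploy\.net$")

def flag_deployments(scanned_hostnames):
    """Return only the scanned hostnames that match the deployment signature."""
    return [h for h in scanned_hostnames if SIGNATURE.match(h)]

# Invented scan results: two hosts fit the pattern, one ordinary host does not.
scan_results = [
    "client-th-01.example-deploy.net",
    "mail.example.org",
    "client-ae-22.example-deploy.net",
]
print(flag_deployments(scan_results))
# → ['client-th-01.example-deploy.net', 'client-ae-22.example-deploy.net']
```

Grouping matched hosts by hints in the hostname (such as a country code) is then what allows attributing deployments to likely customers, as in the 25-country list above.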
  • Senators Ron Wyden (D-OR), Elizabeth Warren (D-MA), Edward J. Markey (D-MA), and Brian Schatz (D-HI) “announced that the Department of Homeland Security (DHS) will launch an inspector general investigation into Customs and Border Protection’s (CBP) warrantless tracking of phones in the United States following an inquiry from the senators earlier this year” per their press release.
    • The Senators added:
      • As revealed by public contracts, CBP has paid a government contractor named Venntel nearly half a million dollars for access to a commercial database containing location data mined from applications on millions of Americans’ mobile phones. CBP officials also confirmed the agency’s warrantless tracking of phones in the United States using Venntel’s product in a September 16, 2020 call with Senate staff.
      • In 2018, the Supreme Court held in Carpenter v. United States that the collection of significant quantities of historical location data from Americans’ cell phones is a search under the Fourth Amendment and therefore requires a warrant.
      • In September 2020, Wyden and Warren successfully pressed for an inspector general investigation into the Internal Revenue Service’s use of Venntel’s commercial location tracking service without a court order.
    • In a letter, the DHS Office of the Inspector General (OIG) explained:
      • We have reviewed your request and plan to initiate an audit that we believe will address your concerns. The objective of our audit is to determine if the Department of Homeland Security (DHS) and it [sic] components have developed, updated, and adhered to policies related to cell-phone surveillance devices. In addition, you may be interested in our audit to review DHS’ use and protection of open source intelligence. Open source intelligence, while different from cell phone surveillance, includes the Department’s use of information provided by the public via cellular devices, such as social media status updates, geo-tagged photos, and specific location check-ins.
    • In an October letter, these Senators plus Senator Sherrod Brown (D-OH) argued:
      • CBP is not above the law and it should not be able to buy its way around the Fourth Amendment. Accordingly, we urge you to investigate CBP’s warrantless use of commercial databases containing Americans’ information, including but not limited to Venntel’s location database. We urge you to examine what legal analysis, if any, CBP’s lawyers performed before the agency started to use this surveillance tool. We also request that you determine how CBP was able to begin operational use of Venntel’s location database without the Department of Homeland Security Privacy Office first publishing a Privacy Impact Assessment.
  • The American Civil Liberties Union (ACLU) has filed a lawsuit in a federal court in New York City, seeking an order to compel the United States (U.S.) Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE) “to release records about their purchases of cell phone location data for immigration enforcement and other purposes.” The ACLU made these information requests after numerous media accounts showing that these and other U.S. agencies were buying location data and other sensitive information in ways intended to evade the bar in the Fourth Amendment against unreasonable searches.
    • In its press release, the ACLU asserted:
      • In February, The Wall Street Journal reported that this sensitive location data isn’t just for sale to commercial entities, but is also being purchased by U.S. government agencies, including by U.S. Immigration and Customs Enforcement to locate and arrest immigrants. The Journal identified one company, Venntel, that was selling access to a massive database to the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and ICE. Subsequent reporting has identified other companies selling access to similar databases to DHS and other agencies, including the U.S. military.
      • These practices raise serious concerns that federal immigration authorities are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. There’s even more reason for alarm when those agencies evade requests for information — including from U.S. senators — about such practices. That’s why today we asked a federal court to intervene and order DHS, CBP, and ICE to release information about their purchase and use of precise cell phone location information. Transparency is the first step to accountability.
    • The ACLU explained in the suit:
      • Multiple news sources have confirmed these agencies’ purchase of access to databases containing precise location information for millions of people—information gathered by applications (apps) running on their smartphones. The agencies’ purchases raise serious concerns that they are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. Yet, more than nine months after the ACLU submitted its FOIA request (“the Request”), these agencies have produced no responsive records. The information sought is of immense public significance, not only to shine a light on the government’s use of powerful location-tracking data in the immigration context, but also to assess whether the government’s purchase of this sensitive data complies with constitutional and legal limitations and is subject to appropriate oversight and control.
  • Facebook’s new Oversight Board announced “the first cases it will be deliberating and the opening of the public comment process” and “the appointment of five new trustees.” The cases were almost all referred by Facebook users, and the new board is asking for comments on the right way to manage what may be objectionable content. The Oversight Board explained it is “prioritizing cases that have the potential to affect lots of users around the world, are of critical importance to public discourse or raise important questions about Facebook’s policies.”
    • The new trustees are:
      • Kristina Arriaga is a globally recognized advocate for freedom of expression, with a focus on freedom of religion and belief. Kristina is president of the advisory firm Intrinsic.
      • Cherine Chalaby is an expert on internet governance, international finance and technology, with extensive board experience. As Chairman of ICANN, he led development of the organization’s five-year strategic plan for 2021 to 2025.
      • Wanda Felton has over 30 years of experience in the financial services industry, including serving as Vice Chair of the Board and First Vice President of the Export-Import Bank of the United States.
      • Kate O’Regan is a former judge of the Constitutional Court of South Africa and commissioner of the Khayelitsha Commission. She is the inaugural director of the Bonavero Institute of Human Rights at the University of Oxford.
      • Robert Post is an American legal scholar and Professor of Law at Yale Law School, where he formerly served as Dean. He is a leading scholar of the First Amendment and freedom of speech.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (4 December)

Further Reading

  • “How Misinformation ‘Superspreaders’ Seed False Election Theories” By Sheera Frenkel — The New York Times. A significant percentage of the lies, misinformation, and disinformation about the legitimacy of the election has been disseminated by a small number of right-wing figures, whose claims are then repeated, reposted, and retweeted. The Times draws on research into how much engagement people like President Donald Trump and Dan Bongino get on Facebook after posting untrue claims about the election; it turns out that such trends and rumors do not start spontaneously.
  • “Facebook Said It Would Ban Holocaust Deniers. Instead, Its Algorithm Provided a Network for Them” By Aaron Sankin — The Markup. This news organization still found Holocaust denial material promoted by Facebook’s algorithm even though the platform recently said it was taking down such material. This result may point to the difficulty of policing objectionable material that uses coded language and/or the social media platform’s lack of sufficient resources to weed out this sort of content.
  • “What Facebook Fed the Baby Boomers” By Charlie Warzel — The New York Times. A dispiriting trip inside two people’s Facebook feeds. This article makes the very good point that comments are not moderated, and these tend to be significant sources of vitriol and disinformation.
  • “How to ‘disappear’ on Happiness Avenue in Beijing” By Vincent Ni and Yitsing Wang — BBC. By next year, the People’s Republic of China (PRC) may have as many as 560 million security cameras, and one artist ran an experiment of sorts to see whether a group of people could walk down a major street in the capital without their faces being captured by a camera.
  • “Patients of a Vermont Hospital Are Left ‘in the Dark’ After a Cyberattack” By Ellen Barry and Nicole Perlroth — The New York Times. A Russian hacking outfit may have struck back after the Department of Defense’s (DOD) Cyber Command and Microsoft disrupted its operations. A number of hospitals were hacked, and care was significantly disrupted. This dynamic may lend itself to arguments that the United States (U.S.) would be wise to curtail its offensive operations.
  • “EU seeks anti-China alliance on tech with Biden” By Jakob Hanke Vela and David M. Herszenhorn — Politico. The European Union (EU) is hoping the United States (U.S.) will be more amenable to working together in the realm of future technology policy, especially against the People’s Republic of China (PRC), which has made a concerted effort to drive the adoption of standards that favor its companies (e.g., the PRC pushed for and obtained 5G standards that will favor Huawei). Diplomatically speaking, this is considered low-hanging fruit, and a Biden Administration will undoubtedly be more multilateral than the Trump Administration.
  • “Can We Make Our Robots Less Biased Than We Are?” By David Berreby — The New York Times. The bias present in facial recognition technology and artificial intelligence is making its way into robotics, raising the question of how to change this. Many African American and other minority scientists are calling for the inclusion of people of color in designing such systems as a countermeasure to the usual bias toward white men.

Other Developments

  • Senator Gary Peters (D-MI), the top Democrat on the Senate Homeland Security and Governmental Affairs Committee, wrote President Donald Trump and “slammed the Trump Administration for their lack of action against foreign adversaries, including Russia, China, and North Korea, that have sponsored cyber-attacks against American hospitals and research institutions in an effort to steal information related to development of Coronavirus vaccines.” Peters used unusually strong language, as Members of Congress typically tone down the rhetoric and deploy coded language to signal their level of displeasure about administration action or inaction. Peters may well feel strongly about what he perceives to be Trump Administration indifference to the cyber threats facing institutions researching and developing COVID-19 vaccines; it is also an issue on which he may be trying to split Republicans, placing them in the difficult position of either lining up behind a president disinclined to prioritize some cyber issues or breaking ranks with him.
    • Peters stated:
      • I urge you, again, to send a strong message to any foreign government attempting to hack into our medical institutions that this behavior is unacceptable. The Administration should use the tools at its disposal, including the threat of sanctions, to deter future attacks against research institutions. In the event that any foreign government directly threatens the lives of Americans through attacks on medical facilities, other Department of Defense capabilities should be considered to make it clear that there will be consequences for these actions.
  • A United States federal court has ruled against Trump Administration appointee Michael Pack and the United States Agency for Global Media (USAGM) over their attempts to interfere illegally with the independence of government-funded news organizations such as the Voice of America (VOA). The District Court for the District of Columbia enjoined Pack and the USAGM from a list of actions that VOA and USAGM officials claim are contrary to the First Amendment and the organization’s mission.
  • The Federal Trade Commission (FTC) is asking a United States federal court to compel former Trump White House advisor Steve Bannon to appear for questioning per a Civil Investigative Demand (CID) as part of its ongoing probe of Cambridge Analytica’s role in misusing personal data of Facebook users in the 2016 Presidential Election. The FTC noted it “issued the CID to determine, among other things, whether Bannon may be held individually liable for the deceptive conduct of Cambridge Analytica, LLC—the subject of an administrative law enforcement action brought by the Commission.” An interview had been scheduled in September, but the day before it was to take place, Bannon’s lawyers informed the FTC he would not be attending.
    • In 2019, the FTC settled with former Cambridge Analytica CEO Alexander Nix and app developer Aleksandr Kogan in “administrative orders restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC did not, however, settle with the company itself. The agency alleged “that Cambridge Analytica, Nix, and Kogan deceived consumers by falsely claiming they did not collect any personally identifiable information from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.” Facebook settled with the FTC for a record $5 billion for its role in the Cambridge Analytica scandal and for how it violated its 2012 consent order with the agency.
  • Apple responded to a group of human rights and civil liberties organizations about its plans to deploy technology on its operating system that gives users greater control of their privacy. Apple confirmed that its App Tracking Transparency (ATT) would be made part of iOS early next year and would present users of Apple products with a prompt warning them about how their information may be used by the app developer. ATT would stop app developers from tracking users when they use other apps on a device. Companies like Facebook have objected, claiming that the change is a direct shot at them and their revenue. Apple does not reap a significant revenue stream from collecting, combining, and processing user data, whereas Facebook does. Facebook also tracks users across devices and across apps on a device through a variety of means.
    • Apple stated:
      • We delayed the release of ATT to early next year to give developers the time they indicated they needed to properly update their systems and data practices, but we remain fully committed to ATT and to our expansive approach to privacy protections. We developed ATT for a single reason: because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.
      • ATT doesn’t ban the reasonable collection of user data for app functionality or even for advertising. Just as with the other data-access permissions we have added over many software releases, developers will be able to explain why they want to track users both before the ATT prompt is shown and in the prompt itself. At that point, users will have the freedom to make their own choice about whether to proceed. This privacy innovation empowers consumers — not Apple — by simply making it clear what their options are, and giving them the information and power to choose.
    • As mentioned, a number of groups wrote Apple in October “to express our disappointment that Apple is delaying the full implementation of iOS 14’s anti-tracking features until early 2021.” They argued:
      • These features will constitute a vital policy improvement with the potential to strengthen respect for privacy across the industry. Apple should implement these features as expeditiously as possible.
      • We were heartened by Apple’s announcement that starting with the iOS 14 update, all app developers will be required to provide information that will help users understand the privacy implications of an app before they install it, within the App Store interface.
      • We were also pleased that iOS 14 users would be required to affirmatively opt in to app tracking, on an app-by-app basis. Along with these changes, we urge Apple to verify the accuracy of app policies, and to publish transparency reports showing the number of apps that are rejected and/or removed from the App Store due to inadequate or inaccurate policies.
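    • The opt-in model described above (tracking allowed only after an affirmative, app-by-app choice by the user) can be sketched abstractly. The class and identifiers below are invented for illustration and are not Apple’s actual API:

```python
class ConsentRegistry:
    """Toy model of opt-in, per-app tracking consent: tracking is denied
    unless the user has affirmatively allowed it for that specific app."""

    def __init__(self):
        # app_id -> True/False; an app is absent until the user answers its prompt
        self._choices = {}

    def record_choice(self, app_id, allowed):
        """Store the user's answer to the tracking prompt for one app."""
        self._choices[app_id] = allowed

    def may_track(self, app_id):
        """Default-deny: an app the user never answered for cannot track."""
        return self._choices.get(app_id, False)

registry = ConsentRegistry()
registry.record_choice("com.example.social", True)   # user opted in for this app
print(registry.may_track("com.example.social"))  # True
print(registry.may_track("com.example.game"))    # False: never prompted, so denied
```

The key design point is the default in may_track: the absence of an answer is treated as a denial, which is what distinguishes opt-in from opt-out tracking.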
  • The United States (U.S.) Government Accountability Office (GAO) sent its assessment of the privacy notices and practices of U.S. banks and credit unions to the chair of the Senate committee that oversees this issue. Senate Banking, Housing, and Urban Affairs Committee Chair Mike Crapo (R-ID) had asked the GAO “to examine the types of personal information that financial institutions collect, use, and share; how they make consumers aware of their information-sharing practices; and federal regulatory oversight of these activities.” The GAO found that a ten-year-old model privacy disclosure form used across these industries may comply with the prevailing federal requirements but no longer encompasses the breadth and scope of how the personal information of people is collected, processed, and used. The GAO called on the Consumer Financial Protection Bureau (CFPB) to update this form. The GAO explained:
    • Banks and credit unions collect, use, and share consumers’ personal information—such as income level and credit card transactions—to conduct everyday business and market products and services. They share this information with a variety of third parties, such as service providers and retailers.
    • The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to provide consumers with a privacy notice describing their information-sharing practices. Many banks and credit unions elect to use a model form—issued by regulators in 2009—which provides a safe harbor for complying with the law (see figure). GAO found the form gives a limited view of what information is collected and with whom it is shared. Consumer and privacy groups GAO interviewed cited similar limitations. The model form was issued over 10 years ago. The proliferation of data-sharing since then suggests a reassessment of the form is warranted. Federal guidance states that notices about information collection and usage are central to providing privacy protections and transparency.
    • Since Congress transferred authority to the CFPB for implementing GLBA privacy provisions, the agency has not reassessed if the form meets consumer expectations for disclosures of information-sharing. CFPB officials said they had not considered a reevaluation because they had not heard concerns from industry or consumer groups about privacy notices. Improvements to the model form could help ensure that consumers are better informed about all the ways banks and credit unions collect and share personal information.
    • The increasing amounts of and changing ways in which industry collects and shares consumer personal information—including from online activities—highlights the importance of clearly disclosing practices for collection, sharing, and use. However, our work shows that banks and credit unions generally used the model form, which was created more than 10 years ago, to make disclosures required under GLBA. As a result, the disclosures often provided a limited view of how banks and credit unions collect, use, and share personal information.
    • We recognize that the model form is required to be succinct, comprehensible to consumers, and allow for comparability across institutions. But, as information practices continue to change or expand, consumer insights into those practices may become even more limited. Improvements and updates to the model privacy form could help ensure that consumers are better informed about all the ways that banks and credit unions collect, use, and share personal information. For instance, in online versions of privacy notices, there may be opportunities for readers to access additional details—such as through hyperlinks—in a manner consistent with statutory requirements.
  • The Australian Competition & Consumer Commission (ACCC) is asking for feedback on Google’s proposed $2.1 billion acquisition of Fitbit. In a rather pointed statement, the chair of the ACCC, Rod Sims, made clear “[o]ur decision to begin consultation should not be interpreted as a signal that the ACCC will ultimately accept the undertaking and approve the transaction.” The buyout is also under scrutiny in the European Union (EU) and may be affected by the suit the United States Department of Justice (DOJ) and some states have brought against the company for anti-competitive behavior. The ACCC released a Statement of Issues in June about the proposed deal.
    • The ACCC explained “[t]he proposed undertaking would require Google to:
      • not use certain user data collected through Fitbit and Google wearables for Google’s advertising purposes for 10 years, with an option for the ACCC to extend this obligation by up to a further 10 years;
      • maintain access for third parties, such as health and fitness apps, to certain user data collected through Fitbit and Google wearable devices for 10 years; and
      • maintain levels of interoperability between third party wearables and Android smartphones for 10 years.
    • In August, the EU “opened an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The European Commission (EC) expressed its concerns “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC stated “[a]t this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The EC explained that it “will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed…[and] will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • Amnesty International (AI) sent EC Executive Vice-President Margrethe Vestager a letter, arguing “[t]he merger risks further extending the dominance of Google and its surveillance-based business model, the nature and scale of which already represent a systemic threat to human rights.” AI asserted “[t]he deal is particularly troubling given the sensitive nature of the health data that Fitbit holds that would be acquired by Google.” AI argued “[t]he Commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”
  • Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro have cooperated on a report that looks “into current and predicted criminal uses of artificial intelligence (AI).”
    • The organizations argued “AI could be used to support:
      • convincing social engineering attacks at scale;
      • document-scraping malware to make attacks more efficient;
      • evasion of image recognition and voice biometrics;
      • ransomware attacks, through intelligent targeting and evasion;
      • data pollution, by identifying blind spots in detection rules.
    • The organizations concluded:
      • Based on available insights, research, and a structured open-source analysis, this report covered the present state of malicious uses and abuses of AI, including AI malware, AI-supported password guessing, and AI-aided encryption and social engineering attacks. It also described concrete future scenarios ranging from automated content generation and parsing, AI-aided reconnaissance, smart and connected technologies such as drones and autonomous cars, to AI-enabled stock market manipulation, as well as methods for AI-based detection and defense systems.
      • Using one of the most visible malicious uses of AI — the phenomenon of so-called deepfakes — the report further detailed a case study on the use of AI techniques to manipulate or generate visual and audio content that would be difficult for humans or even technological solutions to immediately distinguish from authentic ones.
      • As speculated on in this paper, criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new, innovative criminal business models — all the while reducing their chances of being caught. Consequently, as “AI-as-a-Service” becomes more widespread, it will also lower the barrier to entry by reducing the skills and technical expertise required to facilitate attacks. In short, this further exacerbates the potential for AI to be abused by criminals and for it to become a driver of future crimes.
      • Although the attacks detailed here are mostly theoretical, crafted as proofs of concept at this stage, and although the use of AI to improve the effectiveness of malware is still in its infancy, it is plausible that malware developers are already using AI in more obfuscated ways without being detected by researchers and analysts. For instance, malware developers could already be relying on AI-based methods to bypass spam filters, escape the detection features of antivirus software, and frustrate the analysis of malware. In fact, DeepLocker, a tool recently introduced by IBM and discussed in this paper, already demonstrates these attack abilities that would be difficult for a defender to stop.
      • In addition, AI could also enhance traditional hacking techniques by introducing new ways of performing attacks that would be difficult for humans to predict. These could include fully automated penetration testing, improved password-guessing methods, tools to break CAPTCHA security systems, or improved social engineering attacks. With respect to open-source tools providing such functionalities, the paper discussed some that have already been introduced, such as DeepHack, DeepExploit, and XEvil.
      • The widespread use of AI assistants, meanwhile, also creates opportunities for criminals who could exploit the presence of these assistants in households. For instance, criminals could break into a smart home by hijacking an automation system through exposed audio devices.
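To make the report's point about machine-aided password guessing more concrete, here is a deliberately toy sketch: a character-level Markov model "trained" on a tiny corpus of leaked passwords that then proposes plausible new candidates. The corpus, function names, and approach are illustrative assumptions only; real tools of the kind the report discusses rely on far more capable models.

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus standing in for a real leaked-password dump.
LEAKED = ["password1", "passw0rd", "letmein", "dragon123", "sunshine1"]

def train(corpus):
    """Record character transitions, with start (^) and end ($) markers."""
    model = defaultdict(list)
    for pw in corpus:
        chars = ["^"] + list(pw) + ["$"]
        for a, b in zip(chars, chars[1:]):
            model[a].append(b)
    return model

def generate(model, rng, max_len=16):
    """Walk the transition table to emit one candidate password."""
    out, cur = [], "^"
    while len(out) < max_len:
        cur = rng.choice(model[cur])
        if cur == "$":
            break
        out.append(cur)
    return "".join(out)

model = train(LEAKED)
rng = random.Random(42)
candidates = [generate(model, rng) for _ in range(5)]
```

Even this crude statistical approach biases guesses toward the patterns real users favor, which is why learned models outperform brute-force enumeration.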

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (18 November)

Further Reading

  • “Trump fires top DHS official who refuted his claims that the election was rigged” By Ellen Nakashima and Nick Miroff — The Washington Post. As rumored, President Donald Trump has decapitated the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA). Director Christopher Krebs was fired via Twitter after he had endorsed a letter by 59 experts on election security who said there was no fraud in the election. Trump tweeted: “The recent statement by Chris Krebs on the security of the 2020 Election was highly inaccurate, in that there were massive improprieties and fraud — including dead people voting, Poll Watchers not allowed into polling locations, ‘glitches’ in the voting machines which changed votes from Trump to Biden, late voting, and many more. Therefore, effective immediately, Chris Krebs has been terminated as Director of the Cybersecurity and Infrastructure Security Agency.” Of course, the statement CISA cosigned and issued last week asserting there was no evidence of fraud or wrongdoing in the election probably did not help his prospects. Additionally, CISA Deputy Director Matthew Travis was essentially forced out when he was informed the normal succession plan would be ignored and he would not become the acting head of CISA. A CISA senior civil servant, Brandon Wales, will helm the agency on an acting basis. Last week, CISA’s Assistant Director for Cybersecurity Bryan Ware was forced out.
  • “NSA Spied On Denmark As It Chose Its Future Fighter Aircraft: Report” By Thomas Newdick — The Drive. A Danish media outlet is claiming the United States’ (U.S.) National Security Agency (NSA) spied on Denmark’s Ministry of Finance, the Ministry of Foreign Affairs, and the defense firm Terma in order to help Lockheed Martin’s bid to sell F-35 Joint Strike Fighters to Denmark. Eurofighter GmbH and Saab were offering their Typhoon and Gripen fighters to replace Denmark’s F-16s. Reportedly, the NSA used an existing arrangement with Denmark to obtain information from a program allowing the NSA access to fiber optic cables in the country. It is likely Denmark did not have such surveillance in mind when it struck this agreement with the U.S. Two whistleblower reports have been filed with the Forsvarets Efterretningstjeneste (FE), Denmark’s Defense Intelligence Service, and there are allegations that the U.S. surveillance was illegal. However, the surveillance appears not to have influenced the Danish government, which opted for the F-35. Earlier this year, there were allegations the FE was improperly sharing Danish cables containing information on Danish citizens.
  • “Facebook Knows That Adding Labels To Trump’s False Claims Does Little To Stop Their Spread” By Craig Silverman and Ryan Mac — BuzzFeed News. These reporters must know half of Facebook’s staff because they always seem to see what is going on internally at the company. In this latest scoop, they say they have seen internal numbers showing that labeling President Donald Trump’s false posts has done little to slow their spread. In fact, labeling may only slow their spread by 8%. This outcome is contrary to a practice Facebook employed in 2017 under which fact checkers would label untrue posts as false. This reduced their virality by 80%.
  • “Apple Halves Its App Store Fee for the Smaller Companies” By Jack Nicas — The New York Times. The holiday spirit must already be afoot in Cupertino, California, for small app developers will now pay Apple only 15% of in-app purchases for the privilege of being in the App Store. Of course, this decision has nothing to do with the antitrust pressure the company is facing in the European Union and United States (U.S.), and it will have very little impact on Apple’s bottom line since app developers with less than $1 million in revenue (i.e., those entitled to the reduction) account for 2% of App Store revenue. It does give Apple’s leadership and executives some great talking points when pressed by antitrust investigators, legislators, and the media.
  • “Inside the behind-the-scenes fight to convince Joe Biden about Silicon Valley” By Theodore Schleifer — recode. The jockeying among factions in the Democratic party and other stakeholders is fierce and will only grow fiercer when it comes to who will serve where in a Biden Administration. Silicon Valley and those who would reform tech are fighting to get people amenable to their policy goals placed in the new Administration. President-elect Joe Biden and his campaign were ambiguous on many tech policy issues, leaving him flexibility that has been further helped by appointing people respected in both camps, like new White House Chief of Staff Ron Klain.
  • “Group of 165 Google critics calls for swift EU antitrust action – letter” By Foo Yun Chee — Reuters. A wide-ranging group of companies and industry associations are urging the European Union to investigate and punish what they see as Google’s anti-competitive dominance of online search engines, especially the One Box that now appears at the top of search results that points people to Google sites and products.

Other Developments

  • The European Union (EU) announced a revision of its export control process for allowing the export of dual-use items, including cyber surveillance tools. The European Commission (EC) asserted “[t]hanks to the new Regulation, the EU can now effectively protect its interests and values and, in particular, address the risk of violations of human rights associated with trade in cyber-surveillance technologies without prior agreement at multilateral level…[and] also enhances the EU’s capacity to control trade flows in sensitive new and emerging technologies.” The EC explained “[t]he new Regulation includes many of the Commission proposals for a comprehensive “system upgrade”, and will make the existing EU Export control system more effective by:
    • introducing a novel ‘human security’ dimension so the EU can respond to the challenges posed by emerging dual-use technologies – especially cyber-surveillance technologies – that pose a risk to national and international security, including protecting human rights;
    • updating key notions and definitions (e.g. definition of an “exporter” to apply to natural persons and researchers involved in dual-use technology transfers);
    • simplifying and harmonising licensing procedures and allowing the Commission to amend – by ‘simplified’ procedure, i.e. delegated act – the list of items or destinations subject to specific forms of control, thereby making the export control system more agile and able to evolve and adjust to circumstances;
    • enhancing information-exchange between licensing authorities and the Commission with a view to increasing transparency of licensing decisions;
    • coordination of, and support for, robust enforcement of controls, including enhancing secure electronic information-exchange between licensing and enforcement agencies;
    • developing an EU capacity-building and training programme for Member States’ licensing and enforcement authorities;
    • outreach to industry and transparency with stakeholders, developing a structured relationship with the private sector through specific consultations of stakeholders by the relevant Commission group of Member-State experts, and;
    • setting up a dialogue with third countries and seeking a level playing field at global level.
    • The European Parliament contended:
      • The reviewed rules, agreed by Parliament and Council negotiators, govern the export of so-called dual use goods, software and technology – for example, high-performance computers, drones and certain chemicals – with civilian applications that might be repurposed to be used in ways which violate human rights.
      • The current update, made necessary by technological developments and growing security risks, includes new criteria to grant or reject export licenses for certain items.
      • The Parliament added its negotiators
        • got agreement on setting up an EU-wide regime to control cyber-surveillance items that are not listed as dual-use items in international regimes, in the interest of protecting human rights and political freedoms;
        • strengthened member states’ public reporting obligations on export controls, so far patchy, to make the cyber-surveillance sector in particular more transparent;
        • increased the importance of human rights as licensing criterion; and
        • agreed on rules to swiftly include emerging technologies in the regulation.
  • The United States House of Representatives passed three technology bills by voice vote yesterday. Two of these bills would address in different ways the United States’ (U.S.) efforts to make up ground on the People’s Republic of China in the race to roll out 5G networks. It is not clear whether the Senate will take up these bills before year’s end and send them to the White House, but it is possible given how discrete the bills are in scope. The House Energy and Commerce Committee provided these summaries:
    • The “Utilizing Strategic Allied (USA) Telecommunications Act of 2020” (H.R.6624) creates a new grant program through the National Telecommunications and Information Administration (NTIA) to promote technology that enhances supply chain security and market competitiveness in wireless communications networks.
      • One of the bill’s sponsors, House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) stated:
        • Earlier this year, the House passed, and the President signed, my Secure and Trusted Communications Networks Act to create a program to fund the replacement of suspect network equipment. Suspect equipment, including that produced by Huawei and ZTE, could allow foreign adversaries to surveil Americans at home or, worse, disrupt our communications systems.
        • While we are still pushing for Congress to appropriate funds to that end, it is important to recognize that my legislation was only half the battle, even when it is funded. We also need to create and foster competition for trusted network equipment that uses open interfaces so that the United States is not beholden to a market for network equipment that is becoming less competitive. This bill before us today, the Utilizing Strategic Allied Telecommunications Act, or the USA Telecommunications Act, does just that.
        • The bipartisan legislation creates a grant program and authorizes $750 million in funding for the National Telecommunications and Information Administration to help promote and deploy Open Radio Access Network technologies that can spur that type of competition. We must support alternatives to companies like Huawei and ZTE…
    • The “Spectrum IT Modernization Act of 2020” (H.R.7310) requires NTIA – in consultation with the Policy and Plans Steering Group – to submit to Congress a report on its plans to modernize agency information technology systems relating to managing the use of federal spectrum. 
      • A sponsor of the bill, House Energy and Commerce Committee Ranking Member Greg Walden (R-OR) explained:
        • H.R. 7310 would require NTIA to establish a process to upgrade their spectrum management infrastructure for the 21st century. The bill would direct the policy coordination arm of NTIA to submit a plan to Congress as to how they will standardize the data collection across agencies and then directs agencies with Federal spectrum assignments from NTIA to issue an implementation plan to interoperate with NTIA’s plan.
        • This is a good-government bill–it really is–and with continued support and oversight from Congress, we can continue the United States’ leadership in making Federal spectrum available for flexible use by the private sector.
    • The “Reliable Emergency Alert Distribution Improvement (READI) Act of 2020” (H.R.6096) amends the Warning, Alert, and Response Network Act to classify emergency alerts from the Federal Emergency Management Agency as a type of alert that commercial mobile service providers may not allow subscribers to block from their devices. The bill also directs the Federal Communications Commission (FCC) to adopt regulations to facilitate coordination with State Emergency Communications Committees in developing and modernizing State Emergency Alert System plans. Finally, the READI Act directs the FCC to examine the feasibility of modernizing the Emergency Alert System by expanding alert distribution to the internet and streaming services.  
  • The same privacy activists that brought the suits that resulted in the striking down of the Safe Harbor and Privacy Shield agreements have filed complaints in Spain and Germany that Apple has violated the European Union’s (EU) e-Privacy Directive and laws in each nation through its use of IDFA (Apple’s Identifier for Advertisers). Because the General Data Protection Regulation (GDPR) is not the grounds for the complaints, each nation could act without needing to consult other EU nations. Moreover, a similar system used by Google is also being investigated for possible violations. The group none of your business (noyb) asserted:
    • IDFA – the cookie in every iPhone user’s pocket. Each iPhone runs on Apple’s iOS operating system. By default, iOS automatically generates a unique “IDFA” (short for Identifier for Advertisers) for each iPhone. Just like a license plate this unique string of numbers and characters allows Apple and other third parties to identify users across applications and even connect online and mobile behaviour (“cross device tracking”).
    • Tracking without user consent. Apple’s operating system creates the IDFA without user’s knowledge or consent. After its creation, Apple and third parties (e.g. applications providers and advertisers) can access the IDFA to track users’ behaviour, elaborate consumption preferences and provide personalised advertising. Such tracking is strictly regulated by the EU “Cookie Law” (Article 5(3) of the e-Privacy Directive) and requires the users’ informed and unambiguous consent.
    • Insufficient “improvement” on third-party access. Recently Apple announced plans for future changes to the IDFA system. These changes seem to restrict the use of the IDFA for third parties (but not for Apple itself). Just like when an app requests access to the camera or microphone, the plans foresee a new dialog that asks the user if an app should be able to access the IDFA. However, the initial storage of the IDFA and Apple’s use of it will still be done without the users’ consent and therefore in breach of EU law. It is unclear when and if these changes will be implemented by the company.
    • No need for EU cooperation. As the complaint is based on Article 5(3) of the e-Privacy Directive and not the GDPR, the Spanish and German authorities can directly fine Apple, without the need for cooperation among EU Data Protection Authorities as under GDPR.
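To make concrete how a stable device-wide identifier such as the IDFA enables the cross-app tracking noyb describes, here is a minimal illustrative sketch. The apps, events, and identifier values are hypothetical: once two unrelated apps tag their logs with the same identifier, any third party holding both logs can join them into a single behavioral profile.

```python
from collections import defaultdict

# Event logs as two independent app publishers might record them, each
# entry tagged with the same device-level advertising identifier (IDFA).
news_app_log = [
    {"idfa": "6D92078A-8246-4BA4-AE5B-76104861E7DC", "event": "read_politics"},
    {"idfa": "11111111-2222-3333-4444-555555555555", "event": "read_sports"},
]
shopping_app_log = [
    {"idfa": "6D92078A-8246-4BA4-AE5B-76104861E7DC", "event": "viewed_shoes"},
]

def build_profiles(*logs):
    """Merge events from unrelated apps, keyed on the shared identifier."""
    profiles = defaultdict(list)
    for log in logs:
        for entry in log:
            profiles[entry["idfa"]].append(entry["event"])
    return dict(profiles)

profiles = build_profiles(news_app_log, shopping_app_log)
# The first device now carries a combined cross-app behavioral profile.
```

The join works for any party with access to both logs, which is why noyb's complaint targets the identifier's creation itself rather than any single app's use of it.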
  • The Federal Trade Commission (FTC) Chair made remarks at an antitrust conference on how antitrust law should view “an acquisition of a nascent competitive threat by a monopolist when there is reason to think that the state of competition today may not tell the whole story.” Chair Joseph Simons’ views are timely for a number of reasons, particularly the extent to which large technology firms have sought and bought smaller, newer companies. Obviously, the acquisitions of WhatsApp and Instagram by Facebook and of YouTube and AdSense by Google come to mind as the sorts of acquisitions United States (U.S.) regulators approved, possibly without much thought given to what a future market may look like for competition if the larger, dominant company is allowed to proceed. Simons suggested regulators and courts would be wise to give this aspect of antitrust much more thought, which could theoretically inform the approach the Biden Department of Justice and FTC take. Simons stated:
    • And if firms are looking to the future, then antitrust enforcers should too. We must be willing and able to recognize that harm to competition might not be obvious from looking at the marketplace as it stands. If we confine ourselves to examining a static picture of the market at the moment we investigate a practice or transaction, without regard to the dynamic business realities at work, then we risk forfeiting the benefits of competition that could arise in the future to challenge the dominant firm, even when this future competition is to some extent uncertain.
    • Simons asserted:
      • A merger or acquisition can of course constitute anticompetitive conduct for purposes of Section 2 [of the Sherman Act].
      • From a competition perspective, a monopolist can “squash” a nascent competitor by buying it, not just by targeting it with anticompetitive actions as Microsoft did. In fact, from the monopolist’s perspective, it may be easier and more effective to buy the nascent threat (even if only to keep it out of the hands of others) than to target it with other types of anticompetitive conduct.
      • A central issue in potential competition cases is the nature and strength of evidence that the parties will become actual competitors in the future. Some cases have applied Section 7 [of the Clayton Act] narrowly in this context: too narrowly, I think, given that the purpose of Section 7 is to prohibit acquisitions that “may” substantially lessen competition or “tend” to create a monopoly.
    • Simons concluded:
      • But uncertainty has always been a feature of the competitive process, even in markets that appear to be simple or traditional, and dealing with uncertainty is all in a day’s work for an antitrust enforcer. I have referred to the Microsoft case repeatedly today, so, in closing, let me remind everyone that there was some uncertainty about the future in Microsoft as well. The court, in holding that the plaintiff does not and should not bear the burden of “reconstruct[ing] a product’s hypothetical development,” observed that the defendant should appropriately be “made to suffer the uncertain consequences of its own undesirable conduct.” The same holds when the monopolist has simply chosen to acquire the threat.
  • The National Institute of Standards and Technology’s (NIST) National Initiative for Cybersecurity Education (NICE) revised the Workforce Framework for Cybersecurity (NICE Framework) that “improves communications about how to identify, recruit, develop, and retain cybersecurity talent – offering a common, consistent lexicon that categorizes and describes cybersecurity work.” NIST explained:
    • The NICE Framework assists organizations with managing cybersecurity risks by providing a way to discuss the work and learners associated with cybersecurity. These cybersecurity risks are an important input into enterprise risk decisions as described in NIST Interagency Report 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM).
    • NIST stated “[r]evisions to the NICE Framework (NIST Special Publication 800-181) provide:
      • A streamlined set of “building blocks” comprised of Task, Knowledge, and Skill Statements;
      • The introduction of Competencies as a mechanism for organizations to assess learners; and
      • A reference to artifacts, such as Work Roles and Knowledge Skills and Abilities statements, that will live outside of the publication to enable a more fluid update process.
  • A center-left think tank published a report on how the United States (U.S.) and likeminded nations can better fight cybercrime. In the report addressed to President-elect Joe Biden and Vice President-elect Kamala Harris, Third Way presented the results of a “multiyear effort to define concrete steps to improve the government’s ability to tackle the scourge of cybercrime by better identifying unlawful perpetrators and imposing meaningful consequences on them and those behind their actions.” In “A Roadmap to Strengthen US Cyber Enforcement: Where Do We Go From Here?,” Third Way made a list of detailed recommendations on how the Biden Administration could better fight cybercrime, but in the cover letter to the report, there was a high-level summary of these recommendations:
    • In this roadmap, we identify the challenges the US government faces in investigating and prosecuting these crimes and advancing the level of international cooperation necessary to do so. Cyberattackers take great pains to hide their identity, using sophisticated tools that require technical investigative and forensic expertise to attribute the attacks. The attacks are often done at scale, where perpetrators prey on multiple victims across many jurisdictions and countries, requiring coordination across criminal justice agencies. The skills necessary to investigate these crimes are in high demand in the private sector, making it difficult to retain qualified personnel. A number of diplomatic barriers make cross-border cooperation difficult, a challenge often exacerbated by blurred lines between state and non-state actors in perpetrating these crimes.
    • This roadmap recommends actions that your administration can take to develop a comprehensive strategy to reduce cybercrime and minimize its impact on the American people by identifying the perpetrators and imposing meaningful consequences on them. We propose you make clear at the outset to the American public and global partners that cyber enforcement will be a top priority for your administration. In reinstating a White House cybersecurity position, we have extensive recommendations on how that position should address cybercrime. And, to make policy from an intelligence baseline, we believe you should request a National Intelligence Estimate on the linkages between cybercrime and nation-state cyber actors to understand the scope of the problem.
    • Our law enforcement working group has detailed recommendations to improve and modernize law enforcement’s ability to track and respond to cybercrime. And our global cooperation working group has detailed recommendations on creating a cohesive international cyber engagement strategy; assessing and improving the capacity of foreign partners on cybercrime; and improving the process for cross-border data requests that are critical to solving these crimes. We believe that with these recommendations, you can make substantial strides in bringing cybercriminals to justice and deterring future cybercriminals from victimizing Americans.

Coming Events