Further Reading, Other Development, and Coming Events (20 and 21 January 2021)

Further Reading

  • “Amazon’s Ring Neighbors app exposed users’ precise locations and home addresses” By Zack Whittaker — TechCrunch. Once again, Amazon’s home security platform has exposed or inadequately protected users’ data.
  • “Harassment of Chinese dissidents was warning signal on disinformation” By Shawna Chen and Bethany Allen-Ebrahimian — Axios. An example of how malicious online activities can spill into the real world: a number of Chinese dissidents were set upon by protestors.
  • “How Social Media’s Obsession with Scale Supercharged Disinformation” By Joan Donovan — Harvard Business Review. Companies like Facebook and Twitter emphasized scale over safety in trying to grow as quickly as possible. This led to a proliferation of fake accounts and proved fertile ground for misinformation.
  • “The Moderation War Is Coming to Spotify, Substack, and Clubhouse” By Alex Kantrowitz — OneZero. The same issues with objectionable and abusive content plaguing Twitter, Facebook, YouTube and others will almost certainly become an issue for the newer platforms, and in fact already are.
  • “Mexican president mounts campaign against social media bans” By Mark Stevenson — The Associated Press. Mexico’s leftist President Andrés Manuel López Obrador is vowing to lead international efforts to stop social media companies from censoring what he considers free speech. Whether this materializes into something substantial is not clear.
  • “As Trump Clashes With Big Tech, China’s Censored Internet Takes His Side” By Li Yuan — The New York Times. The government in Beijing is framing social media platforms’ ban of former President Donald Trump after the attempted insurrection as proof there is no untrammeled freedom of speech. This position helps bolster the oppressive policing of online content the People’s Republic of China (PRC) wages against its citizens. Quite separately, many Chinese people (or what appear to be actual people) are questioning what is often deemed the censoring of Trump in the United States (U.S.), a nation ostensibly committed to free speech. There is also widespread misunderstanding about the First Amendment rights of social media platforms not to host content with which they disagree, and about the power of platforms to make such determinations without fear that the U.S. government will punish them, as is often the case in the PRC.
  • “Trump admin slams China’s Huawei, halting shipments from Intel, others – sources” By Karen Freifeld and Alexandra Alper — Reuters. On its way out of the proverbial door, the Trump Administration delivered parting shots to Huawei and the People’s Republic of China by revoking one license and denying others to sell the PRC tech giant semiconductors. The companies, including Intel, could appeal. Additionally, there are an estimated $400 million worth of applications for similar licenses pending at the Department of Commerce that are now the domain of the new regime in Washington. It is too early to discern whether the Biden Administration will maintain, modify, or reverse these actions.
  • “Behind a Secret Deal Between Google and Facebook” By Daisuke Wakabayashi and Tiffany Hsu — The New York Times. The newspaper got its hands on an unredacted copy of the antitrust suit Texas Attorney General Ken Paxton and other attorneys general filed against Google, and it has details on the deal Facebook and Google allegedly struck to divide the online advertising world. Not only did Facebook ditch an effort launched by publishers to defeat Google’s overwhelming advantages in online advertising bidding, it joined Google’s rival effort with a guarantee that it would win a specified number of bids and more time to bid on ads. Google and Facebook naturally deny any wrongdoing.
  • “Biden and Trump Voters Were Exposed to Radically Different Coverage of the Capitol Riot on Facebook” By Colin Lecher and Jon Keegan — The Markup. Using a browser tool the organization pays Facebook users to install, The Markup can track the material that appears in their feeds. Facebook’s algorithm fed people material about the 6 January 2021 attempted insurrection based on their political views. Many have pointed out that this very dynamic creates filter bubbles that poison democracy and public discourse.
  • “Banning Trump won’t fix social media: 10 ideas to rebuild our broken internet – by experts” By Julia Carrie Wong — The Guardian. There are some fascinating proposals in this piece that could help address the problems of social media.
  • “Misinformation dropped dramatically the week after Twitter banned Trump and some allies” By Elizabeth Dwoskin and Craig Timberg — The Washington Post. Research showed that lies, misinformation, and disinformation about election fraud dropped by three-quarters after former President Donald Trump was banned from Twitter and other platforms. Other research showed that a small group of conservatives were responsible for up to 20% of misinformation on this and other conspiracies.
  • “This Was WhatsApp’s Plan All Along” By Shoshana Wodinsky — Gizmodo. This piece does a great job of breaking down into plain English the proposed changes to terms of service on WhatsApp that so enraged users that competitors Signal and Telegram have seen record-breaking downloads. Basically, it is all about reaping advertising dollars for Facebook through businesses and third-party partners using user data from business-related communications. Incidentally, WhatsApp has delayed the changes until March because of the pushback.
  • “Brussels eclipsed as EU countries roll out their own tech rules” By Laura Kayali and Mark Scott — Politico EU. The European Union (EU) had a hard enough task in trying to reach final language on a Digital Services Act and Digital Markets Act without nations like France, Germany, and Poland picking and choosing text from draft bills and enacting it into law. Brussels is not happy with this trend.

Other Developments

  • Federal Trade Commission (FTC) Chair Joseph J. Simons announced his resignation from the FTC effective on 29 January 2021 in keeping with tradition and past practice. This resignation clears the way for President Joe Biden to name the chair of the FTC, and along with FTC Commissioner Rohit Chopra’s nomination to head the Consumer Financial Protection Bureau (CFPB), the incoming President will get to nominate two Democratic FTC Commissioners, tipping the political balance of the FTC and likely ushering in a period of more regulation of the technology sector.
    • Simons also announced the resignation of senior staff: General Counsel Alden F. Abbott; Bureau of Competition Director Ian Conner; Bureau of Competition Deputy Directors Gail Levine and Daniel Francis; Bureau of Consumer Protection Director Andrew Smith; Bureau of Economics Director Andrew Sweeting; Office of Public Affairs Director Cathy MacFarlane; and Office of Policy Planning Director Bilal Sayyed.
  • In a speech last week before he was sworn in, President Joe Biden announced his $1.9 trillion American Rescue Plan, and according to a summary, Biden will ask Congress to provide $10 billion for a handful of government-facing programs to improve technology. Notably, Biden “is calling on Congress to launch the most ambitious effort ever to modernize and secure federal IT and networks.” Biden is proposing to dramatically increase funding for a fund that would allow agencies to borrow and then pay back funds to update their technology. Moreover, Biden is looking to push more money to a program to aid officials at agencies who oversee technology development and procurement.
    • Biden stated “[t]o remediate the SolarWinds breach and boost U.S. defenses, including of the COVID-19 vaccine process, President-elect Biden is calling on Congress to:
      • Expand and improve the Technology Modernization Fund. A $9 billion investment will help the U.S. launch major new IT and cybersecurity shared services at the Cybersecurity and Infrastructure Security Agency (CISA) and the General Services Administration and complete modernization projects at federal agencies. In addition, the president-elect is calling on Congress to change the fund’s reimbursement structure in order to fund more innovative and impactful projects.
      • Surge cybersecurity technology and engineering expert hiring. Providing the Information Technology Oversight and Reform fund with $200 million will allow for the rapid hiring of hundreds of experts to support the federal Chief Information Security Officer and U.S. Digital Service.
      • Build shared, secure services to drive transformational projects. Investing $300 million in no-year funding for Technology Transformation Services in the General Services Administration will drive secure IT projects forward without the need for reimbursement from agencies.
      • Improve security monitoring and incident response activities. An additional $690 million for CISA will bolster cybersecurity across federal civilian networks and support the piloting of new shared security and cloud computing services.
  • The United States (U.S.) Department of Commerce issued an interim final rule pursuant to an executive order (EO) issued by former President Donald Trump to secure the U.S. information and communications supply chain. This rule will undoubtedly be reviewed by the Biden Administration and may be withdrawn or modified depending on the fate of the EO on which the rule relies.
    • In the interim final rule, Commerce explained:
      • These regulations create the processes and procedures that the Secretary of Commerce will use to identify, assess, and address certain transactions, including classes of transactions, between U.S. persons and foreign persons that involve information and communications technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary; and pose an undue or unacceptable risk. While this interim final rule will become effective on March 22, 2021, the Department of Commerce continues to welcome public input and is thus seeking additional public comment. Once any additional comments have been evaluated, the Department is committed to issuing a final rule.
      • On November 27, 2019, the Department of Commerce (Department) published a proposed rule to implement the terms of the Executive Order. (84 FR 65316). The proposed rule set forth processes for (1) how the Secretary would evaluate and assess transactions involving ICTS to determine whether they pose an undue risk of sabotage to or subversion of the ICTS supply chain, or an unacceptable risk to the national security of the United States or the security and safety of U.S. persons; (2) how the Secretary would notify parties to transactions under review of the Secretary’s decision regarding the ICTS Transaction, including whether the Secretary would prohibit or mitigate the transaction; and (3) how parties to transactions reviewed by the Secretary could comment on the Secretary’s preliminary decisions. The proposed rule also provided that the Secretary could act without complying with the proposed procedures where required by national security. Finally, the Secretary would establish penalties for violations of mitigation agreements, the regulations, or the Executive Order.
      • In addition to seeking general public comment, the Department requested comments from the public on five specific questions: (1) Whether the Secretary should consider categorical exclusions or whether there are classes of persons whose use of ICTS cannot violate the Executive Order; (2) whether there are categories of uses or of risks that are always capable of being reliably and adequately mitigated; (3) how the Secretary should monitor and enforce any mitigation agreements applied to a transaction; (4) how the terms, “transaction,” “dealing in,” and “use of” should be clarified in the rule; and (5) whether the Department should add record-keeping requirements for information related to transactions.
      • The list of “foreign adversaries” consists of the following foreign governments and non-government persons: The People’s Republic of China, including the Hong Kong Special Administrative Region (China); the Republic of Cuba (Cuba); the Islamic Republic of Iran (Iran); the Democratic People’s Republic of Korea (North Korea); the Russian Federation (Russia); and Venezuelan politician Nicolás Maduro (Maduro Regime).
  • The Federal Trade Commission (FTC) adjusted its penalty amounts for inflation, including a boost to the per-violation penalty that virtually all the privacy bills introduced in the last Congress would allow the agency to wield against first-time violators. The penalty for certain unfair and deceptive acts or practices was increased from $43,280 to $43,792.
  • The United States (U.S.) Department of State stood up its new Bureau of Cyberspace Security and Emerging Technologies (CSET), as it has long planned. At the beginning of the Trump Administration, the Department of State dismantled the Cyber Coordinator Office and gave its cybersecurity portfolio to the Bureau of Economic and Business Affairs, which displeased Congressional stakeholders. In 2019, the department notified Congress of its plan to establish CSET. The department asserted:
    • The need to reorganize and resource America’s cyberspace and emerging technology security diplomacy through the creation of CSET is critical, as the challenges to U.S. national security presented by China, Russia, Iran, North Korea, and other cyber and emerging technology competitors and adversaries have only increased since the Department notified Congress in June 2019 of its intent to create CSET.
    • The CSET bureau will lead U.S. government diplomatic efforts on a wide range of international cyberspace security and emerging technology policy issues that affect U.S. foreign policy and national security, including securing cyberspace and critical technologies, reducing the likelihood of cyber conflict, and prevailing in strategic cyber competition.  The Secretary’s decision to establish CSET will permit the Department to posture itself appropriately and engage as effectively as possible with partners and allies on these pressing national security concerns.
    • The Congressional Members of the Cyberspace Solarium Commission made clear their disapproval of the decision. Senators Angus King (I-ME) and Ben Sasse (R-NE) and Representatives Mike Gallagher (R-WI) and Jim Langevin (D-RI) said:
      • In our report, we emphasize the need for a greater emphasis on international cyber policy at State. However, unlike the bipartisan Cyber Diplomacy Act, the State Department’s proposed Bureau will reinforce existing silos and […] hinder the development of a holistic strategy to promote cyberspace stability on the international stage. We urge President-elect Biden to pause this reorganization when he takes office in two weeks and work with Congress to enact meaningful reform to protect our country in cyberspace.
  • The Australian Cyber Security Centre (ACSC) released the Risk Identification Guidance, “developed to assist organisations in identifying risks associated with their use of suppliers, manufacturers, distributors and retailers (i.e. businesses that constitute their cyber supply chain),” and the Risk Management Guidance, because “[c]yber supply chain risk management can be achieved by identifying the cyber supply chain, understanding cyber supply chain risk, setting cyber security expectations, auditing for compliance, and monitoring and improving cyber supply chain security practices.”
  • The United Kingdom’s Surveillance Camera Commissioner (SCC) issued “best practice guidance, ‘Facing the Camera’, to all police forces in England and Wales.” The SCC explained that “[t]he provisions of this document only apply to the use of facial recognition technology and the inherent processing of images by the police where such use is integral to a surveillance camera system being operated in ‘live time’ or ‘near real time’ operational scenarios.” Last summer, a British appeals court overturned a lower court decision that had found a police force’s use of facial recognition technology in a pilot program using live footage to be legal. The appeals court found the South Wales Police Force’s use of this technology violated “the right to respect for private life under Article 8 of the European Convention on Human Rights, data protection legislation, and the Public Sector Equality Duty (“PSED”) under section 149 of the Equality Act 2010.” The SCC stated:
    • The SCC considers surveillance to be an intrusive investigatory power where it is conducted by the police which impacts upon those fundamental rights and freedoms of people, as set out by the European Convention of Human Rights (ECHR) and the Human Rights Act 1998. In the context of surveillance camera systems which make use of facial recognition technology, the extent of state intrusion in such matters is significantly increased by the capabilities of algorithms which are in essence, integral to the surveillance conduct seeking to harvest information, private information, metadata, data, personal data, intelligence and evidence. Each of the aforementioned are bound by laws and rules which ought to be separately and jointly considered and applied in a manner which is demonstrably lawful and ethical and engenders public trust and confidence.
    • Whenever the police seek to use technology in pursuit of a legitimate aim, the key question arises as to whether the degree of intrusion which is caused to the fundamental freedoms of citizens by the police surveillance conduct using surveillance algorithms (biometric or otherwise) is necessary in a democratic society when considered alongside the legality and proportionality of their endeavours and intent. The type of equipment/technology/modality which they choose to use to that end (e.g. LFR, ANPR, thermal imaging, gait analysis, movement sensors etc), the manner in which such technological means are deployed, (such as using static cameras at various locations, used with body worn cameras or other mobile means), and whether such technology is used overtly alongside or networked with other surveillance technologies, are all factors which may significantly influence the depth of intrusion caused by police conduct upon citizen’s rights.
  • The Senate confirmed the nomination of Avril Haines to be the new Director of National Intelligence by an 89-10 vote after Senator Tom Cotton (R-AR) removed his hold on her nomination. However, Senator Josh Hawley (R-MO) placed a hold on the nomination of Alejandro Mayorkas to be the next Secretary of Homeland Security and explained his action this way:
    • On Day 1 of his administration, President-elect Biden has said he plans to unveil an amnesty plan for 11 million immigrants in this nation illegally. This comes at a time when millions of American citizens remain out of work and a new migrant caravan has been attempting to reach the United States. Mr. Mayorkas has not adequately explained how he will enforce federal law and secure the southern border given President-elect Biden’s promise to roll back major enforcement and security measures. Just today, he declined to say he would enforce the laws Congress has already passed to secure the border wall system. Given this, I cannot consent to skip the standard vetting process and fast-track this nomination when so many questions remain unanswered.
  • Former Trump White House Cyber Coordinator Rob Joyce will replace Anne Neuberger as the National Security Agency’s (NSA) Director of Cybersecurity; Neuberger has been named the Biden White House’s Deputy National Security Advisor for Cyber and Emerging Technology. Her portfolio at the NSA included “lead[ing] NSA’s cybersecurity mission, including emerging technology areas like quantum-resistant cryptography,” and presumably Joyce will assume the same responsibilities. Joyce was pushed out when former National Security Advisor John Bolton restructured the NSC in 2018, a shakeup that also claimed his boss, former Homeland Security Advisor Tom Bossert. At the National Security Council, Neuberger will work to coordinate cybersecurity and emerging technology policy across agencies and funnel policy options up to the full NSC and ultimately the President, work that will include Joyce.
  • The Supreme Court of the United States (SCOTUS) heard oral arguments on whether the Federal Trade Commission (FTC) Act gives the agency the power to seek monetary damages and restitution alongside permanent injunctions under Section 13(b). In AMG Capital Management, LLC v. FTC, the parties opposing the FTC argue the plain language of the statute does not allow for the seeking of restitution and monetary damages under this specific section of the FTC Act, while the agency argues long-accepted past practice and Congressional intent do, in fact, allow this relief to be sought when the FTC is seeking to punish violators of Section 5. The FTC is pursuing a separate track to get a fix from Congress, which could rewrite the FTC Act to make clear this sort of relief is legal. However, some stakeholders in the debate over privacy legislation may be using the case as leverage.
    • In October 2020, the FTC wrote the House and Senate committees with jurisdiction over the agency, asking for language to resolve the litigation over the power to seek and obtain restitution for victims of those who have violated Section 5 of the FTC Act and disgorgement of ill-gotten gains. The FTC is also asking that Congress clarify that the agency may act against violators even if their conduct has stopped, as it has done for more than four decades. Two federal appeals courts have ruled in ways that have limited the FTC’s long-used powers, and now the Supreme Court of the United States is set to rule on these issues sometime in 2021. The FTC is claiming, however, that defendants are playing for time in the hopes that the FTC’s authority to seek and receive monetary penalties will ultimately be limited by the United States (U.S.) highest court. Judging by language tucked into a privacy bill introduced by the former chair of one of the committees, Congress may be willing to act soon.
    • The FTC asked the House Energy and Commerce and Senate Commerce, Science, and Transportation Committees “to take quick action to amend Section 13(b) [of the FTC Act i.e. 15 U.S.C. § 53(b)] to make clear that the Commission can bring actions in federal court under Section 13(b) even if conduct is no longer ongoing or impending when the suit is filed and can obtain monetary relief, including restitution and disgorgement, if successful.” The agency asserted “[w]ithout congressional action, the Commission’s ability to use Section 13(b) to provide refunds to consumer victims and to enjoin illegal activity is severely threatened.” All five FTC Commissioners signed the letter.
    • The FTC explained that adverse rulings by two federal appeals courts are constraining the agency from seeking relief for victims and punishment for violators of the FTC Act in federal courts below those two specific courts, but elsewhere defendants are either asking courts for a similar ruling or using delaying tactics in the hopes the Supreme Court upholds the two federal appeals courts:
      • …[C]ourts of appeals in the Third and Seventh Circuits have recently ruled that the agency cannot obtain any monetary relief under Section 13(b). Although review in the Supreme Court is pending, these lower court decisions are already inhibiting our ability to obtain monetary relief under 13(b). Not only do these decisions already prevent us from obtaining redress for consumers in the circuits where they issued, prospective defendants are routinely invoking them in refusing to settle cases with agreed-upon redress payments.
      • Moreover, defendants in our law enforcement actions pending in other circuits are seeking to expand the rulings to those circuits and taking steps to delay litigation in anticipation of a potential Supreme Court ruling that would allow them to escape liability for any monetary relief caused by their unlawful conduct. This is a significant impediment to the agency’s effectiveness, its ability to provide redress to consumer victims, and its ability to prevent entities who violate the law from profiting from their wrongdoing.
  • The United Kingdom’s Information Commissioner’s Office (ICO) issued guidance for British entities that may be affected by the massive SolarWinds hack that has compromised many key systems in the United States. The ICO advised:
    • Organisations should immediately check whether they are using a version of the software that has been compromised. These are versions 2019.4 HF 5, 2020.2 with no hotfix installed, and 2020.2 HF 1.
    • Organisations must also determine if the personal data they hold has been affected by the cyber-attack. If a reportable personal data breach is found, UK data controllers are required to inform the ICO within 72 hours of discovering the breach. Reports can be submitted online or organisations can call the ICO’s personal data breach helpline for advice on 0303 123 1113, option 2.
    • Organisations subject to the NIS Regulation will also need to determine if this incident has led to a “substantial impact on the provision” of their digital services and report to the ICO.
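    • The ICO’s first step, checking an installed SolarWinds Orion deployment against the compromised release versions named in the guidance, can be sketched as a simple version-matching check. This is an illustrative sketch only; the function name and input format are hypothetical, and the version strings come from the ICO guidance above.

```python
# Compromised SolarWinds Orion releases per the ICO guidance:
# 2019.4 HF 5, 2020.2 (no hotfix installed), and 2020.2 HF 1.
COMPROMISED_VERSIONS = {
    "2019.4 HF 5",
    "2020.2",       # i.e. 2020.2 with no hotfix installed
    "2020.2 HF 1",
}

def is_compromised(version: str) -> bool:
    """Return True if a reported Orion version string matches a
    compromised release named in the ICO guidance."""
    return version.strip() in COMPROMISED_VERSIONS
```

A match would trigger the next steps in the guidance: assessing whether personal data was affected and, if a reportable breach is found, notifying the ICO within 72 hours.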
  • Europol announced the takedown of “the world’s largest illegal marketplace on the dark web” in an operation coordinated by the following nations: “Germany, Australia, Denmark, Moldova, Ukraine, the United Kingdom (the National Crime Agency), and the USA (DEA, FBI, and IRS).” Europol added:
    • The Central Criminal Investigation Department in the German city of Oldenburg arrested an Australian citizen who is the alleged operator of DarkMarket near the German-Danish border over the weekend. The investigation, which was led by the cybercrime unit of the Koblenz Public Prosecutor’s Office, allowed officers to locate and close the marketplace, switch off the servers and seize the criminal infrastructure – more than 20 servers in Moldova and Ukraine supported by the German Federal Criminal Police office (BKA). The stored data will give investigators new leads to further investigate moderators, sellers, and buyers. 
  • The Enforcement Bureau (Bureau) of the Federal Communications Commission (FCC) issued an enforcement advisory intended to remind people that use of amateur and personal radios to commit crimes is itself a criminal offense that could warrant prosecution. The notice was issued because the FCC is claiming it is aware of discussion by some of how these means of communications may be superior to social media, which has been cracking down on extremist material since the attempted insurrection at the United States Capitol on 6 January. The Bureau stated:
    • The Bureau has become aware of discussions on social media platforms suggesting that certain radio services regulated by the Commission may be an alternative to social media platforms for groups to communicate and coordinate future activities.  The Bureau recognizes that these services can be used for a wide range of permitted purposes, including speech that is protected under the First Amendment of the U.S. Constitution.  Amateur and Personal Radio Services, however, may not be used to commit or facilitate crimes. 
    • Specifically, the Bureau reminds amateur licensees that they are prohibited from transmitting “communications intended to facilitate a criminal act” or “messages encoded for the purpose of obscuring their meaning.” Likewise, individuals operating radios in the Personal Radio Services, a category that includes Citizens Band radios, Family Radio Service walkie-talkies, and General Mobile Radio Service, are prohibited from using those radios “in connection with any activity which is against Federal, State or local law.” Individuals using radios in the Amateur or Personal Radio Services in this manner may be subject to severe penalties, including significant fines, seizure of the offending equipment, and, in some cases, criminal prosecution.
  • The European Data Protection Board (EDPB) issued its “Strategy for 2021-2023” in order “[t]o be effective in confronting the main challenges ahead.” The EDPB cautioned:
    • This Strategy does not provide an exhaustive overview of the work of the EDPB in the years to come. Rather it sets out the four main pillars of our strategic objectives, as well as a set of key actions to help achieve those objectives. The EDPB will implement this Strategy within its Work Program, and will report on the progress achieved in relation to each Pillar as part of its annual reports.
    • The EDPB listed and explained the four pillars of its strategy:
      • PILLAR 1: ADVANCING HARMONISATION AND FACILITATING COMPLIANCE. The EDPB will continue to strive for a maximum degree of consistency in the application of data protection rules and limit fragmentation among Member States. In addition to providing practical, easily understandable and accessible guidance, the EDPB will develop and promote tools that help to implement data protection into practice, taking into account practical experiences of different stakeholders on the ground.
      • PILLAR 2: SUPPORTING EFFECTIVE ENFORCEMENT AND EFFICIENT COOPERATION BETWEEN NATIONAL SUPERVISORY AUTHORITIES. The EDPB is fully committed to support cooperation between all national supervisory authorities that work together to enforce European data protection law. We will streamline internal processes, combine expertise and promote enhanced coordination. We intend not only to ensure a more efficient functioning of the cooperation and consistency mechanisms, but also to strive for the development of a genuine EU-wide enforcement culture among supervisory authorities.
      • PILLAR 3: A FUNDAMENTAL RIGHTS APPROACH TO NEW TECHNOLOGIES. The protection of personal data helps to ensure that technology, new business models and society develop in accordance with our values, such as human dignity, autonomy and liberty. The EDPB will continuously monitor new and emerging technologies and their potential impact on the fundamental rights and daily lives of individuals. Data protection should work for all people, particularly in the face of processing activities presenting the greatest risks to individuals’ rights and freedoms (e.g. to prevent discrimination). We will help to shape Europe’s digital future in line with our common values and rules. We will continue to work with other regulators and policymakers to promote regulatory coherence and enhanced protection for individuals.
      • PILLAR 4: THE GLOBAL DIMENSION. The EDPB is determined to set and promote high EU and global standards for international data transfers to third countries in the private and the public sector, including in the law enforcement sector. We will reinforce our engagement with the international community to promote EU data protection as a global model and to ensure effective protection of personal data beyond EU borders.
  • The United Kingdom’s (UK) Information Commissioner’s Office (ICO) revealed that all but one of the videoconferencing platforms that it and other data protection authorities (DPA) contacted have responded to their July 2020 letter urging the companies to “adopt principles to guide them in addressing some key privacy risks.” The ICO explained:
    • Microsoft, Cisco, Zoom and Google replied to the open letter. The joint signatories thank these companies for engaging on this important matter and for acknowledging and responding to the concerns raised. In their responses the companies highlighted various privacy and security best practices, measures, and tools that they advise are implemented or built-in to their video teleconferencing services.
    • The information provided by these companies is encouraging. It is a constructive foundation for further discussion on elements of the responses that the joint signatories feel would benefit from more clarity and additional supporting information.
    • The ICO added:
      • The joint signatories have not received a response to the open letter from Houseparty. They strongly encourage Houseparty to engage with them and respond to the open letter to address the concerns raised.
  • The European Union Agency for Cybersecurity (ENISA) “launched a public consultation, which runs until 7 February 2021, on its draft of the candidate European Union Cybersecurity Certification Scheme on Cloud Services (EUCS)…[that] aims to further improve the Union’s internal market conditions for cloud services by enhancing and streamlining the services’ cybersecurity guarantees.” ENISA stated:
    • There are challenges to the certification of cloud services, such as a diverse set of market players, complex systems and a constantly evolving landscape of cloud services, as well as the existence of different schemes in Member States. The draft EUCS candidate scheme tackles these challenges by calling for cybersecurity best practices across three levels of assurance and by allowing for a transition from current national schemes in the EU. The draft EUCS candidate scheme is a horizontal and technological scheme that intends to provide cybersecurity assurance throughout the cloud supply chain, and form a sound basis for sectoral schemes.
    • More specifically, the draft EUCS candidate scheme:
      • Is a voluntary scheme;
      • The scheme’s certificates will be applicable across the EU Member States;
      • Is applicable for all kinds of cloud services – from infrastructure to applications;
      • Boosts trust in cloud services by defining a reference set of security requirements;
      • Covers three assurance levels: ‘Basic’, ‘Substantial’ and ‘High’;
      • Proposes a new approach inspired by existing national schemes and international standards;
      • Defines a transition path from national schemes in the EU;
      • Grants a three-year certification that can be renewed;
      • Includes transparency requirements such as the location of data processing and storage.

Coming Events

  • The Commerce, Science, and Transportation Committee will hold a hearing on the nomination of Gina Raimondo to be the Secretary of Commerce on 26 January.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Peggy und Marco Lachmann-Anke from Pixabay

Further Reading, Other Developments, and Coming Events (13 and 14 January 2021)

Further Reading

  • “YouTube Suspends Trump’s Channel for at Least Seven Days” By Daisuke Wakabayashi — The New York Times. Even Google is wading further into the water. Its YouTube platform flagged a video of President Donald Trump’s for inciting violence, and, citing the “ongoing potential for violence,” suspended his channel: Trump and his team will not be able to upload videos for seven days, and the comments section will be permanently disabled. YouTube has been the least inclined of the major platforms to moderate content and has somehow escaped the scrutiny and opprobrium Facebook and Twitter have faced even though those platforms have been more active in policing offensive content.
  • “Online misinformation that led to Capitol siege is ‘radicalization,’ say researchers” By Elizabeth Culliford — Reuters. Experts in online disinformation are saying that the different conspiracy movements that impelled followers to attack the United States (U.S.) Capitol are the result of radicalization. Online activities translated into real-world violence, they say. They also decried the reactive posture of the social media platforms, which waited for an insurrection before taking steps experts and others had been begging them to take.
  • “Uganda orders all social media to be blocked – letter” — Reuters. In response to Facebook blocking a number of government-related accounts for “Coordinated Inauthentic Behaviour” (CIB), the Ugandan government has blocked all access to social media ahead of its elections. In a letter seen by Reuters, the Uganda Communications Commission directed telecommunications providers “to immediately suspend any access and use, direct or otherwise, of all social media platforms and online messaging applications over your network until further notice.” This may become standard practice for many regimes around the world if social media companies crack down on government propaganda.
  • “BlackBerry sells 90 patents to Huawei, covering key smartphone technology advances” By Sean Silcoff — The Globe and Mail. Critics of a deal to assign 90 key BlackBerry patents to Huawei are calling on the government of Prime Minister Justin Trudeau to be more involved in protecting Canadian intellectual property and innovations.
  • “‘Threat to democracy is real’: MPs call for social media code of conduct” By David Crowe and Nick Bonyhady — The Sydney Morning Herald. There have been mixed responses in Australia’s Parliament to social media platforms banning President Donald Trump after his role in inciting the violence at the United States (U.S.) Capitol. Many agree with the platforms, some disagree strenuously in light of other inflammatory content that is not taken down, and many want greater rationality and transparency in how platforms make these decisions. And since Canberra has been among the most active governments in regulating technology, the debate may inform the drafting of its “Online Safety Bill,” which may place legal obligations on social media platforms.
  • “Poland plans to make censoring of social media accounts illegal” By Shaun Walker — The Guardian. Governments around the world continue to respond to a number of social media companies deciding to deplatform United States (U.S.) President Donald Trump. In Warsaw there is a draft bill that would make deplatforming a person illegal unless the offense is also contrary to Polish law. The spin is that the right-wing regime in Warsaw is less interested in protecting free speech and more interested in propagating the same grievances as the right wing in the United States. Therefore, this push in Poland may be more about messaging and trying to cow social media companies than about protecting free speech, especially speech with which the government disagrees (e.g., advocates for LGBTQI rights have been silenced in Poland).
  • “Facebook, Twitter could face punishing regulation for their role in U.S. Capitol riot, Democrats say” By Tony Romm — The Washington Post. Democrats were already furious with social media companies for what they considered their lacking governance of content that clearly violated terms of service and policies. These companies are bracing for an expected barrage of hearings and legislation with the Democrats controlling the White House, House, and Senate.
  • “Georgia results sweep away tech’s regulatory logjam” By Margaret Harding McGill and Ashley Gold — Axios. This is a nice survey of possible policy priorities at the agencies and in the Congress over the next two years with the Democrats in control of both.
  • “The Capitol rioters put themselves all over social media. Now they’re getting arrested.” By Sara Morrison — Recode. Will the attack on the United States (U.S.) Capitol be the first time a major crime is solved by evidence largely provided by the accused? It is sure looking that way as law enforcement continues to use the posts of the rioters to apprehend, arrest, and charge them. Additionally, in the same way people who acted in racist and entitled ways (e.g. Amy Cooper in Central Park threatening an African American gentleman with calling the police even though he had asked her to put her dog on a leash) were caught through crowd-sourced identification pushes, rioters are also being identified.
  • “CISA: SolarWinds Hackers Got Into Networks by Guessing Passwords” By Mariam Baksh — Nextgov. The Cybersecurity and Infrastructure Security Agency (CISA) has updated its alert on the SolarWinds hack to reflect its findings. CISA explained:
    • CISA incident response investigations have identified that initial access in some cases was obtained by password guessing [T1110.001], password spraying [T1110.003], and inappropriately secured administrative credentials [T1078] accessible via external remote access services [T1133]. Initial access root cause analysis is still ongoing in a number of response activities and CISA will update this section as additional initial vectors are identified.
  • “A Facial Recognition Company Says That Viral Washington Times “Antifa” Story Is False” By Craig Silverman — BuzzFeed News. XRVision denied the Washington Times’ account that the company had identified antifa protestors among the rioters at the United States (U.S.) Capitol (archived here). The company said it had identified two Neo-Nazis and a QAnon adherent. Even though the story was retracted and a corrected version issued, some, such as Trump supporter Representative Matt Gaetz (R-FL), still claimed the original story had merit.

Other Developments

  • The United States (U.S.) Trade Representative (USTR) announced that it would not act on the basis of three completed reports on Digital Services Taxes (DST) that three nations have put in place and that it would not proceed with tariffs in retaliation against France, one of the first nations in the world to enact a DST. Last year, the Organization for Economic Co-operation and Development convened multilateral talks to resolve differences on how a global digital services tax would ideally function, with most of the nations involved arguing for a 2% tax assessed in the nation where the transaction occurs as opposed to where the company is headquartered. European Union (EU) officials claimed an agreement was possible, but the U.S. negotiators walked away from the table. It will fall to the Biden Administration to act on these USTR DST investigations, should it choose to do so.
    • In its press release, the USTR stated it would “suspend the tariff action in the Section 301 investigation of France’s Digital Services Tax (DST).”
      • The USTR added:
        • The additional tariffs on certain products of France were announced in July 2020, and were scheduled to go into effect on January 6, 2021.  The U.S. Trade Representative has decided to suspend the tariffs in light of the ongoing investigation of similar DSTs adopted or under consideration in ten other jurisdictions.  Those investigations have significantly progressed, but have not yet reached a determination on possible trade actions.  A suspension of the tariff action in the France DST investigation will promote a coordinated response in all of the ongoing DST investigations.
      • In its December 2019 report, the USTR determined “that France’s DST is unreasonable or discriminatory and burdens or restricts U.S. commerce, and therefore is actionable under sections 301(b) and 304(a) of the Trade Act (19 U.S.C. 2411(b) and 2414(a))” and proposed a range of measures in retaliation.
    • The USTR also “issued findings in Section 301 investigations of Digital Service Taxes (DSTs) adopted by India, Italy, and Turkey, concluding that each of the DSTs discriminates against U.S. companies, is inconsistent with prevailing principles of international taxation, and burden or restricts U.S. commerce.” The USTR stated it “is not taking any specific actions in connection with the findings at this time but will continue to evaluate all available options.” The USTR added:
      • The Section 301 investigations of the DSTs adopted by India, Italy, and Turkey were initiated in June 2020, along with investigations of DSTs adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, Indonesia, Spain, and the United Kingdom.  USTR expects to announce the progress or completion of additional DST investigations in the near future. 
  • The United Kingdom’s Competition and Markets Authority (CMA) has started investigating Google’s ‘Privacy Sandbox’ project to “assess whether the proposals could cause advertising spend to become even more concentrated on Google’s ecosystem at the expense of its competitors.” The CMA asserted:
    • Third party cookies currently play a fundamental role online and in digital advertising. They help businesses target advertising effectively and fund free online content for consumers, such as newspapers. But there have also been concerns about their legality and use from a privacy perspective, as they allow consumers’ behaviour to be tracked across the web in ways that many consumers may feel uncomfortable with and may find difficult to understand.
    • Google’s announced changes – known collectively as the ‘Privacy Sandbox’ project – would disable third party cookies on the Chrome browser and Chromium browser engine and replace them with a new set of tools for targeting advertising and other functionality that they say will protect consumers’ privacy to a greater extent. The project is already under way, but Google’s final proposals have not yet been decided or implemented. In its recent market study into online platforms and digital advertising, the CMA highlighted a number of concerns about their potential impact, including that they could undermine the ability of publishers to generate revenue and undermine competition in digital advertising, entrenching Google’s market power.
  • Facebook took down coordinated inauthentic behavior (CIB) networks originating from France and Russia that allegedly sought to influence nations in Africa and the Middle East. Facebook asserted:
    • Each of the networks we removed today targeted people outside of their country of origin, primarily targeting Africa, and also some countries in the Middle East. We found all three of them as a result of our proactive internal investigations and worked with external researchers to assess the full scope of these activities across the internet.
    • While we’ve seen influence operations target the same regions in the past, this was the first time our team found two campaigns — from France and Russia — actively engage with one another, including by befriending, commenting and criticizing the opposing side for being fake. It appears that this Russian network was an attempt to rebuild their operations after our October 2019 takedown, which also coincided with a notable shift in focus of the French campaign to begin to post about Russia’s manipulation campaigns in Africa.
    • Unlike the operation from France, both Russia-linked networks relied on local nationals in the countries they targeted to generate content and manage their activity across internet services. This is consistent with cases we exposed in the past, including in Ghana and the US, where we saw the Russian campaigns co-opt authentic voices to join their influence operations, likely to avoid detection and help appear more authentic. Despite these efforts, our investigation identified some links between these two Russian campaigns and also with our past enforcements.
  • Two of the top Democrats on the House Energy and Commerce Committee, along with another Democrat, wrote nine internet service providers (ISP) “questioning their commitment to consumers amid ISPs raising prices and imposing data caps during the COVID-19 pandemic.” Committee Chair Frank Pallone, Jr. (D-NJ), Communications and Technology Subcommittee Chairman Mike Doyle (D-PA), and Representative Jerry McNerney (D-CA) signed the letters.
    • Pallone, Doyle, and McNerney took issue with the companies raising prices and imposing data caps after having pledged not to do so at the behest of the Federal Communications Commission (FCC). They asked the companies to answer a series of questions:
      • Did the company participate in the FCC’s “Keep Americans Connected” pledge?
      • Has the company increased prices for fixed or mobile consumer internet and fixed or phone service since the start of the pandemic, or do they plan to raise prices on such plans within the next six months? 
      • Prior to March 2020, did any of the company’s service plans impose a maximum data consumption threshold on its subscribers?
      • Since March 2020, has the company modified or imposed any new maximum data consumption thresholds on service plans, or do they plan to do so within the next six months? 
      • Did the company stop disconnecting customers’ internet or telephone service due to their inability to pay during the pandemic? 
      • Does the company offer a plan designed for low-income households, or a plan established in March or later to help students and families with connectivity during the pandemic?
      • Beyond service offerings for low-income customers, what steps is the company currently taking to assist individuals and families facing financial hardship due to circumstances related to COVID-19? 
  • The United States (U.S.) Department of Homeland Security (DHS) issued a “Data Security Business Advisory: Risks and Considerations for Businesses Using Data Services and Equipment from Firms Linked to the People’s Republic of China,” that “describes the data-related risks American businesses face as a result of the actions of the People’s Republic of China (PRC) and outlines steps that businesses can take to mitigate these risks.” DHS generally recommended:
    • Businesses and individuals that operate in the PRC or with PRC firms or entities should scrutinize any business relationship that provides access to data—whether business confidential, trade secrets, customer personally identifiable information (PII), or other sensitive information. Businesses should identify the sensitive personal and proprietary information in their possession. To the extent possible, they should minimize the amount of at-risk data being stored and used in the PRC or in places accessible by PRC authorities. Robust due diligence and transaction monitoring are also critical for addressing potential legal exposure, reputation risks, and unfair advantage that data and intellectual property theft would provide competitors. Businesses should seek to acquire a thorough understanding of the ownership of data service providers, location of data infrastructure, and any tangential foreign business relationships and significant foreign investors.
  • The Federal Communications Commission (FCC) is asking for comments on the $3.2 billion Emergency Broadband Benefit Program established in the “Consolidated Appropriations Act, 2021” (H.R. 133). Comments are due by 16 February 2021. The FCC noted “eligible households may receive a discount off the cost of broadband service and certain connected devices during an emergency period relating to the COVID-19 pandemic, and participating providers can receive a reimbursement for such discounts.” The FCC explained the program in further detail:
    • Pursuant to the Consolidated Appropriations Act, the Emergency Broadband Benefit Program will use available funding from the Emergency Broadband Connectivity Fund to support participating providers’ provision of certain broadband services and connected devices to qualifying households.
    • To participate in the program, a provider must elect to participate and either be designated as an eligible telecommunications carrier or be approved by the Commission. Participating providers will make available to eligible households a monthly discount off the standard rate for an Internet service offering and associated equipment, up to $50.00 per month.
    • On Tribal lands, the monthly discount may be up to $75.00 per month. Participating providers will receive reimbursement from the Emergency Broadband Benefit Program for the discounts provided.
    • Participating providers that also supply an eligible household with a laptop, desktop computer, or tablet (connected device) for use during the emergency period may receive a single reimbursement of up to $100.00 for the connected device, if the charge to the eligible household for that device is more than $10.00 but less than $50.00.  An eligible household may receive only one supported device.  Providers must submit certain certifications to the Commission to receive reimbursement from the program, and the Commission is required to adopt audit requirements to ensure provider compliance and prevent waste, fraud, and abuse.
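    • The discount and reimbursement caps described above amount to simple arithmetic. The following is a minimal, illustrative Python sketch of those rules; the function names and the simplified eligibility handling are hypothetical, not FCC code:

```python
# Illustrative sketch only: the Emergency Broadband Benefit monthly discount
# and one-time connected-device reimbursement caps, as described in the FCC's
# summary. Function names and eligibility handling are hypothetical.

def monthly_discount(standard_rate: float, on_tribal_lands: bool = False) -> float:
    """Discount off the standard monthly rate, capped at $50.00
    (up to $75.00 per month on Tribal lands)."""
    cap = 75.00 if on_tribal_lands else 50.00
    return min(standard_rate, cap)

def device_reimbursement(charge_to_household: float) -> float:
    """Maximum one-time reimbursement ($100.00) for a connected device,
    available only if the household is charged more than $10.00 but less
    than $50.00 for the device; one supported device per household."""
    if 10.00 < charge_to_household < 50.00:
        return 100.00
    return 0.00
```

For example, a $60.00-per-month plan would yield a $50.00 discount (the full $60.00 on Tribal lands, where the cap is $75.00), and a tablet for which the household pays $25.00 could draw a reimbursement of up to $100.00.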
  • The Biden-Harris transition team named the National Security Agency’s (NSA) Director of Cybersecurity, Anne Neuberger, as the Biden White House’s Deputy National Security Advisor for Cyber and Emerging Technology. Neuberger’s portfolio at the NSA included “lead[ing] NSA’s cybersecurity mission, including emerging technology areas like quantum-resistant cryptography.” At the National Security Council, Neuberger will work to coordinate cybersecurity and emerging technology policy across agencies and funnel policy options up to the full NSC and ultimately the President. It is not clear how Neuberger’s portfolio will interact with the newly created National Cyber Director, a position that, thus far, has remained without a nominee.
    • The transition noted “[p]rior to this role, she led NSA’s Election Security effort and served as Assistant Deputy Director of NSA’s Operations Directorate, overseeing foreign intelligence and cybersecurity operations…[and] also previously served as NSA’s first Chief Risk Officer, as Director of NSA’s Commercial Solutions Center, as Director of the Enduring Security Framework cybersecurity public-private partnership, as the Navy’s Deputy Chief Management Officer, and as a White House Fellow.” The transition stated that “[p]rior to joining government service, Neuberger was Senior Vice President of Operations at American Stock Transfer & Trust Company (AST), where she directed technology and operations.”
  • The Federal Communications Commission (FCC) published a final rule in response to the United States (U.S.) Court of Appeals for the District of Columbia Circuit’s decision remanding three aspects of the FCC’s rollback of net neutrality, the “Restoring Internet Freedom Order.” The FCC explained that the final rule:
    • responds to a remand from the U.S. Court of Appeals for the D.C. Circuit directing the Commission to assess the effects of the Commission’s Restoring Internet Freedom Order on public safety, pole attachments, and the statutory basis for broadband internet access service’s inclusion in the universal service Lifeline program. This document also amends the Commission’s rules to remove broadband internet service from the list of services supported by the universal service Lifeline program, while preserving the Commission’s authority to fund broadband internet access service through the Lifeline program.
    • In 2014, the U.S. Court of Appeals for the District of Columbia struck down a 2010 FCC net neutrality order in Verizon v. FCC, but the court did suggest a path forward. The court held the FCC “reasonably interpreted section 706 to empower it to promulgate rules governing broadband providers’ treatment of Internet traffic, and its justification for the specific rules at issue here—that they will preserve and facilitate the “virtuous circle” of innovation that has driven the explosive growth of the Internet—is reasonable and supported by substantial evidence.” The court added that “even though the Commission has general authority to regulate in this arena, it may not impose requirements that contravene express statutory mandates…[and] [g]iven that the Commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers, the Communications Act expressly prohibits the Commission from nonetheless regulating them as such.” However, in 2016, the same court upheld the 2015 net neutrality regulations in U.S. Telecom Association v. FCC, and then, in 2019’s Mozilla v. FCC, upheld most of the Trump Administration FCC’s repeal of its earlier net neutrality rules.
    • However, the D.C. Circuit declined to accept the FCC’s attempt to preempt all contrary state laws and struck down this part of the FCC’s rulemaking. Consequently, states and local jurisdictions may now be free to enact regulations of internet services along the lines of the FCC’s now repealed Open Internet Order. The D.C. Circuit also sent the case back to the FCC for further consideration on three points.
    • In its request for comments on how to respond to the remand, the FCC summarized the three issues: public safety, pole attachments, and the Lifeline Program:
      • Public Safety.  First, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect public safety. In the Restoring Internet Freedom Order, the Commission predicted, for example, that permitting paid prioritization arrangements would “increase network innovation,” “lead[] to higher investment in broadband capacity as well as greater innovation on the edge provider side of the market,” and “likely . . . be used to deliver enhanced service for applications that need QoS [i.e., quality of service] guarantees.” Could the network improvements made possible by prioritization arrangements benefit public safety applications—for example, by enabling the more rapid, reliable transmission of public safety-related communications during emergencies? 
      • Pole Attachments.  Second, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect the regulation of pole attachments in states subject to federal regulation.  To what extent are ISPs’ pole attachments subject to Commission authority in non-reverse preemption states by virtue of the ISPs’ provision of cable or telecommunications services covered by section 224?  What impact would the inapplicability of section 224 to broadband-only providers have on their access to poles?  Have pole owners, following the Order, “increase[d] pole attachment rates or inhibit[ed] broadband providers from attaching equipment”?  How could we use metrics like increases or decreases in broadband deployment to measure the impact the Order has had on pole attachment practices?  Are there any other impacts on the regulation of pole attachments from the changes adopted in the Order?  Finally, how do any potential considerations about pole attachments bear on the Commission’s underlying decision to classify broadband as a Title I information service?
      • Lifeline Program.  Third, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect the Lifeline program.  In particular, we seek to refresh the record on the Commission’s authority to direct Lifeline support to eligible telecommunications carriers (ETCs) providing broadband service to qualifying low-income consumers.  In the 2017 Lifeline NPRM, the Commission proposed that it “has authority under Section 254(e) of the Act to provide Lifeline support to ETCs that provide broadband service over facilities-based broadband-capable networks that support voice service,” and that “[t]his legal authority does not depend on the regulatory classification of broadband Internet access service and, thus, ensures the Lifeline program has a role in closing the digital divide regardless of the regulatory classification of broadband service.”  How, if at all, does the Mozilla decision bear on that proposal, and should the Commission proceed to adopt it? 
  • The Federal Trade Commission (FTC) reached a settlement with a photo app company that allegedly did not tell users their photos would be subject to the company’s facial recognition technology. The FTC deemed this a deceptive business practice in violation of Section 5 of the FTC Act and negotiated a settlement the Commissioners approved in a 5-0 vote. The consent order includes interesting, perhaps even new language, requiring the company “to delete models and algorithms it developed by using the photos and videos uploaded by its users” according to the FTC’s press release.
    • In the complaint, the FTC asserted:
      • Since 2015, Everalbum has provided Ever, a photo storage and organization application, to consumers.
      • In February 2017, Everalbum launched its “Friends” feature, which operates on both the iOS and Android versions of the Ever app. The Friends feature uses face recognition to group users’ photos by faces of the people who appear in the photos. The user can choose to apply “tags” to identify by name (e.g., “Jane”) or alias (e.g., “Mom”) the individuals who appear in their photos. These tags are not available to other Ever users. When Everalbum launched the Friends feature, it enabled face recognition by default for all users of the Ever mobile app. At that time, Everalbum did not provide users of the Ever mobile app an option to turn off or disable the feature.
      • However, prior to April 2019, Ever mobile app users who were located anywhere other than Texas, Illinois, Washington, and the European Union did not need to, and indeed could not, take any affirmative action to “let[ Everalbum] know” that it should apply face recognition to the users’ photos. In fact, for those users, face recognition was enabled by default and the users lacked the ability to disable it. Thus, the article was misleading for Ever mobile app users located outside of Texas, Illinois, Washington, and the European Union.
      • Between September 2017 and August 2019, Everalbum combined millions of facial images that it extracted from Ever users’ photos with facial images that Everalbum obtained from publicly available datasets in order to create four new datasets to be used in the development of its face recognition technology. In each instance, Everalbum used computer scripts to identify and compile from Ever users’ photos images of faces that met certain criteria (i.e., not associated with a deactivated Ever account, not blurry, not too small, not a duplicate of another image, associated with a specified minimum number of images of the same tagged identity, and, in three of the four instances, not identified by Everalbum’s machines as being an image of someone under the age of thirteen).
      • The FTC summarized its settlement:
        • The proposed settlement requires Everalbum to delete:
          • the photos and videos of Ever app users who deactivated their accounts;
          • all face embeddings—data reflecting facial features that can be used for facial recognition purposes—the company derived from the photos of Ever users who did not give their express consent to their use; and
          • any facial recognition models or algorithms developed with Ever users’ photos or videos.
        • In addition, the proposed settlement prohibits Everalbum from misrepresenting how it collects, uses, discloses, maintains, or deletes personal information, including face embeddings created with the use of facial recognition technology, as well as the extent to which it protects the privacy and security of personal information it collects. Under the proposed settlement, if the company markets software to consumers for personal use, it must obtain a user’s express consent before using biometric information it collected from the user through that software to create face embeddings or develop facial recognition technology.
      • FTC Commissioner Rohit Chopra issued a statement, explaining his view on facial recognition technology and the settlement:
        • As outlined in the complaint, Everalbum made promises that users could choose not to have facial recognition technology applied to their images, and that users could delete the images and their account. In addition to those promises, Everalbum had clear evidence that many of the photo app’s users did not want to be roped into facial recognition. The company broke its promises, which constitutes illegal deception according to the FTC’s complaint. This matter and the FTC’s proposed resolution are noteworthy for several reasons.
        • First, the FTC’s proposed order requires Everalbum to forfeit the fruits of its deception. Specifically, the company must delete the facial recognition technologies enhanced by any improperly obtained photos. Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data. This is an important course correction.
        • Second, the settlement does not require the defendant to pay any penalty. This is unfortunate. To avoid this in the future, the FTC needs to take further steps to trigger penalties, damages, and other relief for facial recognition and data protection abuses. Commissioners have voted to enter into scores of settlements that address deceptive practices regarding the collection, use, and sharing of personal data. There does not appear to be any meaningful dispute that these practices are illegal. However, since Commissioners have not restated this precedent into a rule under Section 18 of the FTC Act, we are unable to seek penalties and other relief for even the most egregious offenses when we first discover them.
        • Finally, the Everalbum matter makes it clear why it is important to maintain states’ authority to protect personal data. Because the people of Illinois, Washington, and Texas passed laws related to facial recognition and biometric identifiers, Everalbum took greater care when it came to these individuals in these states. The company’s deception targeted Americans who live in states with no specific state law protections.
  • The Trump Administration issued the “National Maritime Cybersecurity Plan” that “sets forth how the United States government will defend the American economy through enhanced cybersecurity coordination, policies and practices, aimed at mitigating risks to the maritime sub-sector, promoting prosperity through information and intelligence sharing, and preserving and increasing the nation’s cyber workforce” according to the National Security Advisor Robert O’Brien. It will be up to the Biden Administration to implement, revise, or discard this strategy, but strategy documents such as this that contain anodyne recommendations tend to stay in place for the short-term, at least. It bears note that the uneven margins to the columns in the document suggest a rush to issue this document before the end of the Trump Administration. Nevertheless, O’Brien added:
    • President [Donald] Trump designated the cybersecurity of the Maritime Transportation System (MTS) as a top priority for national defense, homeland security, and economic competitiveness in the 2017 National Security Strategy. The MTS contributes to one quarter of all United States gross domestic product, or approximately $5.4 trillion. MTS operators are increasingly reliant on information technology (IT) and operational technology (OT) to maximize the reliability and efficiency of maritime commerce. This plan articulates how the United States government can buy down the potential catastrophic risks to our national security and economic prosperity created by technology innovations to strengthen maritime commerce efficiency and reliability.
    • The strategy lists a number of priority actions for the executive branch, including:
      • The United States will de-conflict government roles and responsibilities.
      • The United States will develop risk modeling to inform maritime cybersecurity standards and best practices.
      • The United States will strengthen cybersecurity requirements in port services contracts and leasing.
      • The United States will develop procedures to identify, prioritize, mitigate, and investigate cybersecurity risks in critical ship and port systems.
      • Exchange United States government information with the maritime industry.
      • Share cybersecurity intelligence with appropriate non-government entities.
      • Prioritize maritime cybersecurity intelligence collection.
  • The National Security Agency’s (NSA) Cybersecurity Directorate has issued its first annual review, the “2020 NSA Cybersecurity Year in Review,” encapsulating the first year of operation for the newly created part of the NSA.
    • Highlights include:
      • In 2020, NSA focused on modernizing encryption across the Department of Defense (DOD). It began with a push to eliminate cryptography that is at risk from attack due to adversarial computational advances. This applied to several systems commonly used by the Armed Services today to provide command and control, critical communications, and battlefield awareness. It also applied to operational practices concerning the handling of cryptographic keys and the implementation of modern suites of cryptography in network communications devices.
      • 2020 was notable for the number of Cybersecurity Advisories (CSAs) and other products NSA cybersecurity produced and released. These products are intended to alert network owners, specifically National Security System (NSS), Department of Defense (DOD), and Defense Industrial Base (DIB), of cyber threats and enable defenders to take immediate action to secure their systems.
      • 2020 was notable not just because it was the NSA Cybersecurity Directorate’s first year nor because of COVID-19, but also because it was an election year in the United States. Drawing on lessons learned from the 2016 presidential election and the 2018 mid-term elections, NSA was fully engaged in whole-of-government efforts to protect 2020 election from foreign interference and influence. Cybersecurity was a foundational component of NSA’s overall election defense effort.
      • This past year, NSA cybersecurity prioritized public-private collaboration, invested in cybersecurity research, and made a concerted effort to build trusted partnerships with the cybersecurity community.
      • The NSA touted the following achievements:
        • In November 2019, NSA began laying the groundwork to conduct a pilot with the Defense Cyber Crime Center and five DIB companies to monitor and block malicious network traffic based on continuous automated analysis of the domain names these companies’ networks were contacting. The pilot’s operational phase commenced in March 2020. Over six months, the Protective Domain Name Service (PDNS) examined more than 4 billion DNS queries to and from these companies. The PDNS provider identified callouts to 3,519 malicious domains and blocked upwards of 13 million connections to those domains. The pilot proved the value of DoD expanding the PDNS service to all DIB entities at scale.
        • How cyber secure is cyber “ready” for combat? In response to legislation that recognized the imperative of protecting key weapons and space systems from adversary cyber intrusions, NSA partnered closely with the DoD CIO, Joint Staff, Undersecretary of Defense for Acquisition & Sustainment, and the Military Services to structure, design, and execute a new cybersecurity program, focused on the most important weapons and space systems, known as the Strategic Cybersecurity Program (SCP), with the mindset of “stop assessing and start addressing.” The program initially identified 12 key weapons and space systems that must be evaluated for cybersecurity vulnerabilities that need to be mitigated. This is either due to the existence of intelligence indicating they are being targeted by cyber adversaries or because the systems are particularly important to warfighting. These systems cover all warfighting domains (land, sea, air, cyber, and space). Under the auspices of the SCP, NSA and military service partners will conduct cybersecurity evaluations, and, most importantly, maintain cyber risk scoreboards and mitigation plans accountability in reducing cyber risk to acceptable levels.
      • The NSA sees the following issues on the horizon:
        • In October 2020, NSA launched an expansive effort across the Executive Branch to understand how we can better inform, drive, and understand the activities of NSS owners to prevent, or respond to, critical cybersecurity events, and cultivate an operationally-aligned community resilient against the most advanced threats. These efforts across the community will come to fruition during the first quarter of 2021 and are expected to unify disparate elements across USG for stronger cybersecurity at scale.
        • NSA Cybersecurity is also focused on combating ransomware, a significant threat to NSS and critical infrastructure. Ransomware activity has become more destructive and impactful in nature and scope. Malicious actors target critical data and propagate ransomware across entire networks, alarmingly focusing recent attacks against U.S. hospitals. In 2020, NSA formed multiple working groups with U.S. Government agencies and other partners to identify ways to make ransomware operations more difficult for our adversaries, less scalable, and less lucrative. While the ransomware threat remains significant, NSA will continue to develop innovative ways to keep the activity at bay.
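The PDNS pilot described above is, at its core, a resolver that screens each DNS query against a deny-list of known-malicious domains before answering. The Python sketch below is purely illustrative: the domain names, function names, and counters are invented for this example and are not drawn from the NSA pilot.

```python
from typing import Optional

# Invented deny-list for illustration; a real PDNS provider would feed this
# from continuously updated threat intelligence.
MALICIOUS_DOMAINS = {"evil-c2.example", "phish-login.example"}

def resolve(query: str, stats: dict) -> Optional[str]:
    """Refuse to resolve deny-listed domains; otherwise pass the query through."""
    stats["queries"] = stats.get("queries", 0) + 1
    # Normalize: DNS names are case-insensitive and may carry a trailing dot.
    domain = query.lower().rstrip(".")
    if domain in MALICIOUS_DOMAINS:
        stats["blocked"] = stats.get("blocked", 0) + 1
        return None  # blocked: no answer (a real service might sinkhole instead)
    return f"resolved:{domain}"  # stand-in for forwarding to an upstream resolver
```

A production protective DNS service would return sinkhole addresses rather than simply refusing to answer, and would log blocked callouts for defenders, but the monitor-and-block pattern is the same.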
  • This week, Parler sued Amazon after the latter rescinded its web hosting services to the social media platform, which bills itself as the conservative, unbiased alternative to Twitter. Amazon responded with an extensive list of the inflammatory, inciting material upon which it based its decision.
    • In its 11 January complaint, Parler asked a federal court “for injunctive relief, including a temporary restraining order and preliminary injunctive relief, and damages” mainly because “AWS’s decision to effectively terminate Parler’s account is apparently motivated by political animus…[and] is also apparently designed to reduce competition in the microblogging services market to the benefit of Twitter” in violation of federal antitrust law.
    • In its 12 January response, Amazon disagreed:
      • This case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (“AWS”) content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens. There is no legal basis in AWS’s customer agreements or otherwise to compel AWS to host content of this nature. AWS notified Parler repeatedly that its content violated the parties’ agreement, requested removal, and reviewed Parler’s plan to address the problem, only to determine that Parler was both unwilling and unable to do so. AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.
    • Amazon offered a sampling of the content on Parler that caused AWS to pull the plug on the platform:
      • “Fry’em up. The whole fkn crew. #pelosi #aoc #thesquad #soros #gates #chuckschumer #hrc #obama #adamschiff #blm #antifa we are coming for you and you will know it.”
      • “#JackDorsey … you will die a bloody death alongside Mark Suckerturd [Zuckerberg]…. It has been decided and plans are being put in place. Remember the photographs inside your home while you slept? Yes, that close. You will die a sudden death!”
      • “We are going to fight in a civil War on Jan.20th, Form MILITIAS now and acquire targets.”
      • “On January 20th we need to start systematicly [sic] assassinating [sic] #liberal leaders, liberal activists, #blm leaders and supporters, members of the #nba #nfl #mlb #nhl #mainstreammedia anchors and correspondents and #antifa. I already have a news worthy event planned.”
      • “Shoot the police that protect these shitbag senators right in the head then make the senator grovel a bit before capping they ass.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, at which, per the agency’s tentative agenda, “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:”
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 15 January, the Senate Intelligence Committee will hold a hearing on the nomination of Avril Haines to be the Director of National Intelligence.
  • The Senate Homeland Security and Governmental Affairs Committee will hold a hearing on the nomination of Alejandro N. Mayorkas to be Secretary of Homeland Security on 19 January.
  • On 19 January, the Senate Armed Services Committee will hold a hearing on the nomination of former General Lloyd Austin III to be Secretary of Defense.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

Social Media Reckoning

The U.S. President was widely banned on social media platforms after his role in the attack on the U.S. Capitol during the certification of President-elect Joe Biden’s win.

Because of the role President Donald Trump played directly and indirectly in the events of 6 January 2021 at the United States (U.S.) Capitol, Twitter, Facebook, and other technology companies took steps to limit Trump’s usage of their platforms, some temporarily and some permanently. Other right wing and extremist figures were also banned in a flurry over a period of a few days. To be sure, these decisions were greeted with praise and criticism across the political spectrum in the U.S. in much the same way as decisions to moderate, comment upon, or block materials in the run up to the election were. These decisions will alternately be seen as further censoring of views from the right, as too little too late by many on the left, and as social media platforms seeking favor with the new government coming into power on 20 January, which will be able to get its nominees through the Senate and potentially pursue greater regulation.

As a legal matter, it seems to be settled law that 47 USC 230 gives the platforms complete legal protection for removing and moderating content and users. Moreover, recent Supreme Court jurisprudence makes clear that private actors are not bound by the First Amendment’s guarantee of free speech, which binds only government entities and actors. Whether this is the proper construction of the First Amendment is a different issue. A number of precedents from the mid-20th Century allowed for the exercise and protection of free speech on private property that was functionally considered public property (e.g., neighborhoods or shopping malls). Perhaps this is where U.S. policy and law will go, but, for now, it seems entirely legal for Twitter, Facebook, and others to ban users if they choose, as they are not government actors.

As mentioned, these actions are sure to inform any action Congress and the Biden Administration consider with respect to reform of 47 USC 230 (Section 230). There may be shared concern about the power that tech giants have in deciding which people can post material on widely used platforms. There is also likely to be disagreement about how Section 230 should be reformed, with conservatives consistently claiming without substantial evidence that the Twitters of the world unfairly discriminate against them and liberals decrying the widespread lack of action taken by platforms about the abuse women, minorities, and others suffer online. It remains to be seen whether and how Democrats and Republicans can bridge their differences. In any event, President-elect Joe Biden famously asserted in an interview that he believes Section 230 should be entirely repealed. Whether this is his White House’s policy position is not clear at this point. However, the decisions of these platforms this past week will definitely be part of any policy and political debate.

In response to the lies President Donald Trump told about the 2020 Presidential Election in a video ostensibly meant to calm the rioters who took over the U.S. Capitol, Twitter took the unprecedented step of blocking Trump from his account for 12 hours, and possibly longer, until he took down the untrue, inflammatory content. Twitter’s Safety account tweeted:

As a result of the unprecedented and ongoing violent situation in Washington, D.C., we have required the removal of three @realDonaldTrump Tweets that were posted earlier today for repeated and severe violations of our Civic Integrity policy. This means that the account of @realDonaldTrump will be locked for 12 hours following the removal of these Tweets. If the Tweets are not removed, the account will remain locked.

Early on 7 January, Trump removed the tweets that led to his suspension.

In a blog posting, Twitter announced a permanent ban of Trump’s account:

After close review of recent Tweets from the @realDonaldTrump account and the context around them — specifically how they are being received and interpreted on and off Twitter — we have permanently suspended the account due to the risk of further incitement of violence. 

Twitter cited these two tweets as violating their policies when read in the context of events on 6 January 2021:

  • “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!”
  • “To all of those who have asked, I will not be going to the Inauguration on January 20th.”

Twitter concluded: “our determination is that the two Tweets above are likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so.”

Twitter also moved to permanently ban Trump allies former Lieutenant General Michael Flynn and lawyer Sidney Powell. Additionally, Google’s YouTube banned former White House advisor Steve Bannon for violating its policy that affords a certain number of strikes within a 90-day period. Bannon had hosted Rudy Giuliani after the 6 January attack, and hours later YouTube acted. The platform explained: “In accordance with our strikes system, we have terminated Steve Bannon’s channel ‘War room’ and one associated channel for repeatedly violating our Community Guidelines.”

Snapchat also announced it had locked Trump’s account. In mid-2020, the platform had stopped promoting Trump’s snaps after he made statements on Snapchat opposing the protests against police brutality.

Facebook has also banned Trump. At first, on the day rioters stormed the Capitol, Facebook “removed from Facebook and Instagram the recent video of President Trump speaking about the protests and his subsequent post about the election results…[on the rationale that] these posts contribute to, rather than diminish, the risk of ongoing violence.” Later that day, Facebook explained “[w]e’ve assessed two policy violations against President Trump’s Page which will result in a 24-hour feature block, meaning he will lose the ability to post on the platform during that time.” On the morning of 7 January, CEO and Chairman of the Board Mark Zuckerberg extended the ban for the duration of Trump’s tenure as President:

We believe the risks of allowing President Trump to continue to use our service during this period are simply too great, so we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks.

Reddit also shut down its largest Donald Trump subreddit “r/donaldtrump” for promoting and inciting violence.

Thereafter, the app Parler, which fancies itself the conservative version of Twitter, was essentially shut down by Apple, Google, and Amazon for its role in the attack on the U.S. Capitol. Its estimated 8-10 million users in the U.S. skew right, and a significant number are white supremacists. Many news accounts of 6 January claim the insurgents used Parler and Gab. Over the weekend, Google first removed the app from its Play Store, and then Apple warned the app had 24 hours to address its terms of service violations. Thereafter, Apple followed Google’s lead and banned the app. Of course, just because an app is banned from the two major app stores does not mean it cannot be used; rather, new users cannot download it from Apple and Google, and, more crucially, currently installed Parler apps cannot be updated. However, the Parler operation has been shut down by another tech giant for an indefinite amount of time.

On 9 January, Amazon Web Services (AWS) emailed Parler, letting it know it had violated the terms of service under which the former was hosting the latter’s website. Consequently, Amazon informed Parler “[r]ecently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms.” Amazon added “[i]t’s clear that Parler does not have an effective process to comply with the AWS terms of service.” Parler may be able to find another server to host its operations, but this would take time. On 11 January, Parler sued Amazon, alleging AWS engaged in anti-competitive conduct in pulling its web-hosting services.

The reception in Congress has split along partisan lines. The chair of the House subcommittee with primary jurisdiction over Section 230 made clear Facebook and Twitter acted too late. Representative Jan Schakowsky (D-IL) claimed in her statement:

Today’s actions by Facebook and Twitter are too little, too late, in light of yesterday’s violence, as is almost always the case. Think–only after the Senate and White House flip, and Trump has two weeks left in office–does Facebook pretend to show the minimal amount of bravery.

Schakowsky chairs the Consumer Protection and Commerce Subcommittee of the House Energy and Commerce Committee.

The outgoing chair of the Senate Judiciary Committee Senator Lindsey Graham (R-SC) tweeted:

I’m more determined than ever to strip Section 230 protections from Big Tech (Twitter) that let them be immune from lawsuits. Big Tech are the only companies in America that virtually have absolute immunity from being sued for their actions, and it’s only because Congress gave them that protection.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Tibor Janosi Mozes from Pixabay

Further Reading, Other Development, and Coming Events (4 January 2021)

Further Reading

  • Microsoft Says Russian Hackers Viewed Some of Its Source Code” By Nicole Perlroth — The New York Times. The Sluzhba vneshney razvedki Rossiyskoy Federatsii’s (SVR) hack keeps growing and growing with Microsoft admitting its source code was viewed through an employee account. It may be that authorized Microsoft resellers were one of the vectors by which the SVR accessed SolarWinds, FireEye, and ultimately a number of United States (U.S.) government agencies. Expect more revelations to come about the scope and breadth of entities and systems the SVR compromised.
  • In 2020, we reached peak Internet. Here’s what worked — and what flopped.” By Geoffrey Fowler — The Washington Post. The newspaper’s tech columnist reviews the technology used during the pandemic and what is likely to stay with us when life returns to some semblance of normal.
  • Facebook Says It’s Standing Up Against Apple For Small Businesses. Some Of Its Employees Don’t Believe It.” By Craig Silverman and Ryan Mac — BuzzFeed News. Again, two of the best-sourced journalists when it comes to Facebook have exposed employee dissent within the social media and advertising giant, and this time over the company’s advertising blitz positioning it as the champion of small businesses that allegedly stand to be hurt when Apple rolls out iOS 14 that will allow users to block the type of tracking across apps and the internet Facebook thrives on. The company’s PR campaign stands in contrast to the anecdotal stories about errors that harmed and impeded small companies in using Facebook to advertise and sell products and services to customers.
  • SolarWinds hack spotlights a thorny legal problem: Who to blame for espionage?” By Tim Starks — cyberscoop. This piece previews possible and likely inevitable litigation to follow from the SolarWinds hack, including possible securities action on the basis of fishy dumps of stock by executives, breach of contract, and negligence for failing to patch and address vulnerabilities in a timely fashion. Federal and state regulators will probably get on the field, too. But this will probably take years to play out as Home Depot settled claims arising from its 2014 breach with state attorneys general in November 2020.
  • The Tech Policies the Trump Administration Leaves Behind” By Aaron Boyd — Nextgov. A look back at the good, the bad, and the ugly of the Trump Administration’s technology policies, some of which will live on in the Biden Administration.

Other Developments

  • In response to the SolarWinds hack, the Federal Bureau of Investigation (FBI), the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), and the Office of the Director of National Intelligence (ODNI) issued a joint statement indicating that the process established pursuant to Presidential Policy Directive (PPD) 41, an Obama Administration policy, has been activated and a Cyber Unified Coordination Group (UCG) has been formed “to coordinate a whole-of-government response to this significant cyber incident.” The agencies explained “[t]he UCG is intended to unify the individual efforts of these agencies as they focus on their separate responsibilities.”
    • In PPD-41 it is explained that a UCG “shall serve as the primary method for coordinating between and among Federal agencies in response to a significant cyber incident as well as for integrating private sector partners into incident response efforts, as appropriate.” Moreover, “[t]he Cyber UCG is intended to result in unity of effort and not to alter agency authorities or leadership, oversight, or command responsibilities.”
  • Following the completion of its “in-depth” investigation, the European Commission (EC) cleared Google’s acquisition of Fitbit with certain conditions, removing a significant hurdle for the American multinational in buying the wearable fitness tracker company. In its press release, the EC explained that after its investigation, “the Commission had concerns that the transaction, as initially notified, would have harmed competition in several markets.” To address and allay concerns, Google bound itself for ten years to a set of commitments that can be unilaterally extended by the EC and will be enforced, in part, by the appointment of a trustee to oversee compliance.
    • The EC was particularly concerned about:
      • Advertising: By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to that of Fitbit. By increasing the already vast amount of data that Google could use for the personalisation of ads, it would be more difficult for rivals to match Google’s services in the markets for online search advertising, online display advertising, and the entire “ad tech” ecosystem. The transaction would therefore raise barriers to entry and expansion for Google’s competitors for these services to the detriment of advertisers, who would ultimately face higher prices and have less choice.
      • Access to Web Application Programming Interface (‘API’) in the market for digital healthcare: A number of players in this market currently access health and fitness data provided by Fitbit through a Web API, in order to provide services to Fitbit users and obtain their data in return. The Commission found that following the transaction, Google might restrict competitors’ access to the Fitbit Web API. Such a strategy would come especially at the detriment of start-ups in the nascent European digital healthcare space.
      • Wrist-worn wearable devices: The Commission is concerned that following the transaction, Google could put competing manufacturers of wrist-worn wearable devices at a disadvantage by degrading their interoperability with Android smartphones.
    • As noted, Google made a number of commitments to address competition concerns:
      • Ads Commitment:
        • Google will not use for Google Ads the health and wellness data collected from wrist-worn wearable devices and other Fitbit devices of users in the EEA, including search advertising, display advertising, and advertising intermediation products. This refers also to data collected via sensors (including GPS) as well as manually inserted data.
        • Google will maintain a technical separation of the relevant Fitbit’s user data. The data will be stored in a “data silo” which will be separate from any other Google data that is used for advertising.
        • Google will ensure that European Economic Area (‘EEA’) users will have an effective choice to grant or deny the use of health and wellness data stored in their Google Account or Fitbit Account by other Google services (such as Google Search, Google Maps, Google Assistant, and YouTube).
      • Web API Access Commitment:
        • Google will maintain access to users’ health and fitness data to software applications through the Fitbit Web API, without charging for access and subject to user consent.
      • Android APIs Commitment:
        • Google will continue to license for free to Android original equipment manufacturers (OEMs) those public APIs covering all current core functionalities that wrist-worn devices need to interoperate with an Android smartphone. Such core functionalities include but are not limited to, connecting via Bluetooth to an Android smartphone, accessing the smartphone’s camera or its GPS. To ensure that this commitment is future-proof, any improvements of those functionalities and relevant updates are also covered.
        • It is not possible for Google to circumvent the Android API commitment by duplicating the core interoperability APIs outside the Android Open Source Project (AOSP). This is because, according to the commitments, Google has to keep the functionalities afforded by the core interoperability APIs, including any improvements related to the functionalities, in open-source code in the future. Any improvements to the functionalities of these core interoperability APIs (including if ever they were made available to Fitbit via a private API) also need to be developed in AOSP and offered in open-source code to Fitbit’s competitors.
        • To ensure that wearable device OEMs have also access to future functionalities, Google will grant these OEMs access to all Android APIs that it will make available to Android smartphone app developers including those APIs that are part of Google Mobile Services (GMS), a collection of proprietary Google apps that is not a part of the Android Open Source Project.
        • Google also will not circumvent the Android API commitment by degrading users’ experience with third party wrist-worn devices through the display of warnings, error messages or permission requests in a discriminatory way or by imposing on wrist-worn device OEMs discriminatory conditions on the access of their companion app to the Google Play Store.
  • The United States (U.S.) Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) has proposed a major rewrite of the regulations governing medical privacy in the U.S. As the U.S. lacks a unified privacy regime, the proposed changes would affect only those entities in the medical sector subject to HIPAA, which is admittedly many entities. Nevertheless, it is almost certain the Biden Administration will pause this rulemaking and quite possibly withdraw it should it prove crosswise with the new White House’s policy goals.
    • HHS issued a notice of proposed rulemaking “to modify the Standards for the Privacy of Individually Identifiable Health Information (Privacy Rule) under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act).”
      • HHS continued:
        • The Privacy Rule is one of several rules, collectively known as the HIPAA Rules, that protect the privacy and security of individuals’ medical records and other protected health information (PHI), i.e., individually identifiable health information maintained or transmitted by or on behalf of HIPAA covered entities (i.e., health care providers who conduct covered health care transactions electronically, health plans, and health care clearinghouses).
        • The proposals in this NPRM support the Department’s Regulatory Sprint to Coordinated Care (Regulatory Sprint), described in detail below. Specifically, the proposals in this NPRM would amend provisions of the Privacy Rule that could present barriers to coordinated care and case management –or impose other regulatory burdens without sufficiently compensating for, or offsetting, such burdens through privacy protections. These regulatory barriers may impede the transformation of the health care system from a system that pays for procedures and services to a system of value-based health care that pays for quality care.
    • In a press release, OCR asserted:
      • The proposed changes to the HIPAA Privacy Rule include strengthening individuals’ rights to access their own health information, including electronic information; improving information sharing for care coordination and case management for individuals; facilitating greater family and caregiver involvement in the care of individuals experiencing emergencies or health crises; enhancing flexibilities for disclosures in emergency or threatening circumstances, such as the Opioid and COVID-19 public health emergencies; and reducing administrative burdens on HIPAA covered health care providers and health plans, while continuing to protect individuals’ health information privacy interests.
  • The Federal Trade Commission (FTC) has used its power to compel selected regulated entities to provide information, asking that “nine social media and video streaming companies…provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.” The FTC is using its Section 6(b) authority to compel the information from Amazon.com, Inc., ByteDance Ltd., which operates the short video service TikTok, Discord Inc., Facebook, Inc., Reddit, Inc., Snap Inc., Twitter, Inc., WhatsApp Inc., and YouTube LLC. Failure to respond can result in the FTC fining a non-compliant entity.
    • The FTC claimed in its press release it “is seeking information specifically related to:
      • how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
      • how they determine which ads and other content are shown to consumers;
      • whether they apply algorithms or data analytics to personal information;
      • how they measure, promote, and research user engagement; and
      • how their practices affect children and teens.”
    • The FTC explained in its sample order:
      • The Commission is seeking information concerning the privacy policies, procedures, and practices of Social Media and Video Streaming Service providers, Including the method and manner in which they collect, use, store, and disclose Personal Information about consumers and their devices. The Special Report will assist the Commission in conducting a study of such policies, practices, and procedures.
  • The United States (U.S.) Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) supplemented its Emergency Directive 21-01 to federal civilian agencies in response to the Sluzhba vneshney razvedki Rossiyskoy Federatsii’s (SVR) hack via SolarWinds. In an 18 December update, CISA explained:
    • This section provides additional guidance on the implementation of CISA Emergency Directive (ED) 21-01, to include an update on affected versions, guidance for agencies using third-party service providers, and additional clarity on required actions.
    • In a 30 December update, CISA stated:
      • Specifically, all federal agencies operating versions of the SolarWinds Orion platform other than those identified as “affected versions” below are required to use at least SolarWinds Orion Platform version 2020.2.1HF2. The National Security Agency (NSA) has examined this version and verified that it eliminates the previously identified malicious code. Given the number and nature of disclosed and undisclosed vulnerabilities in SolarWinds Orion, all instances that remain connected to federal networks must be updated to 2020.2.1 HF2 by COB December 31, 2020. CISA will follow up with additional supplemental guidance, to include further clarifications and hardening requirements.
  • Australia’s Attorney-General’s Department published an unclassified version of the four volumes of the “Report of the Comprehensive Review of the Legal Framework of the National Intelligence Community,” an “examination of the legislative framework underpinning the National Intelligence Community (NIC)…the first and largest since the Hope Royal Commissions considered the Australian Intelligence Community (AIC) in the 1970s and 1980s.” Ultimately, the authors of the report concluded:
    • We do not consider the introduction of a common legislative framework, in the form of a single Act governing all or some NIC agencies, to be a practical, pragmatic or proportionate reform. It would be unlikely that the intended benefits of streamlining and simplifying NIC legislation could be achieved due to the diversity of NIC agency functions—from intelligence to law enforcement, regulatory and policy—and the need to maintain differences in powers, immunities and authorising frameworks. The Review estimates that reform of this scale would cost over $200 million and take up to 10 years to complete. This would be an impractical and disproportionate undertaking for no substantial gain. In our view, the significant costs and risks of moving to a single, consolidated Act clearly outweigh the limited potential benefits.
    • While not recommending a common legislative framework for the entire NIC, some areas of NIC legislation would benefit from simplification and modernisation. We recommend the repeal of the TIA Act, Surveillance Devices Act 2004 (SD Act) and parts of the Australian Security Intelligence Organisation Act 1979 (ASIO Act), and their replacement with a single new Act governing the use of electronic surveillance powers—telecommunications interception, covert access to stored communications, computers and telecommunications data, and the use of optical, listening and tracking devices—under Commonwealth law.
  • The National Institute of Standards and Technology (NIST) released additional materials to supplement a major rewrite of a foundational security guidance document. NIST explained “[n]ew supplemental materials for NIST Special Publication (SP) 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations, are available for download to support the December 10, 2020 errata release of SP 800-53 and SP 800-53B, Control Baselines for Information Systems and Organizations.” These supplemental materials include:
    • A comparison of the NIST SP 800-53 Revision 5 controls and control enhancements to Revision 4. The spreadsheet describes the changes to each control and control enhancement, provides a brief summary of the changes, and includes an assessment of the significance of the changes.  Note that this comparison was authored by The MITRE Corporation for the Director of National Intelligence (DNI) and is being shared with permission by DNI.
    • Mapping of the Appendix J Privacy Controls (Revision 4) to Revision 5. The spreadsheet supports organizations using the privacy controls in Appendix J of SP 800-53 Revision 4 that are transitioning to the integrated control catalog in Revision 5.
    • Mappings between NIST SP 800-53 and other frameworks and standards. The mappings provide organizations a general indication of SP 800-53 control coverage with respect to other frameworks and standards. When leveraging the mappings, it is important to consider the intended scope of each publication and how each publication is used; organizations should not assume equivalency based solely on the mapping tables because mappings are not always one-to-one and there is a degree of subjectivity in the mapping analysis.
  • Via a final rule, the Department of Defense (DOD) codified “the National Industrial Security Program Operating Manual (NISPOM) in regulation…[that] establishes requirements for the protection of classified information disclosed to or developed by contractors, licensees, grantees, or certificate holders (hereinafter referred to as contractors) to prevent unauthorized disclosure.” The DOD stated “[i]n addition to adding the NISPOM to the Code of Federal Regulations (CFR), this rule incorporates the requirements of Security Executive Agent Directive (SEAD) 3, “Reporting Requirements for Personnel with Access to Classified Information or Who Hold a Sensitive Position.” The DOD stated “SEAD 3 requires reporting by all contractor cleared personnel who have been granted eligibility for access to classified information.”
    • The DOD added “[t]his NISPOM rule provides for a single nation-wide implementation plan which will, with this rule, include SEAD 3 reporting by all contractor cleared personnel to report specific activities that may adversely impact their continued national security eligibility, such as reporting of foreign travel and foreign contacts.”
    • The DOD explained “NISP Cognizant Security Agencies (CSAs) shall conduct an analysis of such reported activities to determine whether they pose a potential threat to national security and take appropriate action.”
    • The DOD added that “the rule also implements the provisions of Section 842 of Public Law 115-232, which removes the requirement for a covered National Technology and Industrial Base (NTIB) entity operating under a special security agreement pursuant to the NISP to obtain a national interest determination as a condition for access to proscribed information.”
  • An advisory committee housed at the United States (U.S.) Department of Homeland Security (DHS) is calling for the White House to quickly “operationalize intelligence in a classified space with senior executives and cyber experts from most critical entities in the energy, financial services, and communications sectors working directly with intelligence analysts and other government staff.” In their report, the President’s National Infrastructure Advisory Council (NIAC) proposed the creation of a Critical Infrastructure Command Center (CICC) to “provid[e] real-time collaboration between government and industry…[and] take direct action and provide tactical solutions to mitigate, remediate, and deter threats.” NIAC urged the President to “direct relevant federal agencies to support the private sector in executing the concept, including identifying the required government staff…[and] work with Congress to ensure the appropriate authorities are established to allow the CICC to fully realize its operational functionality.” NIAC recommended “near-term actions to implement the CICC concept:
    • 1. The President should direct the relevant federal agencies to support the private sector in rapidly standing up the CICC concept with the energy, financial services, and communications sectors:
      • a. Within 90 days the private sector will identify the executives who will lead execution of the CICC concept and establish governing criteria (including membership, staffing and rotation, and other logistics).
      • b. Within 120 days the CICC sector executives will identify and assign the necessary CICC staff from the private sector.
      • c. Within 90 days an appropriate venue to house the operational component will be identified and the necessary agreements put in place.
    • 2. The President should direct the Intelligence Community and other relevant government agencies to identify and co-locate the required government staff counterparts to enable the direct coordination required by the CICC. This staff should be pulled from the IC, SSAs, and law enforcement.
    • 3. The President, working with Congress, should establish the appropriate authorities and mission for federal agencies to directly share intelligence with critical infrastructure companies, along with any other authorities required for the CICC concept to be fully successful (identified in Appendix A).
    • 4. Once the CICC concept is fully operational (within 180 days), the responsible executives should deliver a report to the NSC and the NIAC demonstrating how the distinct capabilities of the CICC have been achieved and the impact of the capabilities to date. The report should identify remaining gaps in resources, direction, or authorities.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.”
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by opsa from Pixabay

EC Finally Unveils Digital Services Act and Digital Markets Act

The EU releases its proposals to remake digital markets.

The European Commission (EC) has released its draft proposals to remake how the European Union (EU) regulates digital markets and digital services, the latest in the bloc’s attempts to rein in what it sees as harms and abuses to people and competition in Europe and the world. At the earliest, these proposals would take effect in 2022, and they are sure to be vigorously opposed by large United States (U.S.) multinationals like Google and Facebook and will also likely face more restrained pushback from the U.S. government.

The Digital Markets Act would allow the EU to designate certain core platform services as gatekeepers, either subject to certain quantitative metrics or on a case-by-case basis. Once a company is deemed a gatekeeper, it would be subject to much greater regulation by the EU, and violations of the new act could result in fines of up to 10% of worldwide revenue.

In its press release, the EC asserted:

European values are at the heart of both proposals. The new rules will better protect consumers and their fundamental rights online, and will lead to fairer and more open digital markets for everyone. A modern rulebook across the single market will foster innovation, growth and competitiveness and will provide users with new, better and reliable online services. It will also support the scaling up of smaller platforms, small and medium-sized enterprises, and start-ups, providing them with easy access to customers across the whole single market while lowering compliance costs. Furthermore, the new rules will prohibit unfair conditions imposed by online platforms that have become or are expected to become gatekeepers to the single market. The two proposals are at the core of the Commission’s ambition to make this Europe’s Digital Decade.

In the Digital Markets Act, the EC explained the problem with large platforms dominating certain digital markets. The EC discussed the harm to people and medium and small businesses as some large companies control certain markets and use their size and dominance to extract unfair prices for inferior services and products. The EC listed the core platform services that might be regulated:

  • online intermediation services (incl. for example marketplaces, app stores and online intermediation services in other sectors like mobility, transport or energy)
  • online search engines,
  • social networking
  • video sharing platform services,
  • number-independent interpersonal electronic communication services,
  • operating systems,
  • cloud services and
  • advertising services, including advertising networks, advertising exchanges and any other advertising intermediation services, where these advertising services are being related to one or more of the other core platform services mentioned above.

Clearly, a number of major American firms could easily be considered “core platform services” including Amazon, Apple, Google, Facebook, Instagram, YouTube, WhatsApp, Microsoft, and others. Whether they would be deemed gatekeepers would hinge on whether they meet the quantitative metrics the EU will put in place, and this will be a rebuttable presumption such that if a firm meets the standards, it may present evidence to the contrary and argue it is not a gatekeeper.

The EC detailed the quantitative metrics in Article 3. A company may qualify if it meets all three of the following criteria subject to further metrics:

A provider of core platform services shall be designated as gatekeeper if:

(a) it has a significant impact on the internal market;

(b) it operates a core platform service which serves as an important gateway for business users to reach end users; and

(c) it enjoys an entrenched and durable position in its operations or it is foreseeable that it will enjoy such a position in the near future.

The other metrics include an annual EEA turnover of at least €6.5 billion in each of the last three financial years, or a market capitalization of at least €65 billion in the last financial year, plus the provision of a core platform service in at least three member states, to show a “significant impact on the internal market.” For the second category listed above, a company would need to provide a core platform service to more than 45 million monthly active end users in the EU and more than 10,000 yearly active business users in the EU. And, for the last category, passing the 45 million end user and 10,000 business user thresholds in each of the last three financial years would suffice. The act reads:

A provider of core platform services shall be presumed to satisfy:

(a) the requirement in paragraph 1 point (a) where the undertaking to which it belongs achieves an annual EEA turnover equal to or above EUR 6.5 billion in the last three financial years, or where the average market capitalisation or the equivalent fair market value of the undertaking to which it belongs amounted to at least EUR 65 billion in the last financial year, and it provides a core platform service in at least three Member States;

(b) the requirement in paragraph 1 point (b) where it provides a core platform service that has more than 45 million monthly active end users established or located in the Union and more than 10,000 yearly active business users established in the Union in the last financial year; for the purpose of the first subparagraph, monthly active end users shall refer to the average number of monthly active end users throughout the largest part of the last financial year;

(c) the requirement in paragraph 1 point (c) where the thresholds in point (b) were met in each of the last three financial years.
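Read together, the Article 3 presumption amounts to a conjunction of three tests. The sketch below is purely illustrative (the function and parameter names are hypothetical, and in the proposal the presumption is rebuttable, so meeting the thresholds would not be conclusive), but it shows how the numeric criteria fit together:

```python
# Illustrative sketch of the Digital Markets Act Article 3(2) presumption.
# Names and structure are hypothetical; the actual determination would be
# made by the Commission and is a rebuttable presumption, not a mechanical check.

TURNOVER_EUR = 6.5e9        # annual EEA turnover threshold
MARKET_CAP_EUR = 65e9       # market capitalisation threshold
END_USERS = 45_000_000      # monthly active end users in the Union
BUSINESS_USERS = 10_000     # yearly active business users in the Union


def presumed_gatekeeper(turnover_last_3y, market_cap_last_fy,
                        member_states_served,
                        end_users_last_3y, business_users_last_3y):
    """Sequences cover the last three financial years, oldest to newest;
    scalars refer to the last financial year."""
    # (a) significant impact on the internal market
    significant_impact = (
        (all(t >= TURNOVER_EUR for t in turnover_last_3y)
         or market_cap_last_fy >= MARKET_CAP_EUR)
        and member_states_served >= 3
    )
    # (b) important gateway for business users to reach end users
    gateway = (end_users_last_3y[-1] > END_USERS
               and business_users_last_3y[-1] > BUSINESS_USERS)
    # (c) entrenched and durable position: the (b) thresholds were met
    # in each of the last three financial years
    entrenched = all(
        u > END_USERS and b > BUSINESS_USERS
        for u, b in zip(end_users_last_3y, business_users_last_3y)
    )
    return significant_impact and gateway and entrenched
```

Under this reading, a firm with €7 billion in annual EEA turnover, services in five member states, and 50 million monthly end users and 12,000 business users in each of the last three years would meet the presumption, while an otherwise identical firm active in only two member states would not.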

The EU would also be able to label a provider of core platform services a gatekeeper on a case-by-case basis:

Provision should also be made for the assessment of the gatekeeper role of providers of core platform services which do not satisfy all of the quantitative thresholds, in light of the overall objective requirements that they have a significant impact on the internal market, act as an important gateway for business users to reach end users and benefit from a durable and entrenched position in their operations or it is foreseeable that it will do so in the near future.

It bears note that a company would be found to be a gatekeeper if it is merely foreseeable that it will satisfy these criteria soon. This flexibility could allow the EU to track companies and flag them as gatekeepers before they, in fact, achieve the sort of market dominance this regulation is intended to stop.

Among the relevant excerpts from the “Reasons for and objectives of the proposal” section of the act are:

  • Large platforms have emerged benefitting from characteristics of the sector such as strong network effects, often embedded in their own platform ecosystems, and these platforms represent key structuring elements of today’s digital economy, intermediating the majority of transactions between end users and business users. Many of these undertakings are also comprehensively tracking and profiling end users. A few large platforms increasingly act as gateways or gatekeepers between business users and end users and enjoy an entrenched and durable position, often as a result of the creation of conglomerate ecosystems around their core platform services, which reinforces existing entry barriers.
  • As such, these gatekeepers have a major impact on, have substantial control over the access to, and are entrenched in digital markets, leading to significant dependencies of many business users on these gatekeepers, which leads, in certain cases, to unfair behaviour vis-à-vis these business users. It also leads to negative effects on the contestability of the core platform services concerned. Regulatory initiatives by Member States cannot fully address these effects; without action at EU level, they could lead to a fragmentation of the Internal Market.
  • Unfair practices and lack of contestability lead to inefficient outcomes in the digital sector in terms of higher prices, lower quality, as well as less choice and innovation to the detriment of European consumers. Addressing these problems is of utmost importance in view of the size of the digital economy (estimated at between 4.5% to 15.5% of global GDP in 2019 with a growing trend) and the important role of online platforms in digital markets with its societal and economic implications.
  • Weak contestability and unfair practices in the digital sector are more frequent and pronounced in certain digital services than others. This is the case in particular for widespread and commonly used digital services and infrastructures that mostly directly intermediate between business users and end users.
  • The enforcement experience under EU competition rules, numerous expert reports and studies and the results of the OPC show that there are a number of digital services that have the following features: (i) highly concentrated multi-sided platform services, where usually one or very few large digital platforms set the commercial conditions with considerable autonomy; (ii) a few large digital platforms act as gateways for business users to reach their customers and vice-versa; and (iii) gatekeeper power of these large digital platforms is often misused by means of unfair behaviour vis-à-vis economically dependent business users and customers.
  • The proposal is therefore further limited to a number of ‘core platform services’ where the identified problems are most evident and prominent and where the presence of a limited number of large online platforms that serve as gateways for business users and end users has led or is likely to lead to weak contestability of these services and of the markets in which these intervene. These core platform services include: (i) online intermediation services (incl. for example marketplaces, app stores and online intermediation services in other sectors like mobility, transport or energy) (ii) online search engines, (iii) social networking (iv) video sharing platform services, (v) number-independent interpersonal electronic communication services, (vi) operating systems, (vii) cloud services and (viii) advertising services, including advertising networks, advertising exchanges and any other advertising intermediation services, where these advertising services are being related to one or more of the other core platform services mentioned above.
  • The fact that a digital service qualifies as a core platform service does not mean that issues of contestability and unfair practices arise in relation to every provider of these core platform services. Rather, these concerns appear to be particularly strong when the core platform service is operated by a gatekeeper. Providers of core platform services can be deemed to be gatekeepers if they: (i) have a significant impact on the internal market, (ii) operate one or more important gateways to customers and (iii) enjoy or are expected to enjoy an entrenched and durable position in their operations.
  • Such gatekeeper status can be determined either with reference to clearly circumscribed and appropriate quantitative metrics, which can serve as rebuttable presumptions to determine the status of specific providers as a gatekeeper, or based on a case-by-case qualitative assessment by means of a market investigation.

The Digital Services Act would add new regulation on top of Directive 2000/31/EC (aka the e-Commerce Directive) by “[b]uilding on the key principles set out in the e-Commerce Directive, which remain valid today.” This new scheme “seeks to ensure the best conditions for the provision of innovative digital services in the internal market, to contribute to online safety and the protection of fundamental rights, and to set a robust and durable governance structure for the effective supervision of providers of intermediary services.”

The Digital Services Act is focused mostly on the information and misinformation present all over the online world and the harms they wreak on EU citizens. However, the EC is also seeking to balance fundamental EU rights in more tightly regulating online platforms. Like the Digital Markets Act, this regulation would focus on the largest online content, product, and services providers, which, as a practical matter, would likely be Facebook, Amazon, Google, Spotify, and a handful of other companies. Once a company has 10% or more of the EU’s population using its offerings, the requirements of the Digital Services Act would be triggered.

Additionally, the Digital Services Act unites two online issues not usually considered together in the United States (U.S.): harmful online content and harmful online products. Even though it seems logical to consider these online offerings in tandem, there is clear bifurcation in the U.S. in how these two issues are regulated to the extent they are at the federal and state levels.

The Digital Services Act “will introduce a series of new, harmonised EU-wide obligations for digital services, carefully graduated on the basis of those services’ size and impact, such as:

  • Rules for the removal of illegal goods, services or content online;
  • Safeguards for users whose content has been erroneously deleted by platforms;
  • New obligations for very large platforms to take risk-based action to prevent abuse of their systems;
  • Wide-ranging transparency measures, including on online advertising and on the algorithms used to recommend content to users;
  • New powers to scrutinize how platforms work, including by facilitating access by researchers to key platform data;
  • New rules on traceability of business users in online market places, to help track down sellers of illegal goods or services;
  • An innovative cooperation process among public authorities to ensure effective enforcement across the single market.”

The EC explained:

new and innovative business models and services, such as online social networks and marketplaces, have allowed business users and consumers to impart and access information and engage in transactions in novel ways. A majority of Union citizens now uses those services on a daily basis. However, the digital transformation and increased use of those services has also resulted in new risks and challenges, both for individual users and for society as a whole.

The EC spelled out what the Digital Services Act would do:

This Regulation lays down harmonised rules on the provision of intermediary services in the internal market. In particular, it establishes:

(a) a framework for the conditional exemption from liability of providers of intermediary services;

(b) rules on specific due diligence obligations tailored to certain specific categories of providers of intermediary services;

(c) rules on the implementation and enforcement of this Regulation, including as regards the cooperation of and coordination between the competent authorities.

The EC explained the purpose of the act:

  • this proposal seeks to ensure the best conditions for the provision of innovative digital services in the internal market, to contribute to online safety and the protection of fundamental rights, and to set a robust and durable governance structure for the effective supervision of providers of intermediary services.
  • The proposal defines clear responsibilities and accountability for providers of intermediary services, and in particular online platforms, such as social media and marketplaces. By setting out clear due-diligence obligations for certain intermediary services, including notice-and-action procedures for illegal content and the possibility to challenge the platforms’ content moderation decisions, the proposal seeks to improve users’ safety online across the entire Union and improve the protection of their fundamental rights. Furthermore, an obligation for certain online platforms to receive, store and partially verify and publish information on traders using their services will ensure a safer and more transparent online environment for consumers.
  • Recognising the particular impact of very large online platforms on our economy and society, the proposal sets a higher standard of transparency and accountability on how the providers of such platforms moderate content, on advertising and on algorithmic processes. It sets obligations to assess the risks their systems pose to develop appropriate risk management tools to protect the integrity of their services against the use of manipulative techniques.

The EC summarized how the act will work:

  • The operational threshold for service providers in scope of these obligations includes those online platforms with a significant reach in the Union, currently estimated to be amounting to more than 45 million recipients of the service. This threshold is proportionate to the risks brought by the reach of the platforms in the Union; where the Union’s population changes by a certain percentage, the Commission will adjust the number of recipients considered for the threshold, so that it consistently corresponds to 10 % of the Union’s population. Additionally, the Digital Services Act will set out a co-regulatory backstop, including building on existing voluntary initiatives.
  • This proposal should constitute the appropriate basis for the development of robust technologies to prevent the reappearance of illegal information, accompanied with the highest safeguards to avoid that lawful content is taken down erroneously; such tools could be developed on the basis of voluntary agreements between all parties concerned and should be encouraged by Member States; it is in the interest of all parties involved in the provision of intermediary services to adopt and implement such procedures; the provisions of this Regulation relating to liability should not preclude the development and effective operation, by the different interested parties, of technical systems of protection and identification and of automated recognition made possible by digital technology within the limits laid down by Regulation 2016/679.
  • Union citizens and others are exposed to ever-increasing risks and harms online – from the spread of illegal content and activities, to limitations to express themselves and other societal harms. The envisaged policy measures in this legislative proposal will substantially improve this situation by providing a modern, future-proof governance framework, effectively safeguarding the rights and legitimate interests of all parties involved, most of all Union citizens. The proposal introduces important safeguards to allow citizens to freely express themselves, while enhancing user agency in the online environment, as well as the exercise of other fundamental rights such as the right to an effective remedy, non-discrimination, rights of the child as well as the protection of personal data and privacy online.
  • The proposed Regulation will mitigate risks of erroneous or unjustified blocking speech, address the chilling effects on speech, stimulate the freedom to receive information and hold opinions, as well as reinforce users’ redress possibilities. Specific groups or persons may be vulnerable or disadvantaged in their use of online services because of their gender, race or ethnic origin, religion or belief, disability, age or sexual orientation. They can be disproportionately affected by restrictions and removal measures following from (unconscious or conscious) biases potentially embedded in the notification systems by users and third parties, as well as replicated in automated content moderation tools used by platforms. The proposal will mitigate discriminatory risks, particularly for those groups or persons and will contribute to the protection of the rights of the child and the right to human dignity online. The proposal will only require removal of illegal content and will impose mandatory safeguards when users’ information is removed, including the provision of explanatory information to the user, complaint mechanisms supported by the service providers as well as external out-of-court dispute resolution mechanism. Furthermore, it will ensure EU citizens are also protected when using services provided by providers not established in the Union but active on the internal market, since those providers are covered too.
  • With regard to service providers’ freedom to conduct a business, the costs incurred on businesses are offset by reducing fragmentation across the internal market. The proposal introduces safeguards to alleviate the burden on service providers, including measures against repeated unjustified notices and prior vetting of trusted flaggers by public authorities. Furthermore, certain obligations are targeted to very large online platforms, where the most serious risks often occur and which have the capacity to absorb the additional burden.
  • The proposed legislation will preserve the prohibition of general monitoring obligations of the e-Commerce Directive, which in itself is crucial to the required fair balance of fundamental rights in the online world. The new Regulation prohibits general monitoring obligations, as they could disproportionately limit users’ freedom of expression and freedom to receive information, and could burden service providers excessively and thus unduly interfere with their freedom to conduct a business. The prohibition also limits incentives for online surveillance and has positive implications for the protection of personal data and privacy.
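The 45 million figure above follows directly from the adjustment rule in the first bullet: the threshold tracks 10% of the Union’s population. A worked example of that arithmetic (the population figure and function name are illustrative, not drawn from the Regulation):

```python
def vlop_threshold(union_population: int) -> int:
    """Sketch of the proposal's adjustment rule: the 'very large
    online platform' threshold is 10% of the Union's population,
    recalculated by the Commission as that population changes."""
    return union_population // 10

# With an assumed Union population of roughly 450 million, the
# threshold works out to the 45 million recipients the proposal cites.
print(vlop_threshold(450_000_000))  # prints 45000000
```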

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Sambeet D from Pixabay

Further Reading, Other Developments, and Coming Events (15 December)

Further Reading

  • “DHS, State and NIH join list of federal agencies — now five — hacked in major Russian cyberespionage campaign” By Ellen Nakashima and Craig Timberg — The Washington Post; “Scope of Russian Hack Becomes Clear: Multiple U.S. Agencies Were Hit” By David E. Sanger, Nicole Perlroth and Eric Schmitt — The New York Times; The list of United States (U.S.) government agencies breached by Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, has grown. Now the Departments of Homeland Security, Defense, and State and the National Institutes of Health are reporting they have been breached. It is unclear if Fortune 500 companies in the U.S. and elsewhere and U.S. nuclear laboratories were also breached in this huge, sophisticated espionage exploit. It appears the Russians were selective and careful, and these hackers may have only accessed information held on U.S. government systems. And yet, the Trump Administration continues to issue equivocal statements neither denying nor acknowledging the hack, leaving the public to depend on quotes from anonymous officials. Perhaps admitting the Russians hacked U.S. government systems would throw light on Russian interference four years ago, and the President is loath to even contemplate that attack. In contrast, President Donald Trump has made all sorts of wild, untrue claims about vote totals being hacked despite no evidence supporting his assertions. It appears that the declaration of mission accomplished by some agencies of the Trump Administration over no Russian hacking of or interference with the 2020 election will be overshadowed by what may prove the most damaging hack of U.S. government systems ever.
  • “Revealed: China suspected of spying on Americans via Caribbean phone networks” By Stephanie Kirchgaessner — The Guardian. This story depends on one source, so take it for what it is worth, but allegedly the People’s Republic of China (PRC) is using vulnerabilities in mobile communications networks to hack into the phones of Americans traveling in the Caribbean. If so, the PRC may be exploiting the same Signaling System 7 (SS7) weaknesses an Israeli firm, Circles, is using to sell access to phones, at least according to a report published recently by the University of Toronto’s Citizen Lab.
  • “The Cartel Project | Revealed: The Israelis Making Millions Selling Cyberweapons to Latin America” By Amitai Ziv — Haaretz. Speaking of Israeli companies, the NSO Group, among others, is actively selling offensive cyber and surveillance capabilities to Central American nations, often through practices that may be corrupt.
  • “U.S. Schools Are Buying Phone-Hacking Tech That the FBI Uses to Investigate Terrorists” By Tom McKay and Dhruv Mehrotra — Gizmodo. The Israeli firm Cellebrite and its competitors are being used in school systems across the United States (U.S.) to access communications on students’ phones. U.S. Supreme Court caselaw gives schools very wide discretion for searches, and the Fourth Amendment is largely null and void on school grounds.
  • “‘It’s Hard to Prove’: Why Antitrust Suits Against Facebook Face Hurdles” By Mike Isaac and Cecilia Kang — The New York Times. The development of antitrust law over the last few decades may have laid an uphill path for the Federal Trade Commission (FTC) and state attorneys general in securing a breakup of Facebook, something that has not happened on a large scale since the historic splintering of AT&T in the early 1980s.
  • “Exclusive: Israeli Surveillance Companies Are Siphoning Masses Of Location Data From Smartphone Apps” By Thomas Brewster — Forbes. Turns out Israeli firms are using a feature (or what many would call a bug) in the online advertising system that allows those looking to buy ads to get close to real-time location data from application developers looking to sell advertising space. By putting out a shingle as a Demand Side Platform, it is possible to access reams of location data, and two Israeli companies are doing just that, offering the service of locating and tracking people via this quirk in online advertising. And it is not just companies in Israel: a company under scrutiny in the United States (U.S.) may have used these practices and then provided location data to federal agencies.
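The quirk Brewster describes rests on the real-time bidding protocol: any entity registered as a Demand Side Platform receives bid requests that, under the widely used OpenRTB specification, can carry a device’s advertising ID and precise coordinates, whether or not the DSP ever wins (or even places) a bid. A minimal sketch of the harvesting mechanism (the sample values and function are illustrative, not taken from the reporting):

```python
# Illustrative OpenRTB 2.x-style bid request: the device object can
# carry an advertising ID and GPS coordinates from the app's SDK.
sample_bid_request = {
    "id": "req-123",
    "app": {"bundle": "com.example.weather"},  # hypothetical app
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # type 1 = GPS
    },
}

def harvest_location(bid_request: dict):
    """Record the device's advertising ID and coordinates from a bid
    request instead of (or in addition to) bidding on the impression."""
    device = bid_request.get("device", {})
    geo = device.get("geo", {})
    if "lat" in geo and "lon" in geo:
        return device.get("ifa"), geo["lat"], geo["lon"]
    return None

print(harvest_location(sample_bid_request))
```

Because bid requests fan out to every registered DSP before an auction resolves, a firm can collect this stream passively at scale, which is the business model the article describes.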

Other Developments

  • The Government Accountability Office (GAO) evaluated the United States (U.S.) Department of Defense’s (DOD) electromagnetic spectrum (EMS) operations and found gaps in the DOD’s efforts to maintain EMS superiority over the Russian Federation and the People’s Republic of China (PRC). The GAO concluded:
    • Studies have shown that adversaries of the United States, such as China and Russia, are developing capabilities and strategies that could affect DOD superiority in the information environment, including the EMS. DOD has also reported that loss of EMS superiority could result in the department losing control of the battlefield, as its Electromagnetic Spectrum Operations (EMSO) supports many warfighting functions across all domains. DOD recognizes the importance of EMSO to military operations in actual conflicts and in operations short of open conflict that involve the broad information environment. However, gaps we identified in DOD’s ability to develop and implement EMS-related strategies have impeded progress in meeting DOD’s goals. By addressing gaps we found in five areas—(1) the processes and procedures to integrate EMSO throughout the department, (2) governance reforms to correct diffuse organization, (3) responsibility by an official with appropriate authority, (4) a strategy implementation plan, and (5) activities that monitor and assess the department’s progress in implementing the strategy—DOD can capitalize on progress that it has already made and better support ensuring EMS superiority.
    • The GAO recommended:
      • The Secretary of Defense should ensure that the Vice Chairman of the Joint Chiefs of Staff, as Senior Designated Official of the Electromagnetic Spectrum Operations Cross-Functional Team (CFT), identifies the procedures and processes necessary to provide for integrated defense-wide strategy, planning, and budgeting with respect to joint electromagnetic spectrum operations, as required by the FY19 NDAA. (Recommendation 1)
      • The Secretary of Defense should ensure that the Vice Chairman of the Joint Chiefs of Staff as Senior Designated Official of the CFT proposes EMS governance, management, organizational, and operational reforms to the Secretary. (Recommendation 2)
      • The Secretary of Defense should assign clear responsibility to a senior official with authority and resources necessary to compel action for the long-term implementation of the 2020 strategy in time to oversee the execution of the 2020 strategy implementation plan. (Recommendation 3)
      • The Secretary of Defense should ensure that the designated senior official for long-term strategy implementation issues an actionable implementation plan within 180 days following issuance of the 2020 strategy. (Recommendation 4)
      • The Secretary of Defense should ensure that the designated senior official for long-term strategy implementation creates oversight processes that would facilitate the department’s implementation of the 2020 strategy. (Recommendation 5)
  • A forerunner to Apple’s App Store has sued the company, claiming it has monopolized applications on its operating system to the detriment of other parties and done the same with respect to its payment system. The company behind Cydia argues that it conceived of and created the first application store for the iPhone, offering a range of programs Apple did not. Cydia claims that once Apple understood how lucrative an app store would be, it blocked Cydia and established its own store as the exclusive means through which programs can be installed and used on iOS. Furthermore, this has enabled Apple to levy a 30% fee on all in-application purchases, allegedly a $50 billion market annually. This is the second high-profile suit this year against Apple. Epic Games, maker of the popular game Fortnite, sued Apple earlier this year on many of the same grounds after it began letting players buy directly from it at a 30% discount and Apple responded by removing the game from the App Store, blocking players from downloading updated versions. That litigation has just begun. In its complaint, Cydia asserts:
    • Historically, distribution of apps for a specific operating system (“OS”) occurred in a separate and robustly competitive market. Apple, however, began coercing users to utilize no other iOS app distribution service but the App Store, coupling it closer and closer to the iPhone itself in order to crowd out all competition. But Apple did not come up with this idea initially—it only saw the economic promise that iOS app distribution represented after others, like [Cydia], demonstrated that value with their own iOS app distribution products/services. Faced with this realization, Apple then decided to take that separate market (as well as the additional iOS app payment processing market described herein) for itself.
    • Cydia became hugely popular by offering a marketplace to find and obtain third party iOS applications that greatly expanded the capabilities of the stock iPhone, including games, productivity applications, and audio/visual applications such as a video recorder (whereas the original iPhone only allowed still camera photos). Apple subsequently took many of these early third party applications’ innovations, incorporating them into the iPhone directly or through apps.
    • But far worse than simply copying others’ innovations, Apple also recognized that it could reap enormous profits if it cornered this fledgling market for iOS app distribution, because that would give Apple complete power over iOS apps, regardless of the developer. Apple therefore initiated a campaign to eliminate competition for iOS app distribution altogether. That campaign has been successful and continues to this day. Apple did (and continues to do) so by, inter alia, tying the App Store app to iPhone purchases by preinstalling it on all iOS devices and then requiring it as the default method to obtain iOS apps, regardless of user preference for other alternatives; technologically locking down the iPhone to prevent App Store competitors like Cydia from even operating on the device; and imposing contractual terms on users that coerce and prevent them from using App Store competitors. Apple has also mandated that iOS app developers use it as their sole option for app payment processing (such as in-app purchases), thus preventing other competitors, such as Cydia, from offering the same service to those developers.
    • Through these and other anticompetitive acts, Apple has wrongfully acquired and maintained monopoly power in the market (or aftermarket) for iOS app distribution, and in the market (or aftermarket) for iOS app payment processing. Apple has frozen Cydia and all other competitors out of both markets, depriving them of the ability to compete with the App Store and to offer developers and consumers better prices, better service, and more choice. This anticompetitive conduct has unsurprisingly generated massive profits and unprecedented market capitalization for Apple, as well as incredible market power.
  • California is asking to join the antitrust suit against Google filed by the United States Department of Justice (DOJ) and eleven state attorneys general. This antitrust action centers on Google’s practices of making Google the default search engine on Android devices and paying browsers and other technology entities to make Google the default search engine. However, a number of states that had initially joined the joint state investigation of Google have opted not to join this action and will instead continue to investigate, signaling a much broader case than the one filed in the United States District Court for the District of Columbia. In any event, if the suit does proceed (a change in Administration could result in a swift change of course), it may take years to resolve. Of course, given the legion of leaks from the DOJ and state attorneys general offices about the pressure U.S. Attorney General William Barr placed on staff and attorneys to bring a case before the election, there is criticism that rushing the case may result in a weaker, less comprehensive action that Google may ultimately fend off.
    • And, there is likely to be another lawsuit against Google filed by other state attorneys general. A number of attorneys general who had originally joined the effort led by Texas Attorney General Ken Paxton in investigating Google released a statement at the time the DOJ suit was filed, indicating their investigation would continue, presaging a different, possibly broader lawsuit that might also address Google’s role in other markets. The attorneys general of New York, Colorado, Iowa, Nebraska, North Carolina, Tennessee, and Utah did not join the case that was filed but may soon file a related but parallel case. They stated:
      • Over the last year, both the U.S. DOJ and state attorneys general have conducted separate but parallel investigations into Google’s anticompetitive market behavior. We appreciate the strong bipartisan cooperation among the states and the good working relationship with the DOJ on these serious issues. This is a historic time for both federal and state antitrust authorities, as we work to protect competition and innovation in our technology markets. We plan to conclude parts of our investigation of Google in the coming weeks. If we decide to file a complaint, we would file a motion to consolidate our case with the DOJ’s. We would then litigate the consolidated case cooperatively, much as we did in the Microsoft case.
  • France’s Commission nationale de l’informatique et des libertés (CNIL) levied multi-million Euro fines on Google and Amazon for placing cookies on users’ devices without consent. CNIL fined Google a total of €100 million and Amazon €35 million because its investigation of both entities determined “when a user visited [their] website, cookies were automatically placed on his or her computer, without any action required on his or her part…[and] [s]everal of these cookies were used for advertising purposes.”
    • CNIL explained the decision against Google:
      • [CNIL] noticed three breaches of Article 82 of the French Data Protection Act:
      • Deposit of cookies without obtaining the prior consent of the user
        • When a user visited the website google.fr, several cookies used for advertising purposes were automatically placed on his or her computer, without any action required on his or her part.
        • Since this type of cookies can only be placed after the user has expressed his or her consent, the restricted committee considered that the companies had not complied with the requirement provided for in Article 82 of the French Data Protection Act regarding the collection of prior consent before placing cookies that are not essential to the service.
      • Lack of information provided to the users of the search engine google.fr
        • When a user visited the page google.fr, an information banner displayed at the bottom of the page, with the following note “Privacy reminder from Google”, in front of which were two buttons: “Remind me later” and “Access now”.
        • This banner did not provide the user with any information regarding cookies that had however already been placed on his or her computer when arriving on the site. The information was also not provided when he or she clicked on the button “Access now”.
        • Therefore, the restricted committee considered that the information provided by the companies did not enable the users living in France either to be previously and clearly informed regarding the deposit of cookies on their computer or, therefore, to be informed of the purposes of these cookies and the available means enabling to refuse them.
      • Partial failure of the « opposition » mechanism
        • When a user deactivated the ad personalization on the Google search by using the available mechanism from the button “Access now”, one of the advertising cookies was still stored on his or her computer and kept reading information aimed at the server to which it is attached.
        • Therefore, the restricted committee considered that the “opposition” mechanism set up by the companies was partially defective, breaching Article 82 of the French Data Protection Act.
    • CNIL explained the case against Amazon:
      • [CNIL] noticed two breaches of Article 82 of the French Data Protection Act:
      • Deposit of cookies without obtaining the prior consent of the user
        • The restricted committee noted that when a user visited one of the pages of the website amazon.fr, a large number of cookies used for advertising purposes was automatically placed on his or her computer, before any action required on his or her part. Yet, the restricted committee recalled that this type of cookies, which are not essential to the service, can only be placed after the user has expressed his or her consent. It considered that the deposit of cookies at the same time as arriving on the site was a practice which, by its nature, was incompatible with a prior consent.
      • Lack of information provided to the users of the website amazon.fr
        • First, the restricted committee noted that, in the case of a user visiting the website amazon.fr, the information provided was neither clear, nor complete.
        • It considered that the information banner displayed by the company, which was “By using this website, you accept our use of cookies allowing to offer and improve our services. Read More.”, only contained a general and approximate information regarding the purposes of all the cookies placed. In particular, it considered that, by reading the banner, the user could not understand that cookies placed on his or her computer were mainly used to display personalized ads. It also noted that the banner did not explain to the user that it could refuse these cookies and how to do it.
        • Then, the restricted committee noticed that the company’s failure to comply with its obligation was even more obvious regarding the case of users that visited the website amazon.fr after they had clicked on an advertisement published on another website. It underlined that in this case, the same cookies were placed but no information was provided to the users about that.
  • Senator Amy Klobuchar (D-MN) wrote the Secretary of Health and Human Services (HHS) to express “serious concerns regarding recent reports on the data collection practices of Amazon’s health-tracking bracelet (Halo) and to request information on the actions [HHS] is taking to ensure users’ health data is secure.” Klobuchar stated:
    • The Halo is a fitness tracker that users wear on their wrists. The tracker’s smartphone application (app) provides users with a wide-ranging analysis of their health by tracking a range of biological metrics including heartbeat patterns, exercise habits, sleep patterns, and skin temperature. The fitness tracker also enters into uncharted territory by collecting body photos and voice recordings and transmitting this data for analysis. To calculate the user’s body fat percentage, the Halo requires users to take scans of their body using a smartphone app. These photos are then temporarily sent to Amazon’s servers for analysis while the app returns a three-dimensional image of the user’s body, allowing the user to adjust the image to see what they would look like with different percentages of body fat. The Halo also offers a tone analysis feature that examines the nuances of a user’s voice to indicate how the user sounds to others. To accomplish this task, the device has built-in microphones that listen and records a user’s voice by taking periodic samples of speech throughout the day if users opt-in to the feature.
    • Recent reports have raised concerns about the Halo’s access to this extensive personal and private health information. Among publicly available consumer health devices, the Halo appears to collect an unprecedented level of personal information. This raises questions about the extent to which the tracker’s transmission of biological data may reveal private information regarding the user’s health conditions and how this information can be used. Last year, a study by BMJ (formerly the British Medical Journal) found that 79 percent of health apps studied by researchers were found to share user data in a manner that failed to provide transparency about the data being shared. The study concluded that health app developers routinely share consumer data with third-parties and that little transparency exists around such data sharing.
    • Klobuchar asked the Secretary of Health and Human Services Alex Azar II to “respond to the following questions:
      • What actions is HHS taking to ensure that fitness trackers like Halo safeguard users’ private health information?
      • What authority does HHS have to ensure the security and privacy of consumer data collected and analyzed by health tracking devices like Amazon’s Halo?
      • Are additional regulations required to help strengthen privacy and security protections for consumers’ personal health data given the rise of health tracking devices? Why or why not?
      • Please describe in detail what additional authority or resources that the HHS could use to help ensure the security and protection of consumer health data obtained through health tracking devices like the Halo.

Coming Events

  • On 15 December, the Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “The Role of Private Agreements and Existing Technology in Curbing Online Piracy” with these witnesses:
    • Panel I
      • Ms. Ruth Vitale, Chief Executive Officer, CreativeFuture
      • Mr. Probir Mehta, Head of Global Intellectual Property and Trade Policy, Facebook, Inc.
      • Mr. Mitch Glazier, Chairman and CEO, Recording Industry Association of America
      • Mr. Joshua Lamel, Executive Director, Re:Create
    • Panel II
      • Ms. Katherine Oyama, Global Director of Business Public Policy, YouTube
      • Mr. Keith Kupferschmid, Chief Executive Officer, Copyright Alliance
      • Mr. Noah Becker, President and Co-Founder, AdRev
      • Mr. Dean S. Marks, Executive Director and Legal Counsel, Coalition for Online Accountability
  • The Senate Armed Services Committee’s Cybersecurity Subcommittee will hold a closed briefing on Department of Defense Cyber Operations on 15 December with these witnesses:
    • Mr. Thomas C. Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy, Office of the Under Secretary of Defense for Policy
    • Mr. Jeffrey R. Jones, Vice Director, Command, Control, Communications and Computers/Cyber, Joint Staff, J-6
    • Ms. Katherine E. Arrington, Chief Information Security Officer for the Assistant Secretary of Defense for Acquisition, Office of the Under Secretary of Defense for Acquisition and Sustainment
    • Rear Admiral Jeffrey Czerewko, United States Navy, Deputy Director, Global Operations, J39, J3, Joint Staff
  • The Senate Banking, Housing, and Urban Affairs Committee’s Economic Policy Subcommittee will conduct a hearing titled “US-China: Winning the Economic Competition, Part II” on 16 December with these witnesses:
    • The Honorable Will Hurd, Member, United States House of Representatives;
    • Derek Scissors, Resident Scholar, American Enterprise Institute;
    • Melanie M. Hart, Ph.D., Senior Fellow and Director for China Policy, Center for American Progress; and
    • Roy Houseman, Legislative Director, United Steelworkers (USW).
  • On 17 December the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force will convene for a virtual event, “Partnership in Action: Driving Supply Chain Security.”


Photo by Naya Shaw from Pexels

Task Force Calls For Enhanced Digital Regulation in UK

The UK may soon reform its competition and consumer laws vis-à-vis digital markets.

A United Kingdom (UK) entity has recommended that Prime Minister Boris Johnson and his Conservative government remake digital regulation in the UK, especially with respect to competition policy. A task force has returned an extensive set of recommendations that would require legislation, increased coordination, and a new focus for existing regulators. The timeline for such action is not clear, and Downing Street would have to agree before anything happens. However, the UK’s new regulatory scheme and the European Union’s ongoing efforts to revamp its regulatory approach to large technology firms will both likely affect United States (U.S.) multinationals such as Facebook and Google. It may also serve as a template for the U.S. to remake its regulation of digital competition.

The United Kingdom’s Competition & Markets Authority (CMA) led the effort, the Digital Markets Taskforce, in concert with the Office of Communications (Ofcom) and the Information Commissioner’s Office (ICO). The Task Force follows the 2019 “Unlocking digital competition, Report of the Digital Competition Expert Panel,” an effort led by former Obama Administration Council of Economic Advisers Chair Jason Furman, and the more recent July 2020 “Online platforms and digital advertising market study.” In 2019, the CMA issued its “Digital Markets Strategy” that “sets out five strategic aims, and seven priority focus areas.”

The Task Force acknowledged its efforts in the UK were not unique. It referenced similar inquiries and plans to reform the regulation of digital markets in the U.S., the EU, Germany, Japan, and Australia.

The Task Force summarized its findings:

The accumulation and strengthening of market power by a small number of digital firms has the potential to cause significant harm to consumers and businesses that rely on them, to innovative competitors and to the economy and society more widely:

  • A poor deal for consumers and businesses who rely on them. These firms can exploit their powerful positions. For consumers this can mean they get a worse deal than they would in a more competitive market, for example having less protection or control of their data. For businesses this can mean they are, for example, charged higher listing fees or higher prices for advertising online. These higher prices for businesses can then feed through into higher prices for consumers for a wide range of products and services across the economy.
  • Innovative competitors face an unfair disadvantage. A powerful digital firm can extend its strong position in one market into other markets, ultimately giving itself an unfair advantage over its rivals. This means innovative competitors, even if they have a good idea, are likely to find it much harder to compete and grow their businesses. This can result in long-term harmful effects on innovation and the dynamism of UK markets.
  • A less vibrant digital economy. If powerful digital firms act to unfairly disadvantage their innovative competitors, these innovative firms will find it harder to enter and expand in new markets, meaning the ‘unicorns’ of tomorrow that will support jobs and the future digital economy will not emerge.

The Task Force calls for the establishment of a new Digital Markets Unit (DMU) that would be particularly focused on policing potential harm before it occurs. Thus, the Task Force is calling for a regulator that is proactive and nimble enough to address risks to competition and consumers before any harm happens. The DMU would oversee a new “Strategic Market Status” (SMS) regime, and the Task Force is recommending that the government and Parliament revisit and refresh consumer and competition laws. The Task Force stated that the “government should put in place a regulatory framework for the most powerful digital firms, alongside strengthening existing competition and consumer laws…[and] [i]n considering the design of this regulatory framework we have sought to strike the right balance between the following key principles:

  • Evidence driven and effective – regulation must be effective, and that means ensuring it is evidence based, but also that it can react swiftly enough to prevent and address harms. The activities undertaken by the most powerful digital firms are diverse and a ‘one size fits all’ approach could have damaging results.
  • Proportionate and targeted – regulation must be proportionate and targeted at addressing a particular problem, minimising the risk of any possible unintended consequences.
  • Open, transparent and accountable – across all its work the DMU should operate in an open and transparent manner. In reaching decisions it should consult a wide range of parties. It should clearly articulate why it has reached decisions and be held accountable for them.
  • Proactive and forward-looking – the DMU should be focused on preventing harm from occurring, rather than enforcing ex post. It should seek to understand how digital markets might evolve, the risks this poses to competition and innovation, and act proactively to assess and manage those risks.
  • Coherent – the DMU should seek to promote coherence with other regulatory regimes both domestically and internationally, in particular by working through the Digital Regulation Cooperation Forum which is already working to deliver a step change in coordination and cooperation between regulators in digital markets.

The Task Force provided more detail on the new SMS scheme:

The entry point to the SMS regime is an assessment of whether a firm has ‘strategic market status’. This should be an evidence-based economic assessment as to whether a firm has substantial, entrenched market power in at least one digital activity, providing the firm with a strategic position (meaning the effects of its market power are likely to be particularly widespread and/or significant). It is focused on assessing the very factors which may give rise to harm, and which motivate the need for regulatory intervention.

Those firms that are designated with SMS should be subject to the following three pillars of the regime:

  • An enforceable code of conduct that sets out clearly how an SMS firm is expected to behave in relation to the activity motivating its SMS designation. The aim of the code is to manage the effects of market power, for example by preventing practices which exploit consumers and businesses or exclude innovative competitors.
  • Pro-competitive interventions like personal data mobility, interoperability and data access which can be used to address the factors which are the source of an SMS firm’s market power in a particular activity. These interventions seek to drive longer-term dynamic changes in these activities, opening up opportunities for greater competition and innovation.
  • SMS merger rules to ensure closer scrutiny of transactions involving SMS firms, given the particular risks and potential consumer harm arising from these transactions.

The SMS regime should be an ex ante regime, focused on proactively preventing harm. Fostering a compliance culture within SMS firms will be crucial to its overall success. However, a key part of fostering compliance is credible deterrence and the DMU will need to be able to take tough action where harm does occur, requiring firms to change their behaviour, and with the ability to impose substantial penalties. The ability to take tough action sits alongside enabling resolution through a participative approach, whereby the DMU seeks to engage constructively with all affected parties to achieve fast and effective results.

The Task Force sketched its ideal timeline during which Parliament would enact its recommendations, which would be next year at the earliest:

We believe the case for an ex ante regime in digital markets has been made. We therefore welcome the government’s response to the CMA’s online platforms and digital advertising market study, and its commitment to establishing a DMU from April 2021 within the CMA. We also welcome government’s commitment to consult on proposals for a new pro-competition regime in early 2021 and to legislate to put the DMU on a statutory footing when parliamentary time allows. We urge government to move quickly in taking this legislation forward. As government rightly acknowledges, similar action is being pursued across the globe and there is a clear opportunity for the UK to lead the way in championing a modern pro-competition, pro-innovation regime.

The Task Force summarized its recommendations to the government:

A Digital Markets Unit

Recommendation 1: The government should set up a DMU which should seek to further the interests of consumers and citizens in digital markets, by promoting competition and innovation.

  • Recommendation 1a: The DMU should be a centre of expertise and knowledge in relation to competition in digital markets.
  • Recommendation 1b: The DMU should be proactive, seeking to foster compliance with regulatory requirements and taking swift action to prevent harm from occurring.

A pro-competition regime for the most powerful digital firms

Recommendation 2: The government should establish a pro-competition framework, to be overseen by the DMU, to pursue measures in relation to SMS firms which further the interests of consumers and citizens, by promoting competition and innovation.

Recommendation 3: The government should provide the DMU with the power to designate a firm with SMS.

  • Recommendation 3a: SMS should require a finding that the firm has substantial, entrenched market power in at least one digital activity, providing the firm with a strategic position.
  • Recommendation 3b: The DMU should set out in formal guidance its prioritisation rules for designation assessments. These should include the firm’s revenue (globally and within the UK), the activity undertaken by the firm and a consideration of whether a sector regulator is better placed to address the issues of concern.
  • Recommendation 3c: The designation process should be open and transparent with a consultation on the provisional decision and the assessment completed within a statutory deadline.
  • Recommendation 3d: A firm’s SMS designation should be set for a fixed period before being reviewed.
  • Recommendation 3e: When a firm meets the SMS test, the associated remedies should apply only to a subset of the firm’s activities, whilst the status should apply to the firm as a whole.

Recommendation 4: The government should establish the SMS regime such that when the SMS test is met, the DMU can establish an enforceable code of conduct for the firm in relation to its designated activities to prevent it from taking advantage of its power and position.

  • Recommendation 4a: A code should comprise high-level objectives supported by principles and guidance.
  • Recommendation 4b: The objectives of the code should be set out in legislation, with the remainder of the content of each code to be determined by the DMU, tailored to the activity, conduct and harms it is intended to address.
  • Recommendation 4c: The DMU should ensure the code addresses the concerns about the effect of the power and position of SMS firms when dealing with publishers, as identified by the Cairncross Review.
  • Recommendation 4d: The code of conduct should always apply to the activity or activities which are the focus of the SMS designation.
  • Recommendation 4e: The DMU should consult on and establish a code as part of the designation assessment. The DMU should be able to vary the code outside the designation review cycle.

Recommendation 5: SMS firms should have a legal obligation to ensure their conduct is compliant with the requirements of the code at all times and put in place measures to foster compliance.

Recommendation 6: The government should establish the SMS regime such that the DMU can impose pro-competitive interventions on an SMS firm to drive dynamic change as well as to address harms related to the designated activities.

  • Recommendation 6a: With the exception of ownership separation, the DMU should not be limited in the types of remedies it is able to apply.
  • Recommendation 6b: The DMU should be able to implement PCIs anywhere within an SMS firm in order to address a concern related to its substantial entrenched market power and strategic position in a designated activity.
  • Recommendation 6c: In implementing a PCI the DMU should demonstrate that it is an effective and proportionate remedy to an adverse effect on competition or consumers. A PCI investigation should be completed within a fixed statutory deadline.
  • Recommendation 6d: PCIs should be implemented for a limited duration and should be regularly reviewed.

Recommendation 7: The government should establish the SMS regime such that the DMU can undertake monitoring in relation to the conduct of SMS firms and has a range of tools available to resolve concerns.

  • Recommendation 7a: Where appropriate, the DMU should seek to resolve concerns using a participative approach, engaging with parties to deliver fast and effective resolution.
  • Recommendation 7b: The DMU should be able to open formal investigations into breaches of the code and where a breach is found, require an SMS firm to change its behaviour. These investigations should be completed within a fixed statutory deadline.
  • Recommendation 7c: The DMU should be able to impose substantial penalties for breaches of the code and for breaches of code and PCI orders.
  • Recommendation 7d: The DMU should be able to take action quickly on an interim basis where it suspects the code has been breached.
  • Recommendation 7e: The DMU should be able to undertake scoping assessments where it is concerned there is an adverse effect on competition or consumers in relation to a designated activity. The outcome of such assessments could include a code breach investigation, a pro-competitive intervention investigation, or variation to a code principle or guidance.

Recommendation 8: The government should establish the SMS regime such that the DMU can draw information from a wide range of sources, including by using formal information gathering powers, to gather the evidence it needs to inform its work.

Recommendation 9: The government should ensure the DMU’s decisions are made in an open and transparent manner and that it is held accountable for them.

  • Recommendation 9a: The DMU’s decisions should allow for appropriate internal scrutiny.
  • Recommendation 9b: The DMU should consult on its decisions.
  • Recommendation 9c: The DMU’s decisions should be timely, with statutory deadlines used to set expectations and deliver speedy outcomes.
  • Recommendation 9d: The DMU’s decisions should be judicially reviewable on ordinary judicial review principles and the appeals process should deliver robust outcomes at pace.

Recommendation 10: The government should establish the SMS regime such that SMS firms are subject to additional merger control requirements.

Recommendation 11: The government should establish the SMS merger control regime such that SMS firms are required to report all transactions to the CMA. In addition, transactions that meet clear-cut thresholds should be subject to mandatory notification, with completion prohibited prior to clearance. Competition concerns should be assessed using the existing substantive test but a lower and more cautious standard of proof.

A modern competition and consumer regime for digital markets

Recommendation 12: The government should provide the DMU with a duty to monitor digital markets to enable it to build a detailed understanding of how digital businesses operate, and to provide the basis for swifter action to drive competition and innovation and prevent harm.

Recommendation 13: The government should strengthen competition and consumer protection laws and processes to ensure they are better adapted for the digital age.

  • Recommendation 13a: The government should pursue significant reforms to the markets regime to ensure it can be most effectively utilised to promote competition and innovation across digital markets, for example by pursuing measures like data mobility and interoperability.
  • Recommendation 13b: The government should strengthen powers to tackle unlawful or illegal activity or content on digital platforms which could result in economic detriment to consumers and businesses.
  • Recommendation 13c: The government should take action to strengthen powers to enable effective consumer choice in digital markets, including by addressing instances where choice architecture leads to consumer harm.
  • Recommendation 13d: The government should provide for stronger enforcement of the Platform to Business Regulation.

A coherent regulatory landscape

Recommendation 14: The government should ensure the DMU is able to work closely with other regulators with responsibility for digital markets, in particular Ofcom, the ICO and the FCA.

  • Recommendation 14a: The DMU should be able to share information with other regulators and seek reciprocal arrangements.
  • Recommendation 14b: The government should consider, in consultation with Ofcom and the FCA, empowering these agencies with joint powers with the DMU in relation to the SMS regime, with the DMU being the primary authority.

Recommendation 15: The government should enable the DMU to work closely with regulators in other jurisdictions to promote a coherent regulatory landscape.

  • Recommendation 15a: The DMU should be able to share information with regulators in other jurisdictions and should seek reciprocal arrangements.
  • Recommendation 15b: The DMU should explore establishing a network of international competition and consumer agencies to facilitate better monitoring and action in relation to the conduct of SMS firms.



Further Reading, Other Developments, and Coming Events (9 December)

Further Reading

  • Secret Amazon Reports Expose the Company’s Surveillance of Labor and Environmental Groups” By Lauren Kaori Gurley — Vice’s Motherboard. Yet another article by Vice drawing back the curtain on Amazon’s labor practices, especially its apparently fervent desire to stop unionization. This piece shines light on the company’s Global Security Operations Center, which tracks labor organizing and union activities among Amazon’s workers and monitors environmental and human rights groups on social media. The company has even hired Pinkerton operatives to surveil its warehouse employees. Although the focus is on Europe, because the leaked emails on which the story is based pertain to activities on that continent, there is no reason to think the same tactics are not being used elsewhere. Moreover, the company may be violating the much stricter laws in Europe protecting workers and union activities.
  • Cyber Command deployed personnel to Estonia to protect elections against Russian threat” By Shannon Vavra — cyberscoop.  It was recently revealed that personnel from the United States (U.S.) Cyber Command were deployed to Estonia to work with the latter country’s Defense Forces Cyber Command to fend off potential Russian attacks during the U.S. election. This follows another recent “hunt forward” mission for Cyber Command in Montenegro, another nation on the “frontline” of Russian hacking activities. Whether this has any effect beyond building trust and capacity between nations opposed to state-sponsored hacking and disinformation is unclear.
  • How China Is Buying Up the West’s High-Tech Sector” By Elizabeth Braw — Foreign Policy. This piece by a fellow at the right-wing American Enterprise Institute (AEI) makes the case that reviewing and potentially banning direct foreign investment by the People’s Republic of China (PRC) in the United States (U.S.), European Union (EU), and European nations is probably not cutting off PRC access to cutting edge technology. PRC entities are investing directly or indirectly as limited partners in venture capital firms and are probably still gaining access to new technology. For example, an entity associated with the University of Cambridge is working with Huawei on a private 5G wireless network even though London is advancing legislation and policy to ban the PRC giant from United Kingdom (UK) networks. The author advocates for expanding the regulation of foreign investment to include limited partnerships and other structures that are apparently allowing the PRC to continue investing in and reaping the benefit of Western venture capital. There is hope, however, as a number of Western nations are starting government-funded venture capital firms to fund promising technology.
  • Twitter expands hate speech rules to include race, ethnicity” By Katie Paul — Reuters. The social media platform announced that it is “further expanding our hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.” Color of Change, a human rights group that was part of a coalition pressuring Twitter and other platforms, called the change “essential concessions” but took issue with the timing, stating it would have had more impact had it been made before the election. A spokesperson added “[t]he jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users…[and] [v]oid of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings.”
  • White House drafts executive order that could restrict global cloud computing companies” By Steven Overly and Eric Geller — Politico. The Trump Administration may make another foray into trying to ban foreign companies from key United States (U.S.) critical infrastructure, and this time would reportedly bar U.S. cloud companies like Microsoft, Amazon, and others from partnering with foreign companies or entities that pose a risk to the U.S. through the use of these U.S. systems to conduct cyber-attacks. This seems like another attempt to strike at the People’s Republic of China’s (PRC) technology firms. If issued, it remains to be seen how a Biden Administration would use or implement such a directive given that there is not enough time for the Trump Administration to see such an order through to implementation. In any event, one can be sure that tech giants have already begun pressing both the outgoing and incoming Administrations against any such order, and most likely Congress as well.

Other Developments

  • A bipartisan group of Senators and Representatives issued the framework for a $908 billion COVID-19 stimulus package that is reportedly the subject of serious negotiations in Congress. The framework earmarks $10 billion for broadband but provides no detail on how these funds would be distributed.
  • The Australian Competition & Consumer Commission (ACCC) announced the signing of the Australian Product Safety Pledge, “a voluntary initiative that commits its signatories to a range of safety related responsibilities that go beyond what is legally required of them” in e-commerce. The ACCC stated “AliExpress, Amazon Australia, Catch.com.au and eBay Australia, who together account for a significant share of online sales in Australia, are the first businesses to sign the pledge, signifying their commitment to consumers’ safety through a range of commitments such as removing unsafe product listings within two days of being notified by the ACCC.” The pledge consists of 12 commitments:
    • Regularly consult the Product Safety Australia website and other relevant sources for information on recalled/unsafe products. Take appropriate action[1] on these products once they are identified.
    • Provide a dedicated contact point(s) for Australian regulatory authorities to notify and request take-downs of recalled/unsafe products.
    • Remove identified unsafe product listings within two business days of the dedicated contact point(s) receiving a take-down request from Australian regulatory authorities. Inform authorities on the action that has been taken and any relevant outcomes.
    • Cooperate with Australian regulatory authorities in identifying, as far as possible, the supply chain of unsafe products by responding to data/information requests within ten business days should relevant information not be publicly available.
    • Have an internal mechanism for processing data/information requests and take-downs of unsafe products.
    • Provide a clear pathway for consumers to notify the pledge signatory directly of unsafe product listings. Such notifications are treated according to the signatory’s processes and where responses to consumers are appropriate, they are given within five business days.
    • Implement measures to facilitate sellers’ compliance with Australian product safety laws. Share information with sellers on compliance training/guidance, including a link to the ACCC’s Selling online page on the Product Safety Australia website.
    • Cooperate with Australian regulatory authorities and sellers to inform consumers[2] about relevant recalls or corrective actions on unsafe products.
    • Set up processes aimed at preventing or restricting the sale of banned, non-compliant and recalled products as appropriate.
    • Put in place reasonable measures to act against repeat offenders selling unsafe products, including in cooperation with Australian regulatory authorities.
    • Take measures aimed at preventing the reappearance of unsafe product listings already removed.
    • Explore the potential use of new technologies and innovation to improve the detection and removal of unsafe products.
  • Senator Ron Wyden (D-OR) and Representative Lauren Underwood (D-IL) introduced “The Federal Cybersecurity Oversight Act” (S.4912) that would amend the “Federal Cybersecurity Enhancement Act of 2015” (P.L. 114-113) to restrict the use of exceptions to longstanding requirements that federal agencies use measures such as multi-factor authentication and encryption. Currently, federal agencies may exempt themselves on a number of grounds. Wyden and Underwood’s bill would tighten this process by making the exceptions valid for only a year at a time and requiring that the Office of Management and Budget (OMB) approve each exception. In a fact sheet, they claimed:
    • [T]he bill requires the Director of the Office of Management and Budget to approve all waivers, which can currently be self-issued by the head of the agency. To request a waiver, the agency head will have to certify that:
      • It would be excessively burdensome to implement the particular requirement;
      • The particular requirement is not necessary to secure the agency system and data; and
      • The agency has taken all necessary steps to secure the agency system and data.
  • The Government Accountability Office (GAO) looked at the United States (U.S.) government’s longstanding effort to buy common services and equipment in bulk, known as category management. The GAO found progress but saw room for considerably more improvement. GAO noted:
    • Since 2016, the Office of Management and Budget (OMB) has led efforts to improve how agencies buy these products and services through the category management initiative, which directs agencies across the government to buy more like a single enterprise. OMB has reported the federal government has saved $27.3 billion in 3 years through category management.
  • The GAO concluded:
    • The category management initiative has saved the federal government billions of dollars, and in some instances, enhanced agencies’ mission capabilities. However, the initiative has opportunities to accomplish much more. To date, OMB has focused primarily on contracting aspects of the initiative, and still has several opportunities to help agencies improve how they define their requirements for common products and services. OMB can take concrete steps to improve how agencies define these requirements through more robust guidance and training, changes to leadership delegations and cost savings reporting, and the development of additional metrics to measure implementation of the initiative.
    • Additionally, OMB can lead the development of a coordinated strategy that addresses government-wide data challenges hindering agencies’ efforts to assess their spending and identify prices paid for common products and services.
    • Finally, OMB can tailor additional training courses to provide more relevant information to agency personnel responsible for small business matters, and improve public reporting about the impact of category management on small businesses. In doing so, OMB can enhance the quality of the information provided to the small business community and policymakers. Through these efforts to further advance the category management initiative, OMB can help federal agencies accomplish their missions more effectively while also being better stewards of taxpayer dollars.
    • The GAO made the following recommendations:
      • The Director of the Office of Management and Budget should emphasize in its overarching category management guidance the importance of effectively defining requirements for common products and services when implementing the category management initiative. (Recommendation 1)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the General Services Administration’s Category Management Program Management Office, and other appropriate offices, to develop additional tailored training for Senior Accountable Officials and agency personnel who manage requirements for common products and services. (Recommendation 2)
      • The Director of the Office of Management and Budget should account for agencies’ training needs, including training needs for personnel who define requirements for common products and services, when setting category management training goals. (Recommendation 3)
      • The Director of the Office of Management and Budget should ensure that designated Senior Accountable Officials have the authority necessary to hold personnel accountable for defining requirements for common products and services as well as contracting activities. (Recommendation 4)
      • The Director of the Office of Management and Budget should report cost savings from the category management initiative by agency. (Recommendation 5)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the Performance Improvement Council to establish additional performance metrics for the category management initiative that are related to agency requirements. (Recommendation 6)
      • The Director of the Office of Management and Budget should, in coordination with the Category Management Leadership Council and the Chief Data Officer Council, establish a strategic plan to coordinate agencies’ responses to government-wide data challenges hindering implementation of the category management initiative, including challenges involving prices-paid and spending data. (Recommendation 7)
      • The Director of the Office of Management and Budget should work with the General Services Administration’s Category Management Program Management Office and other organizations, as appropriate, to develop additional tailored training for Office of Small Disadvantaged Business Utilization personnel that emphasizes information about small business opportunities under the category management initiative. (Recommendation 8)
      • The Director of the Office of Management and Budget should update its methodology for calculating potentially duplicative contract reductions to strengthen the linkage between category management actions and the number of contracts eliminated. (Recommendation 9)
      • The Director of the Office of Management and Budget should identify the time frames covered by underlying data when reporting on how duplicative contract reductions have impacted small businesses. (Recommendation 10)
  • The chair and ranking member of the House Commerce Committee are calling on the Federal Communications Commission (FCC) to take preparatory steps before Congress provides funding to telecommunications providers to remove and replace Huawei and ZTE equipment. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and Ranking Member Greg Walden (R-OR) noted the “Secure and Trusted Communications Networks Act” (P.L. 116-124):
    • provides the Federal Communications Commission (FCC) with several new authorities to secure our communications supply chain, including the establishment and administration of the Secure and Trusted Communications Networks Reimbursement Program (Program). Through this Program, small communications providers may seek reimbursement for the cost of removing and replacing suspect network equipment. This funding is critical because some small and rural communications providers would not otherwise be able to afford these upgrades. Among the responsibilities entrusted to the FCC to carry out the Program is the development of a list of suggested replacements for suspect equipment, including physical and virtual communications equipment, application and management software, and services.
    • Pallone and Walden conceded that Congress has not yet provided funds but asked the FCC to take some steps:
      • First, the FCC should develop and release the list of eligible replacement equipment, software, and services as soon as possible. Second, the agency should reassure companies that they will not jeopardize their eligibility for reimbursement under the Program just because replacement equipment purchases were made before the Program is funded, assuming other eligibility criteria are met.
  • The Office of Special Counsel (OSC) wrote one of the whistleblowers at the United States Agency for Global Media (USAGM) and indicated it has ordered the head of USAGM to investigate the claims of malfeasance at the agency. The OSC stated:
    • On December 2, 2020, after reviewing the information you submitted, we directed the Chief Executive Officer (CEO) of USAGM to order an investigation into the following allegations and report back to OSC pursuant to 5 U.S.C. § 1213(c). Allegations to be investigated include that, since June 2020, USAGM:
      • Repeatedly violated the Voice of America (VOA) firewall—the law that protects VOA journalists’ “professional independence and integrity”;
      • Engaged in gross mismanagement and abuse of authority by:
        • Terminating the Presidents of each USAGM-funded network—Radio Free Asia (RFA), Radio Free Europe/Radio Liberty (RFE/RL), the Middle East Broadcasting Networks (MBN), and the Office of Cuba Broadcasting (OCB)—as well as the President and the CEO of the Open Technology Fund (OTF);
        • Dismissing the bipartisan board members that governed the USAGM-funded networks, replacing those board members with largely political appointees, and designating the USAGM CEO as Chairman;
        • Revoking all authority from various members of USAGM’s Senior Executive Service (SES) and reassigning those authorities to political appointees outside of the relevant offices;
        • Removing the VOA Editor for News Standards and Best Practices—a central figure in the VOA editorial standards process and a critical component of the VOA firewall—from his position and leaving that position vacant;
        • Similarly removing the Executive Editor of RFA;
        • Suspending the security clearances of six of USAGM’s ten SES members and placing them on administrative leave; and
        • Prohibiting several offices critical to USAGM’s mission—including the Offices of General Counsel, Chief Strategy, and Congressional and Public Affairs—from communicating with outside parties without the front office’s express knowledge and consent;
      • Improperly froze all agency hiring, contracting, and Information Technology migrations, and either refused to approve such decisions or delayed approval until the outside reputation and/or continuity of agency or network operations, and at times safety of staff, were threatened;
      • Illegally repurposed, and pressured career staff to illegally repurpose, congressionally appropriated funds and programs without notifying Congress; and
      • Refused to authorize the renewal of the visas of non-U.S. citizen journalists working for the agency, endangering both the continuity of agency operations and those individuals’ safety.

Coming Events

  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up on 10 December.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Makalu from Pixabay

Further Reading, Other Developments, and Coming Events (8 December)

Further Reading

  • Facebook failed to put fact-check labels on 60% of the most viral posts containing Georgia election misinformation that its own fact-checkers had debunked, a new report says” By Tyler Sonnemaker — Business Insider. Despite its vows to improve its handling of false and misleading content, the platform is not consistently labeling or removing such material related to the runoffs for the Georgia Senate seats. The group behind this finding argues this is because Facebook does not want to. Left unsaid is that engagement drives revenue, so Facebook’s incentive is not to police all violations but rather to take down just enough to be able to say it is doing something.
  • Federal Labor Agency Says Google Wrongly Fired 2 Employees” By Kate Conger and Noam Scheiber — The New York Times. The National Labor Relations Board (NLRB) has reportedly sided with two employees Google fired for activities that are traditionally considered labor organizing. The two engineers had been dismissed for allegedly violating the company’s data security practices when they researched the company’s retention of a union-busting firm and sought to alert others about organizing. Even though Google is vowing to fight the action, which has not been finalized, it may well settle given the view of Big Tech in Washington these days. This action could also foretell how a Biden Administration NLRB may look at the labor practices of these companies.
  • U.S. states plan to sue Facebook next week: sources” By Diane Bartz — Reuters. We could see state and federal antitrust suits against Facebook this week. One investigation led by New York Attorney General Tish James could include 40 states, although the grounds for alleged violations have not been leaked at this point. It may be Facebook’s acquisition of potential rivals Instagram and WhatsApp that has allowed it to dominate the social messaging market. The Federal Trade Commission (FTC) may also file suit, and, again, the grounds are unknown. The European Commission (EC) is also investigating Facebook for possible violations of European Union (EU) antitrust law over the company’s use of the personal data it holds and its operation of its online marketplace.
  • The Children of Pornhub” By Nicholas Kristof — The New York Times. This column comprehensively traces the reprehensible recent history of Mindgeek, a Canadian conglomerate that owns Pornhub, where one can find reams of child and non-consensual pornography. Why Ottawa has not cracked down on this firm is a mystery. The passage and implementation of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164), which narrowed the liability shield under 47 USC 230, has forced the company to remove content, a significant change from its indifference before the statutory change in law. Kristof suggests some easy, common sense changes Mindgeek could implement to combat the presence of this illegal material, but it seems like the company will do just enough to say it is acting without seriously reforming its platform. Why would it? There is too much money to be made. Additionally, those fighting against this sort of material have been pressuring payment platforms to stop doing business with Mindgeek. PayPal has forsworn any interaction, and due to pressure Visa and Mastercard are “reviewing” their relationship with Mindgeek and Pornhub. In a statement to a different news outlet, Pornhub claimed it is “unequivocally committed to combating child sexual abuse material (CSAM), and has instituted a comprehensive, industry-leading trust and safety policy to identify and eradicate illegal material from our community.” The company further claimed “[a]ny assertion that we allow CSAM is irresponsible and flagrantly untrue….[w]e have zero tolerance for CSAM.”
  • Amazon and Apple Are Powering a Shift Away From Intel’s Chips” By Don Clark — The New York Times. Two tech giants have chosen new, faster, cheaper chips, signaling a possible industry shift away from Intel, the firm that has been a significant player for decades. Intel will not go quietly, of course, and a key variable is whether must-have software and applications are rewritten to accommodate the new chips based on designs from the British firm Arm.

Other Developments

  • The Government Accountability Office (GAO) and the National Academy of Medicine (NAM) have released a joint report on artificial intelligence in healthcare, consisting of GAO’s Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care and NAM’s Special Publication: Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. GAO’s report “discusses three topics: (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges to the development and adoption of these tools, and (3) policy options to maximize benefits and mitigate challenges to the use of AI tools to augment patient care.” NAM’s “paper aims to provide an analysis of: 1) current technologies and future applications of AI in HSOHC, 2) the logistical steps and challenges involved in integrating AI-HSOHC applications into existing provider workflows, and 3) the ethical and legal considerations of such AI tools, followed by a brief proposal of potential key initiatives to guide the development and adoption of AI in health settings outside the hospital and clinic (HSOHC).”
    • The GAO “identified five categories of clinical applications where AI tools have shown promise to augment patient care: predicting health trajectories, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management.” The GAO “also identified three categories of administrative applications where AI tools have shown promise to reduce provider burden and increase the efficiency of patient care: recording digital clinical notes, optimizing operational processes, and automating laborious tasks.” The GAO stated:
      • This technology assessment also identifies challenges that hinder the adoption and impact of AI tools to augment patient care, according to stakeholders, experts, and the literature. Difficulties accessing sufficient high-quality data may hamper innovation in this space. Further, some available data may be biased, which can reduce the effectiveness and accuracy of the tools for some people. Addressing bias can be difficult because the electronic health data do not currently represent the general population. It can also be challenging to scale tools up to multiple locations and integrate them into new settings because of differences in institutions and the patient populations they serve. The limited transparency of AI tools used in health care can make it difficult for providers, regulators, and others to determine whether an AI tool is safe and effective. A greater dispersion of data across providers and institutions can make securing patient data difficult. Finally, one expert described how existing case law does not specifically address AI tools, which can make providers and patients reticent to adopt them. Some of these challenges are similar to those identified previously by GAO in its first publication in this series, such as the lack of high-quality, structured data, and others are more specific to patient care, such as liability concerns.
    • The GAO “described six policy options:”
      • Collaboration. Policymakers could encourage interdisciplinary collaboration between developers and health care providers. This could result in AI tools that are easier to implement and use within an existing workflow.
      • Data Access. Policymakers could develop or expand high-quality data access mechanisms. This could help developers address bias concerns by ensuring data are representative, transparent, and equitable.
      • Best Practices. Policymakers could encourage relevant stakeholders and experts to establish best practices (such as standards) for development, implementation, and use of AI technologies. This could help with deployment and scalability of AI tools by providing guidance on data, interoperability, bias, and formatting issues.
      • Interdisciplinary Education. Policymakers could create opportunities for more workers to develop interdisciplinary skills. This could allow providers to use AI tools more effectively, and could be accomplished through a variety of methods, including changing medical curricula or grants.
      • Oversight Clarity. Policymakers could collaborate with relevant stakeholders to clarify appropriate oversight mechanisms. Predictable oversight could help ensure that AI tools remain safe and effective after deployment and throughout their lifecycle.
      • Status Quo. Policymakers could allow current efforts to proceed without intervention.
    • NAM claimed
      • Numerous AI-powered health applications designed for personal use have been shown to improve patient outcomes, building predictions based on large volumes of granular, real-time, and individualized behavioral and medical data. For instance, some forms of telehealth, a technology that has been critical during the COVID-19 pandemic, benefit considerably from AI software focused on natural language processing, which enables efficient triaging of patients based on urgency and type of illness. Beyond patient-provider communication, AI algorithms relevant to diabetic and cardiac care have demonstrated remarkable efficacy in helping patients manage their blood glucose levels in their day-to-day lives and in detecting cases of atrial fibrillation. AI tools that monitor and synthesize longitudinal patient behaviors are also particularly useful in psychiatric care, where the exact timing of interventions is often critical. For example, smartphone-embedded sensors that track location and proximity of individuals can alert clinicians of possible substance use, prompting immediate intervention. On the population health level, these individual indicators of activity and health can be combined with environmental- and system-level data to generate predictive insight into local and global health trends. The most salient example of this may be the earliest warnings of the COVID-19 outbreak, issued in December 2019 by two private AI technology firms.
      • Successful implementation and widespread adoption of AI applications in HSOHC requires careful consideration of several key issues related to personal data, algorithm development, and health care insurance and payment. Chief among them are data interoperability, standardization, privacy, ameliorating systemic biases in algorithms, reimbursement of AI-assisted services, quality improvement, and integration of AI tools into provider workflows. Overcoming these challenges and optimizing the impact of AI tools on clinical outcomes will involve engaging diverse stakeholders, deliberately designing AI tools and interfaces, rigorously evaluating clinical and economic utility, and diffusing and scaling algorithms across different health settings. In addition to these potential logistical and technical hurdles, it is imperative to consider the legal and ethical issues surrounding AI, particularly as it relates to the fair and humanistic deployment of AI applications in HSOHC. Important legal considerations include the appropriate designation of accountability and liability of medical errors resulting from AI-assisted decisions for ensuring the safety of patients and consumers. Key ethical challenges include upholding the privacy of patients and their data—particularly with regard to non-HIPAA covered entities involved in the development of algorithms—building unbiased AI algorithms based on high-quality data from representative populations, and ensuring equitable access to AI technologies across diverse communities.
  • The National Institute of Standards and Technology (NIST) published a “new study of face recognition technology created after the onset of the COVID-19 pandemic [that] shows that some software developers have made demonstrable progress at recognizing masked faces.” In Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face Recognition Accuracy with Face Masks Using Post-COVID-19 Algorithms (NISTIR 8331), NIST stated the “report augments its predecessor with results for more recent algorithms provided to NIST after mid-March 2020.” NIST said that “[w]hile we do not have information on whether or not a particular algorithm was designed with face coverings in mind, the results show evidence that a number of developers have adapted their algorithms to support face recognition on subjects potentially wearing face masks.” NIST stated that
    • The following results represent observations on algorithms provided to NIST both before and after the COVID-19 pandemic to date. We do not have information on whether or not a particular algorithm was designed with face coverings in mind. The results documented capture a snapshot of algorithms submitted to the FRVT 1:1 in face recognition on subjects potentially wearing face masks.
      • False rejection performance: All algorithms submitted after the pandemic continue to give increased false non-match rates (FNMR) when the probes are masked. While a few pre-pandemic algorithms still remain within the most accurate on masked photos, some developers have submitted algorithms after the pandemic showing significantly improved accuracy and are now among the most accurate in our test.
      • Evolution of algorithms on face masks: We observe that a number of algorithms submitted since mid-March 2020 show notable reductions in error rates with face masks over their pre-pandemic predecessors. When comparing error rates for unmasked versus masked faces, the median FNMR across algorithms submitted since mid-March 2020 has been reduced by around 25% from the median pre-pandemic results. The figure below presents examples of developer evolution on both masked and unmasked datasets. For some developers, false rejection rates in their algorithms submitted since mid-March 2020 decreased by as much as a factor of 10 over their pre-pandemic algorithms, which is evidence that some providers are adapting their algorithms to handle facemasks. However, in the best cases, when comparing results for unmasked images to masked images, false rejection rates have increased from 0.3%-0.5% (unmasked) to 2.4%-5% (masked).
      • False acceptance performance: As most systems are configured with a fixed threshold, it is necessary to report both false negative and false positive rates for each group at that threshold. When comparing a masked probe to an unmasked enrollment photo, in most cases, false match rates (FMR) are reduced by masks. The effect is generally modest with reductions in FMR usually being smaller than a factor of two. This property is valuable in that masked probes do not impart adverse false match security consequences for verification.
      • Mask-agnostic face recognition: All 1:1 verification algorithms submitted to the FRVT test since the start of the pandemic are evaluated on both masked and unmasked datasets. The test is designed this way to mimic operational reality: some images will have masks, some will not (especially enrollment samples from a database or ID card). And to the extent that the use of protective masks will exist for some time, our test will continue to evaluate algorithmic capability on verifying all combinations of masked and unmasked faces.
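    • To make the NIST metrics concrete, here is a minimal sketch of how FNMR and FMR are computed for a 1:1 verification algorithm at a fixed operating threshold. The similarity scores and threshold below are invented for illustration; they are not FRVT data, and real systems compute these rates over millions of comparisons.

```python
# Illustrative sketch (not NIST's code) of the two error rates in the
# FRVT report. All scores and the threshold are hypothetical values.

def false_non_match_rate(genuine_scores, threshold):
    """FNMR: fraction of genuine (same-person) comparisons rejected."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def false_match_rate(impostor_scores, threshold):
    """FMR: fraction of impostor (different-person) comparisons accepted."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Hypothetical similarity scores; masks tend to depress genuine scores.
genuine_unmasked = [0.91, 0.88, 0.95, 0.72, 0.90]
genuine_masked   = [0.81, 0.58, 0.86, 0.45, 0.83]
impostors        = [0.10, 0.35, 0.22, 0.61, 0.18]
THRESHOLD = 0.60  # fixed threshold, as in deployed verification systems

print(false_non_match_rate(genuine_unmasked, THRESHOLD))  # 0.0
print(false_non_match_rate(genuine_masked, THRESHOLD))    # 0.4 -- FNMR rises with masks
print(false_match_rate(impostors, THRESHOLD))             # 0.2
```

This is why NIST reports both rates at the same fixed threshold: lowering the threshold to recover masked genuine matches would raise the false match rate, so the two must be read together.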
  • The government in London has issued a progress report on its current cybersecurity strategy that has another year to run. The Paymaster General assessed how well the United Kingdom (UK) has implemented the National Cyber Security Strategy 2016 to 2021 and pointed to goals yet to be achieved. This assessment comes in the shadow of the pending exit of the UK from the European Union (EU) and Prime Minister Boris Johnson’s plans to increase the UK’s role in select defense issues, including cyber operations. The Paymaster General stated:
    • The global landscape has changed significantly since the publication of the National Cyber Security Strategy Progress Report in May 2019. We have seen unprecedented levels of disruption to our way of life that few would have predicted. The COVID-19 pandemic has increased our reliance on digital technologies – for our personal communications with friends and family and our ability to work remotely, as well as for businesses and government to continue to operate effectively, including in support of the national response.
    • These new ways of living and working highlight the importance of cyber security, which is also underlined by wider trends. An ever greater reliance on digital networks and systems, more rapid advances in new technologies, a wider range of threats, and increasing international competition on underlying technologies and standards in cyberspace, emphasise the need for good cyber security practices for individuals, businesses and government.
    • Although the scale and international nature of these changes present challenges, there are also opportunities. With the UK’s departure from the European Union in January 2020, we can define and strengthen Britain’s place in the world as a global leader in cyber security, as an independent, sovereign nation.
    • The sustained, strategic investment and whole of society approach delivered so far through the National Cyber Security Strategy has ensured we are well placed to respond to this changing environment and seize new opportunities.
    • The Paymaster General asserted:
      • [The] report has highlighted growing risks, some accelerated by the COVID-19 pandemic, and longer-term trends that will shape the environment over the next decade:
      • Ever greater reliance on digital networks and systems as daily life moves online, bringing huge benefits but also creating new systemic and individual risks.
      • Rapid technological change and greater global competition, challenging our ability to shape the technologies that will underpin our future security and prosperity.
      • A wider range of adversaries as criminals gain easier access to commoditised attack capabilities and cyber techniques form a growing part of states’ toolsets.
      • Competing visions for the future of the internet and the risk of fragmentation, making consensus on norms and ethics in cyberspace harder to achieve.
      • In February 2020 the Prime Minister announced the Integrated Review of Security, Defence, Development and Foreign Policy. This will define the government’s ambition for the UK’s role in the world and the long-term strategic aims of our national security and foreign policy. It will set out the way in which the UK will be a problem-solving and burden-sharing nation, and a strong direction for recovery from COVID-19, at home and overseas.
      • This will help to shape our national approach and priorities on cyber security beyond 2021. Cyber security is a key element of our international, defence and security posture, as well as a driving force for our economic prosperity.
  • The University of Toronto’s Citizen Lab published a report on an Israeli surveillance firm that uses “[o]ne of the widest-used—but least appreciated” means of surveilling people (i.e., “leveraging of weaknesses in the global mobile telecommunications infrastructure to monitor and intercept phone calls and traffic.”) Citizen Lab explained that an affiliate of the NSO Group, “Circles is known for selling systems to exploit Signaling System 7 (SS7) vulnerabilities, and claims to sell this technology exclusively to nation-states.” Citizen Lab noted that “[u]nlike NSO Group’s Pegasus spyware, the SS7 mechanism by which Circles’ product reportedly operates does not have an obvious signature on a target’s phone, such as the telltale targeting SMS bearing a malicious link that is sometimes present on a phone targeted with Pegasus.” Citizen Lab found that
    • Circles is a surveillance firm that reportedly exploits weaknesses in the global mobile phone system to snoop on calls, texts, and the location of phones around the globe. Circles is affiliated with NSO Group, which develops the oft-abused Pegasus spyware.
    • Circles, whose products work without hacking the phone itself, says they sell only to nation-states. According to leaked documents, Circles customers can purchase a system that they connect to their local telecommunications companies’ infrastructure, or can use a separate system called the “Circles Cloud,” which interconnects with telecommunications companies around the world.
    • According to the U.S. Department of Homeland Security, all U.S. wireless networks are vulnerable to the types of weaknesses reportedly exploited by Circles. A majority of networks around the globe are similarly vulnerable.
    • Using Internet scanning, we found a unique signature associated with the hostnames of Check Point firewalls used in Circles deployments. This scanning enabled us to identify Circles deployments in at least 25 countries.
    • We determine that the governments of the following countries are likely Circles customers: Australia, Belgium, Botswana, Chile, Denmark, Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala, Honduras, Indonesia, Israel, Kenya, Malaysia, Mexico, Morocco, Nigeria, Peru, Serbia, Thailand, the United Arab Emirates (UAE), Vietnam, Zambia, and Zimbabwe.
    • Some of the specific government branches we identify with varying degrees of confidence as being Circles customers have a history of leveraging digital technology for human rights abuses. In a few specific cases, we were able to attribute the deployment to a particular customer, such as the Internal Security Operations Command (ISOC) of the Royal Thai Army, which has allegedly tortured detainees.
  • Senators Ron Wyden (D-OR), Elizabeth Warren (D-MA), Edward J. Markey (D-MA), and Brian Schatz (D-HI) “announced that the Department of Homeland Security (DHS) will launch an inspector general investigation into Customs and Border Protection’s (CBP) warrantless tracking of phones in the United States following an inquiry from the senators earlier this year” per their press release.
    • The Senators added:
      • As revealed by public contracts, CBP has paid a government contractor named Venntel nearly half a million dollars for access to a commercial database containing location data mined from applications on millions of Americans’ mobile phones. CBP officials also confirmed the agency’s warrantless tracking of phones in the United States using Venntel’s product in a September 16, 2020 call with Senate staff.
      • In 2018, the Supreme Court held in Carpenter v. United States that the collection of significant quantities of historical location data from Americans’ cell phones is a search under the Fourth Amendment and therefore requires a warrant.
      • In September 2020, Wyden and Warren successfully pressed for an inspector general investigation into the Internal Revenue Service’s use of Venntel’s commercial location tracking service without a court order.
    • In a letter, the DHS Office of the Inspector General (OIG) explained:
      • We have reviewed your request and plan to initiate an audit that we believe will address your concerns. The objective of our audit is to determine if the Department of Homeland Security (DHS) and it [sic] components have developed, updated, and adhered to policies related to cell-phone surveillance devices. In addition, you may be interested in our audit to review DHS’ use and protection of open source intelligence. Open source intelligence, while different from cell phone surveillance, includes the Department’s use of information provided by the public via cellular devices, such as social media status updates, geo-tagged photos, and specific location check-ins.
    • In an October letter, these Senators plus Senator Sherrod Brown (D-OH) argued:
      • CBP is not above the law and it should not be able to buy its way around the Fourth Amendment. Accordingly, we urge you to investigate CBP’s warrantless use of commercial databases containing Americans’ information, including but not limited to Venntel’s location database. We urge you to examine what legal analysis, if any, CBP’s lawyers performed before the agency started to use this surveillance tool. We also request that you determine how CBP was able to begin operational use of Venntel’s location database without the Department of Homeland Security Privacy Office first publishing a Privacy Impact Assessment.
  • The American Civil Liberties Union (ACLU) has filed a lawsuit in a federal court in New York City, seeking an order to compel the United States (U.S.) Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE) “to release records about their purchases of cell phone location data for immigration enforcement and other purposes.” The ACLU made these information requests after numerous media accounts revealed that these and other U.S. agencies were buying location data and other sensitive information in ways intended to evade the Fourth Amendment’s bar against unreasonable searches.
    • In its press release, the ACLU asserted:
      • In February, The Wall Street Journal reported that this sensitive location data isn’t just for sale to commercial entities, but is also being purchased by U.S. government agencies, including by U.S. Immigrations and Customs Enforcement to locate and arrest immigrants. The Journal identified one company, Venntel, that was selling access to a massive database to the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and ICE. Subsequent reporting has identified other companies selling access to similar databases to DHS and other agencies, including the U.S. military.
      • These practices raise serious concerns that federal immigration authorities are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. There’s even more reason for alarm when those agencies evade requests for information — including from U.S. senators — about such practices. That’s why today we asked a federal court to intervene and order DHS, CBP, and ICE to release information about their purchase and use of precise cell phone location information. Transparency is the first step to accountability.
    • The ACLU explained in the suit:
      • Multiple news sources have confirmed these agencies’ purchase of access to databases containing precise location information for millions of people—information gathered by applications (apps) running on their smartphones. The agencies’ purchases raise serious concerns that they are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. Yet, more than nine months after the ACLU submitted its FOIA request (“the Request”), these agencies have produced no responsive records. The information sought is of immense public significance, not only to shine a light on the government’s use of powerful location-tracking data in the immigration context, but also to assess whether the government’s purchase of this sensitive data complies with constitutional and legal limitations and is subject to appropriate oversight and control.
  • Facebook’s new Oversight Board announced “the first cases it will be deliberating and the opening of the public comment process” and “the appointment of five new trustees.” The cases were almost all referred by Facebook users, and the new board is asking for comments on how to handle potentially objectionable content. The Oversight Board explained it is “prioritizing cases that have the potential to affect lots of users around the world, are of critical importance to public discourse or raise important questions about Facebook’s policies.”
    • The new trustees are:
      • Kristina Arriaga is a globally recognized advocate for freedom of expression, with a focus on freedom of religion and belief. Kristina is president of the advisory firm Intrinsic.
      • Cherine Chalaby is an expert on internet governance, international finance and technology, with extensive board experience. As Chairman of ICANN, he led development of the organization’s five-year strategic plan for 2021 to 2025.
      • Wanda Felton has over 30 years of experience in the financial services industry, including serving as Vice Chair of the Board and First Vice President of the Export-Import Bank of the United States.
      • Kate O’Regan is a former judge of the Constitutional Court of South Africa and commissioner of the Khayelitsha Commission. She is the inaugural director of the Bonavero Institute of Human Rights at the University of Oxford.
      • Robert Post is an American legal scholar and Professor of Law at Yale Law School, where he formerly served as Dean. He is a leading scholar of the First Amendment and freedom of speech.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 U.S.C. 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

EU Announces One Antitrust Action Against A Big Tech Firm and Previews Another

The EU commences one antitrust action against Amazon while investigating other possible violations.

The European Commission (EC) released a summary of its findings in one antitrust investigation against Amazon, finding enough evidence to proceed while also starting the process to investigate another alleged violation by the United States (U.S.) multinational. The EC started its investigation of Amazon in July 2019, and this action follows an announced investigation of Apple earlier this year. Also, the European Union (EU) has fined Google €8.2 billion cumulatively for three separate antitrust violations over the last three years. Moreover, the EC is readying a “Digital Markets Act” to update the EU’s competition laws.

Article 102 of the Treaty on the Functioning of the European Union (TFEU) bars a company from abusing its dominant market position. The EC asserts that Amazon holds a dominant market position and abuses it by using non-public sales data gathered from third-party sellers on its marketplace to undercut those same sellers. According to the EC, this is abuse in violation of Article 102, and it has issued a Statement of Objections. However, the process by which an antitrust action in the EU is brought is not finished at this stage. Amazon will have the opportunity to respond, and any final decision, particularly fines, must be approved by the Advisory Committee, which consists of the EU member states’ competition authorities.

In its press statement, the EC explained:

  • The European Commission has informed Amazon of its preliminary view that it has breached EU antitrust rules by distorting competition in online retail markets. The Commission takes issue with Amazon systematically relying on non-public business data of independent sellers who sell on its marketplace, to the benefit of Amazon’s own retail business, which directly competes with those third party sellers.
  • The Commission also opened a second formal antitrust investigation into the possible preferential treatment of Amazon’s own retail offers and those of marketplace sellers that use Amazon’s logistics and delivery services.

In its Statement of Objections, the EC further detailed its case that Amazon’s access to and use of private business data of third-party sellers for Amazon’s benefit distorts competition contrary to EU law:

  • Amazon has a dual role as a platform: (i) it provides a marketplace where independent sellers can sell products directly to consumers; and (ii) it sells products as a retailer on the same marketplace, in competition with those sellers.
  • As a marketplace service provider, Amazon has access to non-public business data of third party sellers such as the number of ordered and shipped units of products, the sellers’ revenues on the marketplace, the number of visits to sellers’ offers, data relating to shipping, to sellers’ past performance, and other consumer claims on products, including the activated guarantees.
  • The Commission’s preliminary findings show that very large quantities of non-public seller data are available to employees of Amazon’s retail business and flow directly into the automated systems of that business, which aggregate these data and use them to calibrate Amazon’s retail offers and strategic business decisions to the detriment of the other marketplace sellers. For example, it allows Amazon to focus its offers in the best-selling products across product categories and to adjust its offers in view of non-public data of competing sellers.
  • The Commission’s preliminary view, outlined in its Statement of Objections, is that the use of non-public marketplace seller data allows Amazon to avoid the normal risks of retail competition and to leverage its dominance in the market for the provision of marketplace services in France and Germany, the biggest markets for Amazon in the EU. If confirmed, this would infringe Article 102 of the TFEU that prohibits the abuse of a dominant market position.

The EC also launched a second inquiry into the platform’s practices that allegedly favor the company’s own items and the items of third-party sellers that use Amazon’s logistics and delivery services over other sellers’ offers. The EC explained it “opened a second antitrust investigation into Amazon’s business practices that might artificially favour its own retail offers and offers of marketplace sellers that use Amazon’s logistics and delivery services (the so-called “fulfilment by Amazon or FBA sellers”).” The EC continued:

  • In particular, the Commission will investigate whether the criteria that Amazon sets to select the winner of the “Buy Box” and to enable sellers to offer products to Prime users, under Amazon’s Prime loyalty programme, lead to preferential treatment of Amazon’s retail business or of the sellers that use Amazon’s logistics and delivery services.
  • The “Buy Box” is displayed prominently on Amazon’s websites and allows customers to add items from a specific retailer directly into their shopping carts. Winning the “Buy Box” (i.e. being chosen as the offer that features in this box) is crucial to marketplace sellers as the Buy Box prominently shows the offer of one single seller for a chosen product on Amazon’s marketplaces, and generates the vast majority of all sales. The other aspect of the investigation focusses on the possibility for marketplace sellers to effectively reach Prime users. Reaching these consumers is important to sellers because the number of Prime users is continuously growing and because they tend to generate more sales on Amazon’s marketplaces than non-Prime users.
  • If proven, the practice under investigation may breach Article 102 of the TFEU that prohibits the abuse of a dominant market position.

The EC’s antitrust action may be followed by one from the United States (U.S.) government. Media reports indicate that the Federal Trade Commission (FTC) is also investigating Amazon’s conduct vis-à-vis third-party sellers on its platform and could bring its own suit. However, the agency may lack the bandwidth and resources to do so if it proceeds with an antitrust action against Facebook, which is rumored to be filed by year’s end.

Moreover, the U.S. House of Representatives’ Judiciary Committee’s Antitrust, Commercial and Administrative Law Subcommittee’s “Investigation into Competition in Online Markets” detailed the same conduct the EU is alleging violates antitrust law:

One of the widely reported ways in which Amazon treats third-party sellers unfairly centers on Amazon’s asymmetric access to and use of third-party seller data. During the investigation, the Subcommittee heard repeated concerns that Amazon leverages its access to third-party sellers’ data to identify and replicate popular and profitable products from among the hundreds of millions of listings on its marketplace. Armed with this information, it appears that Amazon would: (1) copy the product to create a competing private-label product; or (2) identify and source the product directly from the manufacturer to free ride off the seller’s efforts, and then cut that seller out of the equation.

Amazon claims that it has no incentive to abuse sellers’ trust because third-party sales make up nearly 60% of its sales, and that Amazon’s first-party sales are relatively small. Amazon has similarly pointed out that third-party listings far outnumber Amazon’s first-party listings. In a recent shareholder letter, CEO Jeff Bezos wrote, “Third-party sellers are kicking our first-party butt. Badly.” In response to a question from the Subcommittee, however, Amazon admitted that by percentage of sales—a more telling measure—Amazon’s first-party sales are significant and growing in a number of categories. For example, in books, Amazon owns 74% of sales, whereas third-party sellers only account for 26% of sales. At the category level, it does not appear that third-party sellers are kicking Amazon’s first-party butt. Amazon may, in fact, be positioned to overtake its third-party sellers in several categories as its first-party business continues to grow.

As noted, earlier this year, the EC announced two antitrust investigations of Apple regarding allegations of unfair and anticompetitive practices with its App Store and Apple Pay.

In a press release, the EC announced it “has opened a formal antitrust investigation to assess whether Apple’s conduct in connection with Apple Pay violates EU competition rules…[that] concerns Apple’s terms, conditions and other measures for integrating Apple Pay in merchant apps and websites on iPhones and iPads, Apple’s limitation of access to the Near Field Communication (NFC) functionality (“tap and go”) on iPhones for payments in stores, and alleged refusals of access to Apple Pay.” The EC noted that “[f]ollowing a preliminary investigation, the Commission has concerns that Apple’s terms, conditions, and other measures related to the integration of Apple Pay for the purchase of goods and services on merchant apps and websites on iOS/iPadOS devices may distort competition and reduce choice and innovation.” The EC contended “Apple Pay is the only mobile payment solution that may access the NFC “tap and go” technology embedded on iOS mobile devices for payments in stores.” The EC revealed “[t]he investigation will also focus on alleged restrictions of access to Apple Pay for specific products of rivals on iOS and iPadOS smart mobile devices” and “will investigate the possible impact of Apple’s practices on competition in providing mobile payments solutions.”

In a press release issued the same day, the EC explained it had also “opened formal antitrust investigations to assess whether Apple’s rules for app developers on the distribution of apps via the App Store violate EU competition rules.” The EC said “[t]he investigations concern in particular the mandatory use of Apple’s own proprietary in-app purchase system and restrictions on the ability of developers to inform iPhone and iPad users of alternative cheaper purchasing possibilities outside of apps.” The EC added “[t]he investigations concern the application of these rules to all apps, which compete with Apple’s own apps and services in the European Economic Area (EEA)…[and] [t]he investigations follow-up on separate complaints by Spotify and by an e-book/audiobook distributor on the impact of the App Store rules on competition in music streaming and e-books/audiobooks.”

Finally, EU Executive Vice-President Margrethe Vestager recently gave a speech titled “Building trust in technology,” in which she previewed one long-awaited draft EU law on technology and another to address antitrust and anti-competitive practices of large technology companies. Vestager stated “in just a few weeks, we plan to publish two draft laws that will help to create a more trustworthy digital world.” Both drafts are expected on 2 December and represent key pieces of the new EU leadership’s Digital Strategy, the bloc’s initiative to update EU laws to account for changes in technology since the beginning of the century. The Digital Services Act will address and reform the legal treatment of both online commerce and online content. The draft Digital Markets Act would give the EC more tools to combat the same competition and market dominance issues posed by companies like Apple, Amazon, Facebook, and Google. Vestager stated:

  • So, to keep our markets fair and open to competition, it’s vital that we have the right toolkit in place. And that’s what the second set of rules we’re proposing – what we call the Digital Markets Act – is for. 
  • That proposal will have two pillars. The first of those pillars will be a clear list of dos and don’ts for big digital gatekeepers, based on our experience with the sorts of behaviour that can stop markets working well. 
  • For instance, the decisions that gatekeepers take, about how to rank different companies in search results, can make or break businesses in dozens of markets that depend on the platform. And if platforms also compete in those markets themselves, they can use their position as player and referee to help their own services succeed, at the expense of their rivals. For instance, gatekeepers might manipulate the way that they rank different businesses, to show their own services more visibly than their rivals’. So, the proposal that we’ll put forward in a few weeks’ time will aim to ban this particular type of unfair self-preferencing. 
  • We also know that these companies can collect a lot of data about companies that rely on their platform – data which they can then use, to compete against those very same companies in other markets. That can seriously damage fairness in these markets – which is why our proposal aims to ban big gatekeepers from misusing their business users’ data in that way. 
  • These clear dos and don’ts will allow us to act much faster and more effectively, to tackle behaviour that we know can stop markets working well. But we also need to be ready for new situations, where digitisation creates deep, structural failures in the way our markets work.  
  • Once a digital company gets to a certain size, with the big network of users and the huge collections of data that brings, it can be very hard for anyone else to compete – even if they develop a much better service. So, we face a constant risk that big companies will succeed in pushing markets to a tipping point, sending them on a rapid, unstoppable slide towards monopoly – and creating yet another powerful gatekeeper. 
  • One way to deal with risks like this would be to stop those emerging gatekeepers from locking users into their platform. That could mean, for instance, that those gatekeepers would have to make it easier for users to switch platform, or to use more than one service. That would keep the market open for competition, by making it easier for innovative rivals to compete. But right now, we don’t have the power to take this sort of action when we see these risks arising. 
  • It can also be difficult to deal with large companies which use the power of their platforms again and again, to take over one related market after another. We can deal with that issue with a series of cases – but the risk is that we’ll always find ourselves playing catch-up, while platforms move from market to market, using the same strategies to drive out their rivals. 
  • The risk, though, is that we’ll have a fragmented system, with different rules in different EU countries. That would make it hard to tackle huge platforms that operate throughout Europe, and to deal with other problems that you find in digital markets in many EU countries. And it would mean that businesses and consumers across Europe can’t all rely on the same protection. 
  • That’s why the second pillar of the Digital Markets Act would put a harmonised market investigation framework in place across the single market, giving us the power to tackle market failures like this in digital markets, and stop new ones from emerging. That would give us a harmonised set of rules that would allow us to investigate certain structural problems in digital markets. And if necessary, we could take action to make these markets contestable and competitive.


Photo by stein egil liland from Pexels