Further Reading, Other Developments, and Coming Events (18 February 2021)

Further Reading

  • “Google, Microsoft, Qualcomm Protest Nvidia’s Acquisition of Arm Ltd.” By David McLaughlin, Ian King, and Dina Bass — Bloomberg. Major United States (U.S.) tech multinationals are telling the U.S. government that Nvidia’s proposed purchase of Arm will hurt competition in the semiconductor market, an interesting position for an industry renowned for being acquisition hungry. Arm, the British firm, is a key player in the semiconductor business because it licenses its designs to companies across the industry, and the fear articulated by firms like Qualcomm, Microsoft, and Google is that Nvidia will cut supply and increase prices once it controls Arm. According to one report, Arm has designed the chip architecture for roughly 95% of the world’s smartphones and 95% of the chips made in the People’s Republic of China (PRC). The deal has to clear U.S., British, EU, and PRC regulators. In the U.S., the Federal Trade Commission (FTC) has reportedly made very large document requests, which indicates its interest in digging into the deal and suggests the possibility it may come out against the acquisition. The FTC may also be waiting to read the mood in Washington, as there is renewed, bipartisan concern about antitrust and competition and about the semiconductor industry. Finally, acting FTC Chair Rebecca Kelly Slaughter has come out against a lax approach to so-called vertical mergers such as the proposed Nvidia-Arm deal, which may well be the ultimate position of a Democratic FTC.
  • “Are Private Messaging Apps the Next Misinformation Hot Spot?” By Brian X. Chen and Kevin Roose — The New York Times. The conclusion these two tech writers reach is that, on balance, private messaging apps like Signal and Telegram are better for society than not. Moreover, they reason it is better to have extremists migrate from platforms like Facebook to apps where it is much harder to spread their views and proselytize.
  • “Amazon Has Transformed the Geography of Wealth and Power” By Vauhini Vara — The Atlantic. A harrowing view of the rise of Amazon cast against the decline of the middle class and the middle of the United States (U.S.). Correlation is not causation, of course, but the company has sped the decline of a number of industries and arguably a number of cities.
  • “Zuckerberg responds to Apple’s privacy policies: ‘We need to inflict pain’” By Samuel Axon — Ars Technica. Relations between the companies have worsened as their CEOs have taken personal shots at each other in public and private, culminating in Apple’s change to iOS requiring apps to obtain users’ consent before tracking them across the internet, which is Facebook’s bread and butter. Expect things to get worse, as both Tim Cook and Mark Zuckerberg think augmented reality or mixed reality is the next major frontier in tech, suggesting the competition may intensify.
  • “Inside the Making of Facebook’s Supreme Court” By Kate Klonick — The New Yorker. A very immersive piece on the genesis and design of the Facebook Oversight Board, originally conceived of as a supreme court for content moderation. However, not all content moderation decisions can be referred to the Board; in fact, only when Facebook decides to take down content does a person have a right to appeal. Otherwise, one must depend on the company’s beneficence. So, for example, if Facebook decided to leave up content that is racist toward Muslims, a Facebook user could not appeal the decision. Additionally, Board decisions are not precedential, which means, in plain English, that if the Board decides a takedown of, say, Nazi propaganda comports with Facebook’s rules, the company would not be obligated to take down similar Nazi content thereafter. This latter wrinkle will ultimately serve to limit the power of the Board. The piece quotes critics, including many involved with the design and establishment of the Board, who see the final form as little more than a fig leaf for public relations.

Other Developments

  • The Department of Health and Human Services (HHS) was taken to task by a federal appeals court in a blunt opinion decrying the agency’s failure to articulate even the most basic rationale for a multimillion-dollar fine of a major Houston hospital for its data security and data privacy violations. HHS’ Office for Civil Rights (OCR) had levied a $4.348 million fine on the University of Texas M.D. Anderson Cancer Center (M.D. Anderson) for violations of the regulations promulgated pursuant to the “Health Insurance Portability and Accountability Act of 1996” (P.L. 104-191) and the “Health Information Technology for Economic and Clinical Health Act” (HITECH Act) (P.L. 111-5) governing the security and privacy of certain classes of health information. M.D. Anderson appealed the decision, losing at each stage, until it reached the United States Court of Appeals for the Fifth Circuit (Fifth Circuit). In its ruling, the Fifth Circuit held that OCR’s “decision was arbitrary, capricious, and contrary to law.” The Fifth Circuit vacated the penalty and sent the matter back to HHS for further consideration.
    • In its opinion, the Fifth Circuit explained the facts:
      • First, back in 2012, an M.D. Anderson faculty member’s laptop was stolen. The laptop was not encrypted or password-protected but contained “electronic protected health information (ePHI) for 29,021 individuals.” Second, also in 2012, an M.D. Anderson trainee lost an unencrypted USB thumb drive during her evening commute. That thumb drive contained ePHI for over 2,000 individuals. Finally, in 2013, a visiting researcher at M.D. Anderson misplaced another unencrypted USB thumb drive, this time containing ePHI for nearly 3,600 individuals.
      • M.D. Anderson disclosed these incidents to HHS. Then HHS determined that M.D. Anderson had violated two federal regulations. HHS promulgated both of those regulations under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Health Information Technology for Economic and Clinical Health Act of 2009 (the “HITECH Act”). The first regulation requires entities covered by HIPAA and the HITECH Act to “[i]mplement a mechanism to encrypt” ePHI or adopt some other “reasonable and appropriate” method to limit access to patient data. 45 C.F.R. §§ 164.312(a)(2)(iv), 164.306(d) (the “Encryption Rule”). The second regulation prohibits the unpermitted disclosure of protected health information. Id. § 164.502(a) (the “Disclosure Rule”).
      • HHS also determined that M.D. Anderson had “reasonable cause” to know that it had violated the rules. 42 U.S.C. § 1320d-5(a)(1)(B) (setting out the “reasonable cause” culpability standard). So, in a purported exercise of its power under 42 U.S.C. § 1320d-5 (HIPAA’s enforcement provision), HHS assessed daily penalties of $1,348,000 for the Encryption Rule violations, $1,500,000 for the 2012 Disclosure Rule violations, and $1,500,000 for the 2013 Disclosure Rule violations. In total, HHS imposed a civil monetary penalty (“CMP” or “penalty”) of $4,348,000.
      • M.D. Anderson unsuccessfully worked its way through two levels of administrative appeals. Then it petitioned our court for review. See 42 U.S.C. § 1320a-7a(e) (authorizing judicial review). After M.D. Anderson filed its petition, the Government conceded that it could not defend its penalty and asked us to reduce it by a factor of 10 to $450,000.
  • The Australian Senate Standing Committee for the Scrutiny of Bills has weighed in on both the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020 and the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020, two major legislative proposals put forth in December 2020. This committee plays a special role in legislating in the Senate, for it must “scrutinise each bill introduced into the Parliament as to whether the bills, by express words or otherwise:
    • (i)  trespass unduly on personal rights and liberties;
    • (ii)  make rights, liberties or obligations unduly dependent upon insufficiently defined administrative powers;
    • (iii)  make rights, liberties or obligations unduly dependent upon non-reviewable decisions;
    • (iv)  inappropriately delegate legislative powers; or
    • (v)  insufficiently subject the exercise of legislative power to parliamentary scrutiny.”
    • Regarding the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020 (see here for analysis), the committee explained:
      • The bill seeks to amend the Surveillance Devices Act 2004 (SD Act), the Crimes Act 1914 (Crimes Act) and associated legislation to introduce three new types of warrants available to the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC) for investigating and disrupting online crime. These are:
        • data disruption warrants, which enable the AFP and the ACIC to modify, add, copy or delete data for the purposes of frustrating the commission of serious offences online;
        • network activity warrants, which permit access to devices and networks used by suspected criminal networks; and
        • account takeover warrants, which provide the AFP and the ACIC with the ability to take control of a person’s online account for the purposes of gathering evidence to further a criminal investigation.
    • The committee flagged concerns about the bill in these categories:
      • Authorisation of coercive powers
        • Issuing authority
        • Time period for warrants
        • Mandatory considerations
        • Broad scope of offences
      • Use of coercive powers without a warrant
        • Emergency authorisations
      • Innocent third parties
        • Access to third party computers, communications in transit and account-based data
        • Compelling third parties to provide information
        • Broad definition of ‘criminal network of individuals’
      • Use of information obtained through warrant processes
        • Prohibitions on use
        • Storage and destruction of records
      • Presumption of innocence—certificate constitutes prima facie evidence
      • Reversal of evidential burden of proof
      • Broad delegation of administrative powers
        • Appropriate authorising officers of the ACIC
    • The committee asked for the following feedback from the government on the bill:
      • The committee requests the minister’s detailed advice as to:
        • why it is considered necessary and appropriate to enable law enforcement officers to disrupt or access data or take over an online account without a warrant in certain emergency situations (noting the coercive and intrusive nature of these powers and the ability to seek a warrant via the telephone, fax or email);
        • the appropriateness of retaining information obtained under an emergency authorisation that is subsequently not approved by a judge or AAT member; and
        • the appropriateness of enabling law enforcement agencies to act to conceal any thing done under a warrant after the warrant has ceased to be in force, and whether the bill could be amended to provide a process for obtaining a separate concealment of access warrant if the original warrant has ceased to be in force.
      • The committee requests the minister’s detailed advice as to:
        • the effect of Schedules 1-3 on the privacy rights of third parties and a detailed justification for the intrusion on those rights, in particular:
        • why proposed sections 27KE and 27KP do not specifically require the judge or nominated AAT member to consider the privacy implications for third parties of authorising access to a third party computer or communication in transit;
        • why the requirement that an issuing authority be satisfied that an assistance order is justifiable and proportionate, having regard to the offences to which it would relate, only applies to an assistance order with respect to data disruption warrants, and not to all warrants; and
        • whether the breadth of the definitions of ‘electronically linked group of individuals’ and ‘criminal network of individuals’ can be narrowed to reduce the potential for intrusion on the privacy rights of innocent third parties.
    • The committee requests the minister’s detailed advice as to:
      • whether all of the exceptions to the restrictions on the use, recording or disclosure of protected information obtained under the warrants are appropriate and whether any exceptions are drafted in broader terms than is strictly necessary; and
      • why the bill does not require review of the continued need for the retention of records or reports comprising protected information on a more regular basis than a period of five years.
    • As the explanatory materials do not adequately address these issues, the committee requests the minister’s detailed advice as to:
      • why it is considered necessary and appropriate to provide for evidentiary certificates to be issued in connection with a data disruption warrant or emergency authorisation, a network access warrant, or an account takeover warrant;
      • the circumstances in which it is intended that evidentiary certificates would be issued, including the nature of any relevant proceedings; and
      • the impact that issuing evidentiary certificates may have on individuals’ rights and liberties, including on the ability of individuals to challenge the lawfulness of actions taken by law enforcement agencies.
    • As the explanatory materials do not address this issue, the committee requests the minister’s advice as to why it is proposed to use offence-specific defences (which reverse the evidential burden of proof) in this instance. The committee’s consideration of the appropriateness of a provision which reverses the burden of proof is assisted if it explicitly addresses relevant principles as set out in the Guide to Framing Commonwealth Offences.
    • The committee requests the minister’s advice as to why it is considered necessary to allow for executive level members of staff of the ACIC to be ‘appropriate authorising officers’, in particular with reference to the committee’s scrutiny concerns in relation to the use of coercive powers without judicial authorisation under an emergency authorisation.
    • Regarding the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020, the committee asserted the bill “seeks to establish a mandatory code of conduct to support the sustainability of the Australian news media sector by addressing bargaining power imbalances between digital platforms and Australian news businesses.” The committee requested less input on this bill:
      • The committee requests the Treasurer’s advice as to why it is considered necessary and appropriate to leave the determination of which digital platforms must participate in the News Media and Digital Platforms Mandatory Bargaining Code to delegated legislation.
      • If it is considered appropriate to leave this matter to delegated legislation, the committee requests the Treasurer’s advice as to whether the bill can be amended to require the positive approval of each House of the Parliament before determinations made under proposed section 52E come into effect.
  • The European Data Protection Board (EDPB) issued a statement “on new draft provisions of the second additional protocol to the Council of Europe Convention on Cybercrime (Budapest Convention),” the second time it has weighed in on the rewrite of “the first international treaty on crimes committed via the Internet and other computer networks, dealing particularly with infringements of copyright, computer-related fraud, child pornography and violations of network security.” The EDPB took issue with the process of meeting and drafting new provisions:
    • Following up on the publication of new draft provisions of the second additional protocol to the Budapest Convention, the EDPB therefore, once again, wishes to provide an expert and constructive contribution with a view to ensure that data protection considerations are duly taken into account in the overall drafting process of the additional protocol, considering that the meetings dedicated to the preparation of the additional protocol are being held in closed sessions and that the direct involvement of data protection authorities in the drafting process has not been foreseen in the T-CY Terms of Reference.
    • The EDPB offered itself again as a resource and key stakeholder that needs to be involved with the effort:
      • In November 2019, the EDPB also published its latest contribution to the consultation on a draft second additional protocol, indicating that it remained available for further contributions and called for an early and more proactive involvement of data protection authorities in the preparation of these specific provisions, in order to ensure an optimal understanding and consideration of data protection safeguards (emphasis in the original).
    • The EDPB further asserted:
      • The EDPB remains fully aware that situations where judicial and law enforcement authorities are faced with a “cross-border situation” with regards to access to personal data as part of their investigations can be a challenging reality and recognises the legitimate objective of enhancing international cooperation on cybercrime and access to information. In parallel, the EDPB reiterates that the protection of personal data and legal certainty must be guaranteed, thus contributing to the objective of establishing sustainable arrangements for the sharing of personal data with third countries for law enforcement purposes, which are fully compatible with the EU Treaties and the Charter of Fundamental Rights of the EU. The EDPB furthermore considers it essential to frame the preparation of the additional protocol within the framework of the Council of Europe core values and principles, and in particular human rights and the rule of law.
  • The European Commission (EC) published a statement on how artificial intelligence (AI) “can transform Europe’s health sector.” The EC sketched out legislation it hopes to introduce soon to regulate AI in the European Union (EU). The EC asserted:
    • A high-standard health system, rich health data and a strong research and innovation ecosystem are Europe’s key assets that can help transform its health sector and make the EU a global leader in health-related artificial intelligence applications. 
    • The use of artificial intelligence (AI) applications in healthcare is increasing rapidly.
    • Before the COVID-19 pandemic, challenges linked to our ageing populations and shortages of healthcare professionals were already driving up the adoption of AI technologies in healthcare. 
    • The pandemic has only accelerated this trend. Real-time contact tracing apps are just one example of the many AI applications used to monitor the spread of the virus and to reinforce the public health response to it.
    • AI and robotics are also key for the development and manufacturing of new vaccines against COVID-19.
    • The European Commission is currently preparing a comprehensive package of measures to address issues posed by the introduction of AI, including a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems, as well as rules on liability related to new technologies.
  • The House Energy and Commerce Committee Chair Frank Pallone, Jr. (D-NJ) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) wrote to Apple CEO Tim Cook “urging review and improvement of Apple’s new App Privacy labels in light of recent reports suggesting they are often misleading or inaccurate.” Pallone and Schakowsky are working from a Washington Post article, in which the paper’s tech columnist learned that Apple’s purported ratings system to inform consumers about the privacy practices of apps is largely illusory and possibly illegally deceptive. Pallone and Schakowsky asserted:
    • According to recent reports, App Privacy labels can be highly misleading or blatantly false. Using software that logs data transmitted to trackers, a reporter discovered that approximately one third of evaluated apps that said they did not collect data had inaccurate labels. For example, a travel app labeled as collecting no data was sending identifiers and other data to a massive search engine and social media company, an app-analytics company, and even a Russian Internet company. A ‘slime simulator’ rated for ages 4 and older had a ‘Data Not Collected’ label, even though the app shares identifying information with major tech companies and shared data about the phone’s battery level, storage, general location, and volume level with a video game software development company.
    • Simplifying and enhancing privacy disclosures is a laudable goal, but consumer trust in privacy labeling approaches may be undermined if Apple’s App Privacy labels disseminate false and misleading information. Without meaningful, accurate information, Apple’s tool of illumination and transparency may become a source of consumer confusion and harm. False and misleading privacy labels can dupe privacy-conscious consumers into downloading data-intensive apps, ultimately eroding the credibility and integrity of the labels. A privacy label without credibility and integrity also may dull the competitive forces encouraging app developers to improve their data practices.
    • A privacy label is no protection if it is false. We urge Apple to improve the validity of its App Privacy labels to ensure consumers are provided meaningful information about their apps’ data practices and that consumers are not harmed by these potentially deceptive practices.
    • Pallone and Schakowsky stated “[t]o better understand Apple’s practices with respect to the privacy labels, we request that you provide written response to the following questions by February 23, 2021”:
      • 1. Apple has stated that it conducts routine and ongoing audits of the information provided by developers and works with developers to correct any inaccuracies.
        • a. Please detail the process by which Apple audits the privacy information provided by app developers. Please explain how frequently audits are conducted, the criteria by which Apple selects which apps to audit, and the methods for verifying the accuracy of the privacy information provided by apps.
        • b. How many apps have been audited since the implementation of the App Privacy label? Of those, how many were found to have provided inaccurate or misleading information? 
      • 2. Does Apple ensure that App Privacy labels are corrected upon the discovery of inaccuracies or misleading information? If not, why not? For each app that has been found to have provided inaccurate or misleading information, how quickly was that label corrected?
      • 3. Please detail Apple’s enforcement policies when an app fails to provide accurate privacy information for the App Privacy label.
      • 4. Does Apple require more in-depth privacy disclosures and conduct more stringent oversight of apps targeted to children under the age of 13? If not, why not? If so, please describe the additional disclosures required and the oversight actions employed for these apps.
      • 5. Providing clear and easily comprehendible privacy information at the point of sale is certainly valuable, but privacy policies are not static. Does Apple notify users when one of their app’s privacy labels has materially changed? If not, why not? If so, how are users notified of such changes?
  • The United Kingdom’s Department for Digital, Culture, Media & Sport (DCMS) “published its draft rules of the road for governing the future use of digital identities…[and] [i]t is part of plans to make it quicker and easier for people to verify themselves using modern technology and create a process as trusted as using passports or bank statements” according to its press release. The DCMS wants feedback by 11 March 2021 on the draft trust framework. The DCMS stated:
    • Digital identity products allow people to prove who they are, where they live or how old they are. They are set to revolutionise transactions such as buying a house, when people are often required to prove their identity multiple times to a bank, conveyancer or estate agent, and buying age-restricted goods online or in person.
    • The new ‘trust framework’ lays out the draft rules of the road organisations should follow. It includes the principles, policies, procedures and standards governing the use of digital identity to allow for the sharing of information to check people’s identities or personal details, such as a user’s address or age, in a trusted and consistent way. This will enable interoperability and increase public confidence.
    • The framework, once finalised, is expected to be brought into law. It has specific standards and requirements for organisations which provide or use digital identity services including:
      • Having a data management policy which explains how they create, obtain, disclose, protect, and delete data;
      • Following industry standards and best practice for information security and encryption;
      • Telling the user if any changes, for example an update to their address, have been made to their digital identity;
      • Where appropriate, having a detailed account recovery process and notifying users if organisations suspect someone has fraudulently accessed their account or used their digital identity;
      • Following guidance on how to choose secure authenticators for their service.
  • The European Commission (EC) “opened infringement procedures against 24 Member States for failing to enact new EU telecom rules.”
    • The EC asserted:
      • The European Electronic Communications Code modernises the European regulatory framework for electronic communications, to enhance consumers’ choices and rights, for example by ensuring clearer contracts, quality of services, and competitive markets. The Code also ensures higher standards of communication services, including more efficient and accessible emergency communications. Furthermore, it allows operators to benefit from rules incentivising investments in very-high capacity networks, as well as from enhanced regulatory predictability, leading to more innovative digital services and infrastructures.
      • The European Electronic Communications Code that brings the regulatory framework governing the European telecom sector up to date with the new challenges came into force in December 2018, and Member States have had two years to implement its rules. It is a central piece of legislation to achieve Europe’s Gigabit society and ensure full participation of all EU citizens in the digital economy and society.

Coming Events

  • On 18 February, the House Financial Services Committee will hold a hearing titled “Game Stopped? Who Wins and Loses When Short Sellers, Social Media, and Retail Investors Collide” with Reddit Co-Founder and Chief Executive Officer Steve Huffman testifying along with other witnesses.
  • The U.S.-China Economic and Security Review Commission will hold a hearing titled “Deterring PRC Aggression Toward Taiwan” on 18 February.
  • On 24 February, the House Energy and Commerce Committee’s Communications and Technology Subcommittee will hold a hearing titled “Fanning the Flames: Disinformation and Extremism in the Media.”
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Estúdio Bloom on Unsplash

Further Reading, Other Developments, and Coming Events (11 February 2021)

Further Reading

  • “3G Could End This Year. For People Who Rely on Basic Phones, That’s a Big Problem.” By Hannah Frishberg — OneZero. The major telecommunications carriers will soon shut down their 3G coverage, and with it, the last of the “dumb” phones will theoretically stop working. There are other issues, however: in some rural areas, 4G is spotty where it is available at all.
  • “‘It let white supremacists organize’: the toxic legacy of Facebook’s Groups” By Kari Paul — The Guardian. Who knew that stacking up dry wood, dousing it in lighter fluid, and keeping an open flame nearby would lead to bad results? In the same vein, who knew that putting together an algorithm that pushed people to join groups, the prevalence of extremist and white supremacist groups, and little to no oversight or policing of these groups would result in an explosion of radicalization on Facebook? Only Nostradamus could have seen this coming. And, shockingly, experts and critics of Facebook are not impressed with the latest layout of deck chairs on the proverbial Titanic in response to the extremism the platform helped bring about.
  • “World Wide Web inventor Tim Berners-Lee takes on Google, Facebook, Amazon to fix the internet” By Michael Braga — USA Today. Tim Berners-Lee and John Bruce have started Inrupt, a company built around a new paradigm that would allow people to store their personal data in pods that platforms would have to request permission to use. They are banking that this shift could lead to a decline in the dominance of Google, Apple, Facebook, Amazon, and Microsoft (GAFAM).
  • “Biden’s whole-of-National Security Council strategy” By Bethany Allen-Ebrahimian — Axios. This is a good overview of how the National Security Council has been remade to focus on the People’s Republic of China (PRC) across its entire remit. How this translates into policy remains to be seen.
  • “Amazon’s anti-union blitz stalks Alabama warehouse workers everywhere, even the bathroom” By Jay Greene — The Washington Post. As it has in the past, Amazon is going all out to stop a facility in Alabama from forming a union. Ballots are currently being cast by mail. If a union is certified, it would be the first in the United States at an Amazon facility.

Other Developments

  • 37 Democratic Senators wrote to the acting chair of the Federal Communications Commission (FCC), urging her to “utilize the E-Rate program to start bridging the ‘homework gap’ without delay.” A few days earlier, the FCC announced that it is “seeking comment on several petitions requesting permission to use E-Rate program funds to support remote learning during the pandemic.” Comments are due by 16 February and reply comments are due by 23 February. Nonetheless, the group of Senators, led by Senator Ed Markey (D-MA) and new Senate Commerce, Science, and Transportation Committee Chair Maria Cantwell (D-WA), asserted to acting FCC Chair Jessica Rosenworcel:
    • As we approach the one-year anniversary of this public health crisis, studies indicate that as many as 12 million children in the United States still lack internet access at home and are unable to participate in online learning. These students are disproportionally from communities of color, low-income households, Tribal lands, and rural areas. Despite our repeated call to address this homework gap, your predecessor at the FCC refused to use the emergency authority available to the Chair and resources available through the E-Rate program to connect these vulnerable children. This mistake allowed far too many students to fall behind in their education.
    • We appreciate that you have already recognized the FCC’s ability to act, including by asserting in congressional testimony that “the FCC could use E-Rate right now to provide every school library with Wi-Fi hotspots and other connectivity devices to loan out to students who lack reliable internet access at home.” In accordance with this statement, we urge you to now use your new leadership of the FCC to depart from the prior Commission’s erroneous position. Specifically, we request that you leverage the E-Rate program to begin providing connectivity and devices for remote learning. Although the funds currently available through the E-Rate will not be enough to connect every student across the country, your prompt action would provide an essential down payment. From there, Congress must provide the resources needed to finish the job by passing our Emergency Educational Connections Act, legislation that would appropriate billions more to be delivered through the E-Rate program to help close the homework gap during the pandemic.
  • Two Senators and Eight Representatives, all Democrats, “asked the National Security Agency (NSA) to explain the NSA’s actions to protect the government from supply chain attacks, like the recent SolarWinds hack, in which malicious code is snuck into commercial software used by the government” per their press release. They recited the history of a compromised encryption algorithm the NSA pressed the National Institute of Standards and Technology (NIST) to publish as a government standard even though it contained a backdoor the NSA created. Juniper, a networking company, started using this encryption algorithm a few years afterwards without knowing of the NSA’s action. The letter presses the NSA to turn over information about the subsequent hack of Juniper, which the Members implicitly compare to SolarWinds. Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) and Representatives Pramila Jayapal (D-WA), Tom Malinowski (D-NJ), Ted Lieu (D-CA), Stephen Lynch (D-MA), Bill Foster (D-IL), Suzan DelBene (D-WA), Yvette Clarke (D-NY), and Anna Eshoo (D-CA) signed the letter. They claimed:
    • The recent SolarWinds hack has brought attention to the vulnerability of the government to supply chain attacks. However, five years ago another vendor to the U.S. government – Juniper Networks – revealed it also inadvertently delivered software updates containing malicious code. 
    • In 2015, Juniper revealed a security breach in which hackers modified the software the company delivered to its customers. Researchers subsequently discovered that Juniper had been using an NSA-designed encryption algorithm, which experts had long argued contained a backdoor, and that the hackers modified the key to this backdoor.
    • However, despite promising a full investigation after it announced the breach, Juniper has never publicly accounted for the incident.
    • The Members “asked the NSA to answer the following questions:”
      • After Juniper’s 2015 public disclosure that it inadvertently delivered software updates and products to customers containing malicious code, what actions did NSA take to protect itself, the Department of Defense, and the U.S. government from future software supply chain hacks? For each action, please identify why it was not successful in preventing the compromise of numerous government agencies in 2020 by a malware-laden update delivered by SolarWinds.
      • In the summer of 2018, during an unclassified briefing with Senator Wyden’s office, senior NSA officials revealed the existence of a “lessons learned” report on the Dual_EC_DRBG algorithm. Senator Wyden’s office has repeatedly requested this report, but NSA has yet to provide it. Please provide us with a copy of this report and any official historical reports that describe this algorithm, its development, and subsequent exploitation.
      • At the time that NSA submitted Dual_EC_DRBG to NIST for certification, did NSA know the algorithm contained a backdoor?
      • According to the NIST cryptographer’s postmortem, NSA informed NIST in 2005 that it selected the “Q” value that was published in the NIST Dual_EC_DRBG standard in a “secure, classified way.” Was this statement accurate? Please explain.
      • Juniper has confirmed that it added support for Dual_EC_DRBG “at the request of a customer,” but refused to identify that customer, or even confirm whether that customer was a U.S. government agency. Did NSA request that Juniper include in its products the Dual_EC_DRBG algorithm, P and Q values which were different from those published by NIST, or another NSA-designed encryption standard named Extended Random?
      • What statutory legal authority, if any, would permit NSA to introduce vulnerabilities into U.S. government approved algorithms certified by NIST and to keep those vulnerabilities hidden from NIST?
      • Would efforts by NSA to introduce backdoors or other vulnerabilities into government standards require the approval of the NSA Director, an inter-agency consultation, including input from the Cybersecurity and Infrastructure Security Agency, the Department of Commerce, the Federal Trade Commission, and the Federal Communications Commission? Would they require notification to the Congressional intelligence committees or an order from the Foreign Intelligence Surveillance Court? If no, please explain why.
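The trapdoor in Dual_EC_DRBG that the letter references can be illustrated with a toy sketch. The parameters below are illustrative assumptions, not the real standard: this uses a textbook elliptic curve over F_11 (not the NIST P-256 constants) and publishes whole output points, whereas real Dual_EC_DRBG truncates its output. The sketch shows only the core algebra: if the designer of the standard knows a secret d relating the two published points (G = d·Q), a single output reveals the generator's next internal state.

```python
# Toy Dual_EC_DRBG-style generator on y^2 = x^3 + x + 6 over F_11.
# The subgroup generated by G = (2, 7) has prime order 13.
P_MOD, A = 11, 1
G = (2, 7)
N = 13

def ec_add(p1, p2):
    """Add two curve points (None represents the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    """Scalar multiplication by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

# The designer secretly picks d and publishes Q = d^-1 * G, so G = d * Q.
d = 4
Q = ec_mul(pow(d, -1, N), G)
assert ec_mul(d, Q) == G

def drbg_step(state):
    """One round: publish state*Q, advance state to x(state*G)."""
    return ec_mul(state, Q), ec_mul(state, G)[0]

# An honest user seeds the generator (seed 6 is arbitrary) and runs it.
out1, state = drbg_step(6)
out2, state = drbg_step(state)

# An attacker who knows d recovers the state from ONE public output:
# d * out1 = d * (s * Q) = s * (d * Q) = s * G, whose x-coordinate is
# exactly the generator's next internal state.
recovered_state = ec_mul(d, out1)[0]
predicted_out2, _ = drbg_step(recovered_state)
assert predicted_out2 == out2  # all future output is now predictable
print("attacker predicted next output:", predicted_out2)
```

This is why the choice of Q matters so much: anyone who generated Q from G with a secret scalar holds a skeleton key to every deployment of the generator, which is what the Members' question about how the "Q" value was selected is driving at.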
  • The National Telecommunications and Information Administration (NTIA) has been holding a series of “Tribal Consultations for input on implementation of the Tribal Broadband Connectivity Program (TBCP),” a program seeded with $1 billion in the “Consolidated Appropriations Act, 2021” (P.L. 116-260).
    • In a letter, the NTIA explained:
      • The Act directs NTIA to make grants available to eligible entities within short time frames. NTIA is committed to holding consultation sessions expeditiously to ensure that your input informs the new grant program prior to the application process. In accordance with Commerce’s tribal consultation policy, I am inviting you and/or a tribal representative to participate in the virtual National Tribal Consultation to provide your advice and insights as NTIA staff are working through the critical issues related to the program.
    • In its presentation on the TBCP, the NTIA explained the provisions in the Consolidated Appropriations Act, 2021:
      • Section 905(c)(5) stipulates the following eligible uses of grant funds:
        • broadband infrastructure deployment, including support for the establishment of carrier-neutral submarine cable landing stations;
        • affordable broadband programs, including providing free or reduced-cost broadband service and preventing disconnection of existing broadband service;
        • distance learning;
        • telehealth;
        • digital inclusion efforts; and
        • broadband adoption activities.
      • Section 905(c)(6) caps the amount of grant funds to be used for administrative expenses:
        • An eligible entity may use not more than 2 percent of grant funds received under this subsection for administrative purposes.
      • Section 905(c)(8) provides information about broadband infrastructure deployment:
        • In using grant funds received under this subsection for new construction of broadband infrastructure, an eligible entity shall prioritize projects that deploy broadband infrastructure to unserved households.
      • Section 905(c)(3)(A) mandates that grant funds be awarded on an equitable basis:
        • The amounts appropriated under subsection (b)(1) shall be made available to eligible entities on an equitable basis, and not less than 3 percent of those amounts shall be made available for the benefit of Native Hawaiians.
  • The Department of Health and Human Services (HHS) issued an “Artificial Intelligence (AI) Strategy” that establishes an AI Council “to support AI governance, strategy execution, and development of strategic AI priorities across the enterprise…[and] has complementary objectives to:
    • Communicate and champion the Department’s AI vision and ambition
    • Execute and govern the implementation of the enterprise AI strategy and key strategic priorities to scale AI across the Department
    • HHS further explained:
      • To achieve HHS’s ambition, this enterprise AI strategy will set forth an approach and focus areas intended to encourage and enable Department-wide familiarity, comfort, and fluency with AI technology and its potential (AI adoption), the application of best practices and lessons learned from piloting and implementing AI capabilities to additional domains and use cases across HHS (AI scaling), and increased speed at which HHS adopts and scales AI (AI acceleration).
      • Ultimately, this strategy is the first step towards transforming HHS into an AI fueled enterprise. This strategy lays the foundation upon which the AI Council can use to drive change across the Department by encouraging the application of AI to promote advances in the sciences, public health, and social services—improving the quality of life for all Americans.
  • The New York State Department of Financial Services (NYDFS) issued “a new Cyber Insurance Risk Framework…[that] outlines industry best practices for New York-regulated property/casualty insurers that write cyber insurance to effectively manage their cyber insurance risk.” The NYDFS claimed the framework “is the first guidance by a U.S. regulator on cyber insurance” in its press release. NYDFS asserted:
    • The Framework is a result of DFS’s ongoing dialogue with the insurance industry and experts on cyber insurance, including meetings with insurers, insurance producers, cyber experts, and insurance regulators across the U.S. and Europe.  Building on DFS’s longstanding work fostering a strong and resilient insurance market that protects New Yorkers, the Framework furthers DFS’s commitment to improving cybersecurity for consumers and the industry.  DFS’s first-in-the-nation Cybersecurity Regulation took effect in March 2017.  In 2019, DFS was also the first financial services regulator to create a Cybersecurity Division to oversee all aspects of its cybersecurity regulation and policy.
    • The NYDFS claimed:
      • The growing risk makes cyber insurance protection more important than ever, while at the same time creating new challenges for insurers managing that risk.  DFS advises New York-regulated property/casualty insurers offering cyber insurance to establish a formal strategy for measuring cyber insurance risk that is directed and approved by its board or other governing entity.  The strategy should be proportionate with each insurer’s risk based on the insurer’s size, resources, geographic distribution, and other factors. Insurers are encouraged to incorporate the following best practices into their risk strategy:
      • Manage and eliminate exposure to “silent” cyber insurance risk, which results from an insurer’s obligation to cover loss from a cyber incident under a policy that does not explicitly mention cyber incidents;
      • Evaluate systemic risk, including the impact of catastrophic cyber events on third party service providers like the recently discovered SolarWinds supply chain attack;
      • Rigorously measure insured risk by using a data-driven approach to assess potential gaps and vulnerabilities in insureds’ cybersecurity;
      • Educate insureds and insurance producers about the value of cybersecurity measures and the need for, benefits of, and limitations to cyber insurance;
      • Obtain cybersecurity expertise through strategic recruiting and hiring practices; and
      • Require notice to law enforcement in the event of a cyber attack.
  • The National Counterintelligence and Security Center (NCSC) published a fact sheet titled “China’s Collection Of Genomic And Other Healthcare Data From America: Risks To Privacy And U.S. Economic And National Security.” The NCSC stated:
    • Would you want your DNA or other healthcare data going to an authoritarian regime with a record of exploiting DNA for repression and surveillance? For years, the People’s Republic of China (PRC) has collected large healthcare data sets from the U.S. and nations around the globe, through both legal and illegal means, for purposes only it can control. While no one begrudges a nation conducting research to improve medical treatments, the PRC’s mass collection of DNA at home has helped it carry out human rights abuses against domestic minority groups and support state surveillance. The PRC’s collection of healthcare data from America poses equally serious risks, not only to the privacy of Americans, but also to the economic and national security of the U.S.
    • The NCSC identified the “Implications for Privacy and U.S. National Security:”
      • China’s access to U.S. healthcare and genomic data poses serious privacy and national security risks to the U.S.
        • Through its cyber intrusions in recent years, the PRC has already obtained the Personal Identifying Information (PII) of much of the U.S. population.
        • Recent breaches attributed to the PRC government or to cyber actors based in China include the theft of personnel records of roughly 21 million individuals from the U.S. Office of Personnel Management; the theft from Marriott hotels of roughly 400 million records; the theft of data from Equifax on roughly 145 million people; and the theft of data from Anthem on roughly 78 million people.
      • Furthermore, under the PRC’s national security laws, Chinese companies are compelled to share data they have collected with the PRC government. Article 7 of China’s 2017 National Intelligence Law, for instance, mandates that all Chinese companies and citizens shall support, assist, and cooperate with Chinese national intelligence efforts, and guard the secrecy of any national intelligence work that they are aware of. There is no mechanism for Chinese companies to refuse their government’s requests for data.
      • The combination of stolen PII, personal health information, and large genomic data sets collected from abroad affords the PRC vast opportunities to precisely target individuals in foreign governments, private industries, or other sectors for potential surveillance, manipulation, or extortion.
        • For instance, vulnerabilities in specific individuals revealed by genomic data or health records could be used to help target these individuals. Data associated with an embarrassing addiction or mental illness could be leveraged for blackmail. Combine this information with stolen credit data indicating bankruptcy or major debt and the tools for exerting leverage increase. Such data sets could help the PRC not only recruit individuals abroad, but also act against foreign dissidents.
    • The NCSC also named the “Economic Implications for the United States:”
      • Aside from these immediate privacy risks, China’s access to U.S. health and genomic data poses long-term economic challenges for the United States.
      • The PRC’s acquisition of U.S. healthcare data is helping to fuel China’s Artificial Intelligence and precision medicine industries, while the PRC severely restricts U.S. and other foreign access to such data from China, putting America’s roughly $100 billion biotech industry at a disadvantage.
      • Over time, this dynamic could allow China to outpace U.S. biotech firms with important new drugs and health treatments and potentially displace American firms as global biotech leaders.
      • Although new medicines coming out of China could benefit U.S. patients, America could be left more dependent on Chinese innovation and drug development for its cures, leading to a transfer of wealth, co-opting of new businesses and greater job opportunities in China.
  • The New York University Stern Center for Business and Human Rights (Center) issued a report titled “False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives” that concludes “[e]ven anecdotal evidence of supposed bias tends to crumble under close examination.” The Center stated:
    • Conservatives commonly accuse the major social media companies of censoring the political right. In response to Twitter’s decision on January 8, 2021, to exclude him from the platform, then-President Donald Trump accused the company of “banning free speech” in coordination with “the Democrats and Radical Left.”
    • This accusation—that social media platforms suppress conservatives— riles a Republican base that has long distrusted the mainstream media and is prone to seeing public events as being shaped by murky liberal plots. On a policy level, the bias claim serves as a basis for Republican attacks on Section 230 of the Communications Decency Act, the federal law that protects platforms from liability associated with user posts and content moderation decisions.
    • But the claim of anti-conservative animus is itself a form of disinformation: a falsehood with no reliable evidence to support it. No trustworthy large-scale studies have determined that conservative content is being removed for ideological reasons or that searches are being manipulated to favor liberal interests.
    • The Center offered these recommendations:
      • For the social media industry:
        • 1) Provide greater disclosure for content moderation actions. The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions. Greater transparency—such as that which Twitter and Facebook offered when they took action against President Trump in January—would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.
        • 2) Offer users a choice among content moderation algorithms. Users would have greater agency if they were offered a menu of choices among algorithms. Under this system, each user would be given the option of retaining the existing moderation algorithm or choosing one that screens out harmful content more vigorously. The latter option also would provide enhanced engagement by human moderators operating under more restrictive policies. If users had the ability to select from among several systems, they would be empowered to choose an algorithm that reflects their values and preferences.
        • 3) Undertake more vigorous, targeted human moderation of influential accounts. To avoid high-profile moderation mistakes, the platforms should significantly increase the number of full-time employees working directly for them who would help to create a more rigorous human-led moderation channel for the most influential accounts. To supervise this and other important issues related to policing content, we recommend that the platforms each hire a senior executive—a content overseer—who reports directly to the CEO or COO.
        • 4) Release more data for researchers. More granular disclosure would allow academics and civil society researchers to identify enforcement patterns, such as whether content is being removed for ideological reasons. This greater transparency should include the nature of any content that is removed, the particular rule(s) a post violated, how the platform became aware of noncompliance (user report versus algorithmic moderation), and how any appeals were resolved.
      • For the Biden administration:
        • 5) Pursue a constructive reform agenda for social media. This will require the federal government to press Facebook, Google, and Twitter to improve content policies and their enforcement, even as the government pursues pending antitrust lawsuits against Facebook and Google. The industry, for its part, must strive with urgency to do a better job of protecting users and society at large from harmful content—progress that can’t wait for the resolution of what might be years-long antitrust court battles.
        • 6) Work with Congress to update Section 230. The controversial law should be amended so that its liability shield is conditional, based on social media companies’ acceptance of a range of new responsibilities related to policing content. One of the new platform obligations could be ensuring that algorithms involved in content ranking and recommendation not favor sensationalistic or unreliable material in pursuit of user engagement.
        • 7) Create a new Digital Regulatory Agency. The false claim of anti-conservative bias has contributed to widespread distrust of the platforms’ willingness and ability to govern their sites. A new independent authority, charged with enforcing the responsibilities of a revised Section 230, could begin to rebuild that eroded trust. As an alternative, expanded jurisdiction and funding for social media oversight could be directed to an existing agency such as the Federal Trade Commission or Federal Communications Commission.

Coming Events

  • The House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee will hold a hearing titled “Justice Restored: Ending Forced Arbitration and Protecting Fundamental Rights” on 11 February.
  • The Federal Communications Commission’s (FCC) acting Chair Jessica Rosenworcel will hold a virtual Roundtable on the Emergency Broadband Benefit Program on 12 February, “a new program that would enable eligible households to receive a discount on the cost of broadband service and certain connected devices during the COVID-19 pandemic.” The FCC also noted “[i]n the Consolidated Appropriations Act of 2021, Congress appropriated $3.2 billion” for the program.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by cottonbro from Pexels

Another Democratic Section 230 Bill

The SAFE TECH Act addresses Section 230 issues that bills introduced in the last Congress largely did not.

Three Democratic Senators have introduced a new bill to reform 47 USC 230 (Section 230), among the first major bills of this Congress to address the liability protection social media platforms and other technology companies enjoy. Senate Intelligence Committee Chair Mark Warner (D-VA), Senator Mazie Hirono (D-HI), and Senator Amy Klobuchar (D-MN) released the “Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act” (S.299) “to reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination on their platforms,” their press release explains. Warner, Hirono, and Klobuchar made available bill text, a three-page summary, frequently asked questions, and a redline.

The bill reads like a Section 230 wish list for the left, which would generally like to see the immunity of technology companies narrowed in order to create incentives for them to better police certain types of harmful speech.

Of course, this bill and a number of the Republican bills introduced last year come at Section 230 from different, perhaps even conflicting, directions, leaving observers and experts to wonder how, and if, compromise is possible. One must also wonder what sort of Frankenstein bill would emerge from a compromise, whether the incentive structure for technology companies would be distorted, and whether the second- and third-order effects would be ones no one foresees or wants.

Incidentally, a note for Congress. Warner’s office made available a redline version of the legislation (i.e., a version showing changes to existing law in red) that makes understanding the bill much easier. I often make my own redlines, but I humbly suggest that the House and Senate change their rules to require that all bills provide redlines. Anyway, back to the bill.

Section 230, of course, gives the Facebooks, Twitters, Googles, YouTubes, etc. of the world very broad liability protection in that they cannot be sued for the vast majority of content third parties post to their platforms. Consequently, all sorts of harassing and, quite frankly, defamatory material can be posted, and the platforms can decline to remove said content without fear they will face a lawsuit. For example, The New York Times recently published an article about a woman who has accused others of criminal and unethical conduct without evidence, and platforms such as Google did nothing for years even though this defamation had real-world effects on the objects of her scorn.

The other side of the Section 230 coin, as some have argued, is that narrowing significantly or removing the liability protection would result in platforms removing immediately any content that might incur litigation, a stance that would likely fall hardest on those out of power and without resources.

Warner, Hirono, and Klobuchar asserted:

These changes to Section 230 do not guarantee that platforms will be held liable in all, or even most, cases. Proposed changes do not subject platforms to strict liability; and the current legal standards for plaintiffs still present steep obstacles. Rather, these reforms ensure that victims have an opportunity to raise claims without Section 230 serving as a categorical bar to their efforts to seek legal redress for harms they suffer – even when directly enabled by a platform’s actions or design.

The SAFE TECH Act would change Section 230 in a number of notable ways. First, in a nod to First Amendment issues, the crucial language in current law would be changed from “information” to “speech,” setting the stage for a world in which speech protected under the First Amendment would continue to be protected under Section 230. Hence, Twitter could not be sued if someone claims President Joe Biden is an idiot or has implemented the wrong policy on an issue.

Moreover, language appended to the last clause in Section 230(c) would also move certain speech outside the current legal shield. Any speech that the provider or user has been paid to make available could lead to litigation, for the provider would no longer be immune for this class of speech. And so, if the Proud Boys paid a troll farm to slur Senator Ron Wyden (D-OR) on the basis of his Jewish heritage, say by claiming his allegiance is to Israel and not the United States, any platform hosting this content could be sued by Wyden for defamation, among other possible grounds.

Likewise, platforms would no longer have liability protection for advertisements others pay for and place.

The SAFE TECH Act makes clear that platforms can seek to fend off lawsuits through an affirmative defense by proving they are not the entity that created or disseminated the offensive information in question. The bill would make the liability shield an affirmative defense the platform bears the burden of proving by a preponderance of the evidence (i.e., more likely than not), the standard used in most civil actions. The intent behind this change appears to be giving platforms an incentive to better record who posts what. A likely second-order effect is that it may become easier to track down those posting abusive or illegal content if posting anonymously becomes much harder.

At present, platforms have so-called Good Samaritan liability protection that bars lawsuits against them for moderating and even taking down content. The SAFE TECH Act would pare back that protection in cases where a court has ordered the platform to remove or make unavailable content through an injunction issued on the basis of irreparable harm. Moreover, a platform’s compliance with such an injunction cannot give rise to a lawsuit, and so platforms would be shielded from retaliatory litigation from the party that posted the content.

Like Representative Yvette Clarke’s (D-NY) discussion draft, the “Civil Rights Modernization Act of 2021,” (see here for more analysis), the SAFE TECH Act removes Section 230 liability in lawsuits alleging the content posted on a platform violates a federal or state civil rights law. This provision is short and worth quoting in full:

Nothing in this section shall be construed to limit, impair, or prevent any action alleging discrimination on the basis of any protected class, or conduct that has the effect or consequence of discriminating on the basis of any protected class, under any Federal or state law.

The language reaching conduct that merely has the “effect or consequence” of discriminating will almost certainly be a non-starter for Republicans, most of whom object to making conduct illegal that lacks discriminatory intent but results in de facto discrimination. Keeping in mind, as always, that at least 10 Republican votes would be needed to pass such a bill, it seems likely this language would be left on the cutting room floor. Still, this is the sort of language left-wing and Democratic advocates would like to see, and the sponsors may have included it knowing it would probably not survive Republican objections. Giving the other party a victory in removing language like this may allow the primary parts of the bill to get enacted.

There are other carve outs of the Section 230 liability shield. First, platforms could be sued under federal or state laws barring “stalking, cyberstalking, harassment, cyberharassment, or intimidation based in whole or in part on sex (including sexual orientation and gender identity), race, color, religion, ancestry, national origin, or physical or mental disability.” Right now, Section 230 stops people from suing, say, Reddit for harassing material. In the aforementioned Times horror story, Pinterest and WordPress removed the defamatory content only after a reporter contacted them, while Google ultimately decided not to do so. If this provision of the SAFE TECH Act becomes law, such platforms would face lawsuits for failing to take down such material. I wonder, however, whether the terms used in this provision would cover child pornography, non-consensual pornography, revenge pornography, and similar content. Perhaps those types of content would be considered harassment or cyberharassment.

Another carve out would allow non-U.S. nationals to sue in federal courts alleging injuries on the basis of content posted on a platform under the Alien Tort Claims Act (ATCA) (28 USC 1350), which is usually used to allege violations of human rights. Warner, Hirono, and Klobuchar cite “the survivors of the Rohingya genocide” in Myanmar, who would be able to sue platforms over the inflammatory material some refused to take down that fed the genocidal activities of the Burmese Army and government. Facebook, in particular, was flagged for being unresponsive to requests to take down this sort of content.

Finally, if a person is bringing a civil suit for a wrongful death, Twitter, Facebook, Parler, Reddit, and others could be sued for actions they took or did not take that may have contributed to or led to the death in question.

Another provision would address the use of Section 230 as a defense against antitrust actions, a novel deployment of a provision meant to protect platforms from lawsuits about the content others post:

Nothing in this section shall be construed to prevent, impair, or limit any action brought under State or Federal antitrust laws.

In the FAQ, Warner, Hirono, and Klobuchar explained the rationale for this language:

Internet platforms and other tech companies have pushed the bounds of Section 230 in an effort to immunize themselves from all manner of activity. Just last year, a leading cyber-security firm claimed Section 230 immunized it against a claim it had engaged in anticompetitive conduct to harm a competitor and pursued its claim all the way to the Supreme Court.

This may well be an issue on which Democrats and Republicans can agree, as evidenced by the Trump Administration’s Department of Justice recommendations on reforming Section 230, which state:

A fourth category of reform is to make clear that federal antitrust claims are not, and were never intended to be, covered by Section 230 immunity.  Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players.  It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

Finally, another approach put forth by a key Democratic stakeholder may prove preferable to the SAFE TECH Act, for it homes in on the process by which platforms moderate content. These platforms would need to publish clear and fair processes and then live by them. Moreover, this bill would require platforms to take down content as ordered by courts.

Last summer, Senator Brian Schatz (D-HI) and then Senate Majority Whip John Thune (R-SD) introduced the “Platform Accountability and Consumer Transparency (PACT) Act” (S.4066) (see here for more analysis.) According to Schatz and Thune’s press release, the PACT Act would strengthen transparency in the process online platforms use to moderate content and hold those companies accountable for content that violates their own policies or is illegal. Schatz and Thune claimed the “PACT Act creates more transparency by:

  • Requiring online platforms to explain their content moderation practices in an acceptable use policy that is easily accessible to consumers;
  • Implementing a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized; and
  • Promoting open collaboration and sharing of industry best practices and guidelines through a National Institute of Standards and Technology-led voluntary framework.

They asserted “[t]he PACT Act will hold platforms accountable by:

  • Requiring large online platforms to provide process protections to consumers by having a defined complaint system that processes reports and notifies users of moderation decisions within 14 days, and allows consumers to appeal online platforms’ content moderation decisions within the relevant company;
  • Amending Section 230 to require large online platforms to remove court-determined illegal content and activity within 24 hours; and
  • Allowing small online platforms to have more flexibility in responding to user complaints, removing illegal content, and acting on illegal activity, based on their size and capacity.

Schatz and Thune stated that “[t]he PACT Act will protect consumers by:

  • Exempting the enforcement of federal civil laws from Section 230 so that online platforms cannot use it as a defense when federal regulators, like the Department of Justice and Federal Trade Commission, pursue civil actions for online activity;
  • Allowing state attorneys general to enforce federal civil laws against online platforms that have the same substantive elements of the laws and regulations of that state; and
  • Requiring the Government Accountability Office to study and report on the viability of an FTC-administered whistleblower program for employees or contractors of online platforms.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Karsten Winegeart on Unsplash

Further Reading, Other Developments, and Coming Events (8 February 2021)

Further Reading

  • ‘A kiss of death’: Top GOP tech critics are personae non gratae after election challenge” By Cristiano Lima — Politico. I take these articles with a block of salt, not least because many inside-the-Beltway articles lack perspective and a sense of history. For sure, in the short term the Josh Hawleys and Ted Cruzes of the world are radioactive to Democrats, but months down the road things will look different, especially if Democrats need votes or allies in the Senate. For example, former Senator David Vitter’s (R-LA) involvement with prostitutes made him radioactive for some time, and then all was forgotten because he held a valuable currency: a vote.
  • I Talked to the Cassandra of the Internet Age” By Charlie Warzel — The New York Times. A sobering read on the implications of the attention economy. We would all be helped by slowing down and choosing what to focus on.
  • A Vast Web of Vengeance” By Kashmir Hill — The New York Times. A true horror story illustrating the power platforms give anyone to slander others. The more these sorts of stories move to the fore of the consciousness of policymakers, the greater the chances of reform to 47 USC 230 (Section 230), which many companies used to deny requests that they take down defamatory, untrue material.
  • Amazon says government demands for user data spiked by 800% in 2020” By Zack Whittaker — TechCrunch. In an interesting development, Germany far outpaced the United States (U.S.) in information requests to Amazon between 1 July and 31 December 2020, except for requests to Amazon Web Services (AWS), where the U.S. accounted for 75% of requests. It bears noting there were over 27,000 non-AWS requests and only 523 AWS requests.
  • Russian hack brings changes, uncertainty to US court system” By MaryClaire Dale — Associated Press. Because the Administrative Office of United States (U.S.) Courts may have been part of the massive SolarWinds hack, lawyers involved with cases that have national security aspects may no longer file materials electronically. It appears these cases will go old school, with paper filings only, stored on computers in federal courts that have no connection to the internet. However, it is apparently believed at present that the Foreign Intelligence Surveillance Court system was not compromised by the Russians.

Other Developments

  • Senator Ted Cruz (R-TX) placed a hold on Secretary of Commerce designate Gina Raimondo’s nomination, explaining on Twitter: “I’ll lift the hold when the Biden admin commits to keep the massive Chinese Communist Party spy operation Huawei on the Entity List.” Cruz was one of three Republicans to vote against reporting out Raimondo’s nomination from the Senate Commerce, Science, and Transportation Committee. Even though the Ranking Member, Senator Roger Wicker (R-MS), voted to advance her nomination to the Senate floor, he, too, articulated concerns about Raimondo and the Biden Administration’s refusal to commit to keeping Huawei on the Department of Commerce’s Entity List, a designation that cuts off needed technology and products from the company from the People’s Republic of China (PRC). Wicker said “I do remain concerned about the Governor’s reluctance to state unequivocally that she intends to keep Huawei on the department’s entity list…[and] [k]eeping Huawei on this list is important for the security of our networks and I urge the Governor and the administration to make its position clear.” Of course, the continuing Republican focus on the PRC is an effort to box in the Biden Administration and force it to maintain the Trump Administration’s policies. The new administration has refused to make hard commitments on the PRC thus far and will likely pursue different tactics than the Trump Administration, even though there will likely be agreement on the threat posed by the PRC and its companies.
  • Virginia’s “Consumer Data Protection Act” (SB 1392/HB 2307) advanced from the Virginia Senate to the House of Delegates by a 36-0-1 vote on 5 February. The package was sent to the Communications, Technology and Innovation Subcommittee in the House on 7 February. Last week, it appeared as if the legislature would not have time to finish work on the United States’ second privacy law, but Governor Ralph Northam (D) convened a special session right before the legislature was set to adjourn. Now, there will be more time to address this bill and other priorities.
  • Senators Brian Schatz (D-HI), Deb Fischer (R-NE), Richard Blumenthal (D-CT), Rick Scott (R-FL) and Jacky Rosen (D-NV) introduced “The Safe Connections Act” “to help survivors of domestic violence and other crimes cut ties with their abusers and separate from shared wireless service plans, which can be exploited to monitor, stalk, or control victims” per their press release. The Senators asserted “the Safe Connections Act would help them stay safe and connected by:
    • Allowing survivors to separate a mobile phone line from any shared plan involving an abuser without penalties or other requirements. This includes lines of any dependents in their care;
    • Requiring the Federal Communications Commission (FCC) to initiate a rulemaking proceeding to seek comment on how to help survivors who separate from a shared plan enroll in the Lifeline Program for up to six months as they become financially stable; and
    • Requiring the FCC to establish rules that would ensure any calls or texts to hotlines do not appear on call logs.
  • The European Commission’s Directorate-General for Justice and Consumers issued the “Report on the implementation of specific provisions of Regulation (EU) 2016/679,” the General Data Protection Regulation (GDPR), in which it was determined that implementation of these provisions at the member state level is uneven. The implication of this assessment, released some 2.5 years after the GDPR took effect, is that it may be some time yet before each European Union member state has made the statutory and policy changes necessary to give the data protection regime full effect. The Directorate-General stated that “[t]he following general observations can be made in relation to the implementation of the GDPR clauses under assessment:
    • As regards Article 8(1) GDPR (i.e., Conditions applicable to child’s consent in relation to information society services), the majority of the Member States have set an age limit lower than 16 years of age for the validity of the consent of a minor in relation to information society services. Nine Member States set the age limit at 16 years of age, while eight Member States opted for that of 13 years, six for that of 14 years and three for 15 years.
    • With respect to Article 9(4) GDPR (i.e., Processing of special categories of personal data), most Member States provide for conditions/limitations with regard to the processing of genetic data, biometric data or data concerning health. Such limitations/conditions typically consist in listing the categories of persons who have access to such data, ensuring that they are subject to confidentiality obligations, or making processing subject to prior authorisation from the competent national authority. No national provision restricting or prohibiting the free movement of personal data within the European Union has been identified.
    • As regards Article 23(1) GDPR, and irrespective of the areas of public interest assessed under Article 23(1)(c) and (e) GDPR (i.e. public security, public administration, public health, taxation and migration), some Member States provide for restrictions in the area of (i) social security; or (ii) supervision of financial market participants, functioning of the guarantee systems and resolution and macroeconomic analyses. Concerning Article 23(1)(c) GDPR, the majority of Member States allow for restrictions of various provisions referred to in Article 23(1) GDPR. Normally there is a general reference to public security, while more specific areas of processing include the processing of personal data for the investigation and prosecution of crimes, and the use of video cameras for surveillance. Most commonly, the restrictions apply only where certain conditions are met. In some Member States the proportionality and necessity test is not contemplated at all, while in most Member States it is established in law, rather than left to the data controller. The overwhelming majority of Member States do not sufficiently implement the conditions and safeguards under Article 23(2) GDPR.
    • As regards Article 23(1)(e) GDPR in relation to public administration, half of the Member States provide for restrictions for such purpose. Normally there is a general reference to general public interest or public administration, while more specific areas of processing include discussions of the Council of Ministers and investigation of judicial or ‘administrative’ police authorities in connection with the commission of a crime or administrative infringement. Most commonly, the restrictions apply only where certain conditions are met. In some Member States the proportionality and necessity test is not contemplated at all, whereas in some other Member States the test is established in law or left to the data controller. No Member State implements all conditions and safeguards under Article 23(2) GDPR.
    • As regards Article 23(1)(e) GDPR in relation to public health, a minority of the Member States provide for restrictions for such purpose. Normally there is a general reference to public health or general public interest, while more specific areas of processing include the security of food chain and medical files. In most Member States, the applicable restrictions apply only where certain conditions are met. The proportionality and necessity test is generally established in the law. No Member State implements all conditions and safeguards under Article 23(2) GDPR.
    • With respect to Article 23(1)(e) GDPR in relation to taxation, a sizeable number of Member States provide restrictions for such purposes. There tends to be a general reference to taxation or general public interest, while more specific areas of processing include recovery of taxes, as well as automated tax data transfer procedures. Normally, the applicable restrictions apply only where certain conditions are met. The proportionality and necessity test is generally left to the data controller. No Member State implements all conditions and safeguards under Article 23(2) GDPR.
    • As regards Article 23(1)(e) GDPR in relation to migration, a minority of the Member States provide for restrictions for such purpose. Normally there is a general reference to migration or general public interest. The applicable restrictions tend to apply only where certain conditions are met. The proportionality and necessity test is generally left to the data controller. No Member State implements all conditions and safeguards under Article 23(2) GDPR.
    • As regards Article 85(1) GDPR (which requires Member States to reconcile by law the right to the protection of personal data with the right to freedom of expression and information), the majority of the Member States provide for provisions aiming to reconcile the right to the protection of personal data with the right to freedom of expression and information. These provisions are usually in the national data protection act implementing the GDPR, however, in some instances there are also specific provisions in media laws to this effect.
    • With respect to Article 85(2) GDPR (Reconciliation of the right to the protection of personal data with the right to freedom of expression and information), most Member States provide exemptions/derogations from the rules set out in Chapters II, III, IV, V, VI, VII and IX GDPR. More often than not, no specific balancing or reconciliation test is identified in the national legislation. A detailed account of the exemptions/derogations can be found in Annex 2 – Implementation of Article 85(2) GDPR.
  • The United Kingdom’s (UK) Information Commissioner’s Office (ICO) announced it is resuming its “investigation into real time bidding (RTB) and the adtech industry,” which had been paused in response to the COVID-19 pandemic. Simon McDougall, ICO Deputy Commissioner – Regulatory Innovation and Technology, stated in a blog posting:
    • Enabling transparency and protecting vulnerable citizens are priorities for the ICO. The complex system of RTB can use people’s sensitive personal data to serve adverts and requires people’s explicit consent, which is not happening right now.
    • Sharing people’s data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties, also raises questions around the security and retention of this data.
    • Our work will continue with a series of audits focusing on data management platforms* and we will be issuing assessment notices to specific companies in the coming months. The outcome of these audits will give us a clearer picture of the state of the industry.
    • The investigation is vast and complex and, because of the sensitivity of the work, there will be times where it won’t be possible to provide regular updates. However, we are committed to publishing our final findings, once the investigation is concluded.
    • We are also continuing to work with the Competition and Markets Authority (CMA) in considering Google’s Privacy Sandbox proposals to phase out support for third party cookies on Chrome.
  • Washington State Representative Shelley Kloba (D) and cosponsors introduced a bill, HB 1303, to establish a data brokers registry in Washington state that would also levy a 1.8% tax on gross revenue from selling personal data. In her press release, Kloba stated:
    • We are spending more and more of our lives on our phones and devices. From this has arisen a new business model where brokers collect, analyze, and resell personal data collected from applications on our phones and other devices. Currently, this type of business is totally unregulated and untaxed, and these businesses are reselling information with no compensation to the people of Washington. My legislation would shine a light on this very active segment of our economy while also establishing a small tax on the companies that profit from selling our personal data. Brokers that make money from collecting our personal information should contribute their fair share in tax revenue, and there should be more transparency on the number of businesses engaged in this industry.
    • HB 1303 would:
      • Impose a 1.8% Business & Occupation (B&O) tax on gross income arising from the sale of personal data.
      • Require companies that engage in this type of economic activity to register annually with the Department of Revenue (DOR).
      • Require DOR to provide the Legislature with an annual report on this information.
    • Recently, Kloba and cosponsors introduced the “People’s Privacy Act” (HB 1433), a bill to establish a privacy and data protection regime in Washington state. (see here for analysis.)
  • The Federal Trade Commission (FTC) used recently granted authority to police the use of algorithms and automated processes to buy tickets for entertainment and sporting events. The “Better Online Ticket Sales (BOTS) Act” (P.L. 114-274) “was enacted in 2016 and gives the FTC authority to take law enforcement action against individuals and companies that use bots or other means to circumvent limits on online ticket purchases” per the agency’s press release. The FTC stated it is taking “legal action against three ticket brokers based in New York who allegedly used automated software to illegally buy up tens of thousands of tickets for popular concerts and sporting events, then subsequently made millions of dollars reselling the tickets to fans at higher prices.” The FTC added:
    • The three ticket brokers will be subject to a judgment of more than $31 million in civil penalties for violating the Better Online Ticket Sales (BOTS) Act, under a proposed settlement reached with the FTC. Due to their inability to pay, the judgment will be partially suspended, requiring them to pay $3.7 million.
    • The FTC explained “[u]nder the terms of the proposed orders, judgments will be entered against the defendants for civil penalties as follows:
  • The National Institute of Standards and Technology (NIST) pushed back the deadline for comments until 26 February 2021 for four guidance documents on the Internet of Things:
    • Draft NIST SP 800-213, IoT Device Cybersecurity Guidance for the Federal Government: Establishing IoT Device Cybersecurity Requirements, has background and recommendations to help federal agencies consider how an IoT device they plan to acquire can integrate into a federal information system. IoT devices and their support for security controls are presented in the context of organizational and system risk management. SP 800-213 provides guidance on considering system security from the device perspective. This allows for the identification of IoT device cybersecurity requirements—the abilities and actions a federal agency will expect from an IoT device and its manufacturer and/or third parties, respectively.
    • Draft NISTIR 8259B, IoT Non-Technical Supporting Capability Core Baseline, complements the NISTIR 8259A device cybersecurity core baseline by detailing additional, non-technical supporting activities typically needed from manufacturers and/or associated third parties. This non-technical baseline collects and makes explicit supporting capabilities like documentation, training, customer feedback, etc.
    • Draft NISTIR 8259C, Creating a Profile Using the IoT Core Baseline and Non-Technical Baseline, describes a process, usable by any organization, that starts with the core baselines provided in NISTIRs 8259A and 8259B and explains how to integrate those baselines with organization- or application-specific requirements (e.g., industry standards, regulatory guidance) to develop an IoT cybersecurity profile suitable for specific IoT device customers or applications. The process in NISTIR 8259C guides organizations needing to define a more detailed set of capabilities responding to the concerns of a specific sector, based on some authoritative source such as a standard or other guidance, and could be used by organizations seeking to procure IoT technology or by manufacturers looking to match their products to customer requirements.
    • Draft NISTIR 8259D, Profile Using the IoT Core Baseline and Non-Technical Baseline for the Federal Government, provides a worked example result of applying the NISTIR 8259C process, focused on the federal government customer space, where the requirements of the FISMA process and the SP 800-53 security and privacy controls catalog are the essential guidance. NISTIR 8259D provides a device-centric, cybersecurity-oriented profile of the NISTIR 8259A and 8259B core baselines, calibrated against the FISMA low baseline described in NIST SP 800-53B as an example of the criteria for minimal securability for federal use cases.
  • The New York State Department of Financial Services (NYDFS) announced “[r]egulated entities and licensed persons must file the Certification of Compliance for the calendar year 2020 by April 15, 2021.” These certificates are due under the NYDFS’ cybersecurity regulations, with which most financial services companies in the state must comply. These regulations took effect in May 2017.

Coming Events

  • On 10 February, the House Homeland Security Committee will hold a hearing titled “Homeland Cybersecurity: Assessing Cyber Threats and Building Resilience” with these witnesses:
    • Mr. Chris Krebs, Former Director, Cybersecurity and Infrastructure Security Agency, U.S. Department of Homeland Security
    • Ms. Sue Gordon, Former Principal Deputy Director of National Intelligence, Office of the Director of National Intelligence
    • Mr. Michael Daniel, President & CEO, Cyber Threat Alliance
    • Mr. Dmitri Alperovitch, Executive Chairman, Silverado Policy Accelerator
  • The House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee will hold a hearing titled “Justice Restored: Ending Forced Arbitration and Protecting Fundamental Rights.”
  • The Federal Communications Commission’s (FCC) acting Chair Jessica Rosenworcel will hold a virtual Roundtable on the Emergency Broadband Benefit Program on 12 February, “a new program that would enable eligible households to receive a discount on the cost of broadband service and certain connected devices during the COVID-19 pandemic.” The FCC also noted “[i]n the Consolidated Appropriations Act of 2021, Congress appropriated $3.2 billion” for the program.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July 2021, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.


Photo by Martin Ceralde on Unsplash

Further Reading, Other Developments, and Coming Events (3 February 2021)

Further Reading

  • What We Learned From Apple’s New Privacy Labels” By Brian X. Chen — The New York Times. Another look at the App Store privacy labels Apple has rolled out and how confusing they can be. Comparing the privacy and data usage practices of different developers is often like comparing apples and oranges.
  • The U.S. Spent $2.2 Million on a Cybersecurity System That Wasn’t Implemented — and Might Have Stopped a Major Hack” By Peter Elkind and Jack Gillum — ProPublica. A free program developed with funding provided by the National Science Foundation (NSF) would have likely made it harder for the SVR to penetrate SolarWinds’ systems and use their updates as Trojan Horses to penetrate thousands of entities, including United States departments and agencies. No one has a good explanation of why this program was not made mandatory in federal systems and for federal contractors.
  • Suspected Chinese hackers used SolarWinds bug to spy on U.S. payroll agency – sources” By Christopher Bing, Jack Stubbs, Raphael Satter, and Joseph Menn — Reuters. Speaking of SolarWinds, it appears hackers associated with the People’s Republic of China (PRC) may have also penetrated and then used the company’s software to get into United States (U.S.) government systems. In this case, it appears a bureau inside the Department of Agriculture that handles payroll information for federal employees was compromised. And, as unlikely as it seems, this entity, the National Finance Center, handles the payroll for a number of agencies with security responsibilities, including the Federal Bureau of Investigation and the Departments of Homeland Security, State, and Treasury. This mirrors the PRC’s monumental hack of the Office of Personnel Management during the Obama Administration, which continues to have implications today, especially in making it harder for American intelligence operatives to work overseas. More concerning is that the PRC hackers used a different vulnerability than the Russians did.
  • Important stories hidden in Google’s ‘experiment’ blocking Australian news sites” By Nick Evershed — The Guardian. The search engine and online advertising giant has already begun experiments on blocking or deprioritizing search results ahead of the enactment of the “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020” that would require Google and Facebook to pay for the use of Australian media content. Major news sites are sometimes not findable, nor are articles on those sites, even when people search for them directly. Google claims this is just an experiment to gather data.
  • In cyber espionage, U.S. is both hunted and hunter” By Zach Dorfman — Axios. This piece makes the argument that whatever the Russian Federation and the People’s Republic of China have pilfered via SolarWinds vulnerabilities, United States (U.S.) hackers have and are engaging in the same activities.
  • Most Tools Failed to Detect the SolarWinds Malware. Those That Did Failed Too” By Rob Knake — Council on Foreign Relations. This piece covers some of the misalignment of incentives that may have caused some companies that successfully fended off the SolarWinds hack from sharing information so other companies could defend themselves. The author even suggests the time may have arrived for mandatory information sharing through a government hub such as the Cybersecurity and Infrastructure Security Agency (CISA).

Other Developments

  • Alejandro Mayorkas was confirmed by a 56-43 vote to be the next Secretary of Homeland Security, a position that has not been filled with a Senate-confirmed nominee since former Secretary Kirstjen Nielsen resigned in April 2019. Mayorkas’ nomination had been subject to a hold placed by Senator Josh Hawley (R-MO) over potential Biden Administration immigration policies. However, to date, the White House has not named its nominee to head the Cybersecurity and Infrastructure Security Agency (CISA) or to fill the newly established National Cyber Director position.
  • The new top Republican on the House Energy and Commerce Committee issued her “Big Tech Accountability Platform,” in which she cast “Big Tech” as “a destructive force to our society because of its attack on freedom of speech and the truth….principles…central to the foundations of our democracy and the Promise of America.” Ranking Member Cathy McMorris Rodgers (R-WA) laid out her priorities as the leader of the minority party on the primary committee of jurisdiction over technology in the House of Representatives. However, she conspicuously omitted any mention of privacy legislation and a number of other legislative areas. A year ago, McMorris Rodgers, then the ranking member on the Consumer Protection and Commerce Subcommittee, issued a privacy discussion draft with Chair Jan Schakowsky (D-IL) (see here for more analysis.) It is not clear from McMorris Rodgers’ policy statement the degree to which she is interested in working with the majority on the committee, in the House, and in the Senate on privacy legislation. The omission of privacy from her document may be a way of preserving maximum flexibility on federal privacy legislation and signaling to Democrats she wants to work with them. Nevertheless, McMorris Rodgers repeats the by now Republican orthodoxy that “Big Tech” is biased against them and is trampling their free speech rights in violation of the First Amendment despite no serious evidence of this being true.
    • Still, McMorris Rodgers suggested to the Republican Members of the committee that they seek to work in bipartisan fashion with Democrats on legislation and proposed a sunset provision for 47 USC 230 (Section 230), which would end the legal shield’s protection at a future date.
    • McMorris Rodgers stated “[o]ur Big Tech Accountability Platform will be guided by four principles: 1) increasing meaningful transparency; 2) enhancing oversight and accountability; 3) pushing for consistency and objectivity; and 4) exploring competition issues so innovation is unleashed, not quashed.”
    • McMorris Rodgers identified the “BIG TECH ISSUES TO BE ADDRESSED:”
      • Big Tech Responsibility:
        • Section 230 Reform: Consider several proposals requiring Big Tech to manage their platforms more responsibly, including repealing their liability protection when they neglect their “Good Samaritan” obligations;
        • Content Policies and Enforcement: Require disclosures regarding how Big Tech develops its content policies and require regular disclosures about content policy enforcement, including the types of content taken down and why, and clearly understood appeals processes;
        • Law Enforcement: Establish concrete means for Big Tech to communicate, consult, and coordinate with law enforcement to address illicit content on their platform, such as illegal sale of opioids, terrorist and violent extremists’ content, and other issues. We must ensure online threats are acted upon and evidence preserved;
        • Our Children: Explore and expose how Big Tech hurts children, including how Big Tech contributes to suicides and anxiety, especially in young girls; how Big Tech uses algorithms to drive addiction; and the role Big Tech plays in child grooming and trafficking;
        • Election Issues: Explore the role Big Tech plays in elections, particularly when it comes to their bias and censorship of news articles, such as the New York Post article they suppressed leading up to the 2020 election; and
        • Deplatforming: Explore ways in which Big Tech makes decisions to deplatform users and whether some remedy to challenge those decisions should be available.
      • Big Tech Power:
        • App Stores: Explore Apple and Google’s app store policies, including how their decisions to remove or host certain apps limit or increase consumer choice;
        • Coordination: Explore how Big Tech wields its power and the groupthink that develops to silence the truth;
        • Media: Explore how Big Tech influences traditional media, including local media, how their power restricts consumer choice, and how they wield that power to build a narrative and control the stories we see online;
        • Data: Explore Big Tech’s mass accumulation of data and how it impacts new entrants’ ability to compete and create consumer choice; and
        • E-Commerce Marketplace Power: Explore how Big Tech wields its e-commerce power over consumer choice.
  • House Foreign Affairs Committee Ranking Member Michael McCaul (R-TX), House Armed Services Committee Ranking Member Mike Rogers (R-AL), Representative Elise Stefanik (R-NY), and 22 other House Republicans have written President Joe Biden “to engage with our allies on emerging technology issues” because “China is undoubtedly the greatest military, economic, and geopolitical threat to the United States and our allies in this century, as exemplified by the Chinese Communist Party’s (CCP) effort to lead the world in critical emerging technologies like 5G communications and artificial intelligence.”

Coming Events

  • On 3 February, the Senate Commerce, Science, and Transportation Committee will consider the nomination of Rhode Island Governor Gina Raimondo to be the Secretary of Commerce.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July 2021, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Peter H from Pixabay

Further Reading, Other Developments, and Coming Events (2 February 2021)

Further Reading

  • I checked Apple’s new privacy ‘nutrition labels.’ Many were false.” By Geoffrey Fowler — The Washington Post. It turns out the blue check mark in Apple’s App Store signifying that an app does not collect personal data is based on the honor system. As the Post’s technology columnist learned, Apple tells users this in very small print: “This information has not been verified by Apple.” And so, as Fowler explains, this would seem contrary to the company’s claims of making user privacy a core value. Also, Apple’s definition of tracking is narrow, suggesting the company may be defining its way to being a champion of privacy. Finally, Apple’s practices, set against the coming changes to iOS designed to defeat Facebook’s and others’ tracking of people across digital space, seem to belie the company’s PR and branding. One would think the Federal Trade Commission (FTC) and its overseas counterparts would be interested in such deceptive and unfair practices.
  • Lawmakers Take Aim at Insidious Digital ‘Dark Patterns’” By Tom Simonite — WIRED. Language in the “California Privacy Rights Act” (CPRA) makes consent gained through the use of “dark patterns” (i.e., all those cognitive tricks online and real-life entities use to slant the playing field against consumers) invalid. However, lest one celebrate that policymakers are addressing these underhanded means of gaining consent or selling things, the yet-to-be-established California Privacy Protection Agency will need to define what dark patterns are and write the regulations barring them. In Washington state, the sponsors of the Washington Privacy Act (SB 5062) copied the CPRA language, setting up the possibility Washington state could follow California. It remains to be seen how, or even if, federal privacy legislation proposals will deal with dark patterns. They may well, considering that Senators Mark Warner (D-VA) and Deb Fischer (R-NE) introduced the “Deceptive Experiences To Online Users Reduction (DETOUR) Act” (S.1084) in 2019. Moreover, as with the previous article, one might think the Federal Trade Commission (FTC) and its overseas counterparts would be interested in policing dark patterns.
  • A PR screwup draws unwanted attention to Google’s Saudi data centers” By Issie Lapowsky — Protocol. The best case scenario is that Google and Snap misstated what cloud infrastructure and content are in the Kingdom of Saudi Arabia. And in this case, privacy and civil liberties groups are unfairly pouncing on the companies over essentially garbling the truth. On the other hand, it may turn out that the companies are routing traffic and content through the repressive regime, allowing a government with an abysmal human rights record to access the data of people. Time may tell what is actually happening, but the two companies are furiously telling the world that there’s nothing to see here.
  • China’s Leader Attacks His Greatest Threat” By John Pomfret — The Atlantic. Xi Jinping, President of the People’s Republic of China (PRC) and Chairman of the Chinese Communist Party (CCP), has accelerated a crackdown on entrepreneurs and technology companies started under his predecessors. This would ultimately impair the PRC’s ambitions of becoming the world’s dominant power through technological superiority.
  • Why Is Big Tech Policing Speech? Because the Government Isn’t” By Emily Bazelon — The New York Times. The First Amendment to the United States (U.S.) Constitution is invariably cited in the online speech debate as a reason why people cannot be silenced and as to why social media platforms can silence whom they like. This is an interesting survey of this right in the U.S. and how democracies in Europe have a different understanding of permissible speech.

Other Developments

  • In a recent press conference, White House Press Secretary Jen Psaki shed light on how the Biden Administration will change United States (U.S.) policy towards the People’s Republic of China (PRC). In response to a question about how the U.S. government will deal with TikTok and the PRC generally, Psaki stated:
    • I think our approach to China remains what it has been since — for the last months, if not longer.  We’re in a serious competition with China.  Strategic competition with China is a defining feature of the 21st century.  China is engaged in conduct that hurts American workers, blunts our technological edge, and threatens our alliances and our influence in international organizations.  
    • What we’ve seen over the last few years is that China is growing more authoritarian at home and more assertive abroad.  And Beijing is now challenging our security, prosperity, and values in significant ways that require a new U.S. approach. 
    • And this is one of the reasons, as we were talking about a little bit earlier, that we want to approach this with some strategic patience, and we want to conduct reviews internally, through our interagency….We wanted to engage more with Republicans and Democrats in Congress to discuss the path forward.  And most importantly, we want to discuss this with our allies. 
    • We believe that this moment requires a strategic and a new approach forward.
    • [T]echnology, as I just noted, is, of course, at the center of the U.S.-China competition.  China has been willing to do whatever it takes to gain a technological advantage — stealing intellectual property, engaging in industrial espionage, and forcing technology transfer.
    • Our view — the President’s view is we need to play a better defense, which must include holding China accountable for its unfair and illegal practices and making sure that American technologies aren’t facilitating China’s military buildup.
    • So he’s firmly committed to making sure that Chinese companies cannot misappropriate and misuse American data.  And we need a comprehensive strategy, as I’ve said, and a more systematic approach that actually addresses the full range of these issues.
    • So there is, again, an ongoing review of a range of these issues.  We want to look at them carefully, and we’ll be committed to approaching them through the lens of ensuring we’re protecting U.S. data and America’s technological edge. 
  • The top Republican on the House Foreign Affairs Committee is calling on Senate Republicans to block Governor Gina Raimondo’s nomination to be the Secretary of Commerce until the White House indicates whether it will keep Huawei on a list of entities to which the United States (U.S.) restricts exports. Ranking Member Michael McCaul (R-TX) asserted:
    • It is incredibly alarming the Biden Administration has refused to commit to keeping Huawei on the Department of Commerce’s Entity List. Huawei is not a normal telecommunications company – it is a CCP military company that threatens 5G security in our country, steals U.S. intellectual property, and supports the Chinese Communist Party’s genocide in Xinjiang and their human rights abuses across the country. We need a Commerce Department with strong national security credentials and a Secretary with a clear understanding of the CCP threat. Saying people should not use Huawei and actually keeping them on the Entity List are two very different things that result in very different outcomes. I again strongly urge the Biden Administration to reconsider this dangerous position. Until they make their intentions clear on whether they will keep Huawei on the Entity List, I urge my Senate colleagues to hold Ms. Raimondo’s confirmation.
    • McCaul added this background:
      • After the Biden Administration’s nominee for Commerce Secretary, Gina Raimondo, caused heads to turn by refusing to commit to keeping Huawei on the Entity List, White House Press Secretary Jen Psaki seemed to double down by declining on two separate occasions when directly asked to say where President Biden stood on the issue.
      • Huawei was placed on the Commerce Department’s Entity List in August of 2019. Their addition to the Entity List was also one of the recommendations of the [House Republican’s] China Task Force Report.
  • The National Highway Traffic Safety Administration (NHTSA), an agency of the United States (U.S.) Department of Transportation (DOT), is asking for comment “on the Agency’s updated draft cybersecurity best practices document titled Cybersecurity Best Practices for the Safety of Modern Vehicles” according to the notice published in the Federal Register. Comments are due by 15 March 2021. NHTSA explained:
    • In October 2016, NHTSA issued its first best practices document focusing on the cybersecurity of motor vehicles and motor vehicle equipment. Cybersecurity Best Practices for Modern Vehicles (“2016 Best Practices”) was the culmination of years of extensive engagement with public and private stakeholders and NHTSA research on vehicle cybersecurity and methods of enhancing vehicle cybersecurity industry-wide. As explained in the accompanying Federal Register document, NHTSA’s 2016 Best Practices was released with the goal of supporting industry-led efforts to improve the industry’s cybersecurity posture and provide the Agency’s views on how the automotive industry could develop and apply sound risk-based cybersecurity management processes during the vehicle’s entire lifecycle.
    • The 2016 Best Practices leveraged existing automotive domain research as well as non-automotive and IT-focused standards such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework and the Center for Internet Security’s Critical Security Controls framework. NHTSA considered these sources to be reasonably applicable and appropriate to augment the limited industry-specific guidance that was available at the time. At publication, NHTSA noted that the 2016 Best Practices were intended to be updated with new information, research, and other cybersecurity best practices related to the automotive industry. NHTSA invited comments from stakeholders and interested parties in response to the document.
    • NHTSA is docketing a draft update to the agency’s 2016 Best Practices, titled Cybersecurity Best Practices for the Safety of Modern Vehicles (2020 Best Practices) for public comments. This update builds upon agency research and industry progress since 2016, including emerging voluntary industry standards, such as the ISO/SAE Draft International Standard (DIS) 21434, “Road Vehicles—Cybersecurity Engineering.” In addition, the draft update references a series of industry best practice guides developed by the Auto-ISAC through its members.
    • The 2020 Best Practices also reflect findings from NHTSA’s continued research in motor vehicle cybersecurity, including over-the-air updates, encryption methods, and building our capability in cybersecurity penetration testing and diagnostics, and the new learnings obtained through researcher and stakeholder engagement. Finally, the updates included in the 2020 Best Practices incorporate insights gained from public comments received in response to the 2016 guidance and from information obtained during the annual SAE/NHTSA Vehicle Cybersecurity Workshops.
  • Ireland’s Data Protection Commission (DPC) has released its Fundamentals for a Child-Oriented Approach to Data Processing Draft Version for Consultation (Fundamentals) for comment until 31 March 2021. The DPC asserted that the
    • Fundamentals have been drawn up by the Data Protection Commission (DPC) to drive improvements in standards of data processing. They introduce child-specific data protection interpretative principles and recommended measures that will enhance the level of protection afforded to children against the data processing risks posed to them by their use of/ access to services in both an online and offline world. In tandem, the Fundamentals will assist organisations that process children’s data by clarifying the principles, arising from the high-level obligations under the GDPR, to which the DPC expects such organisations to adhere.
    • The DPC identified “the following 14 Fundamentals that organisations should follow to enhance protections for children in the processing of their personal data”:
      • 1. FLOOR OF PROTECTION: Online service providers should provide a “floor” of protection for all users, unless they take a risk-based approach to verifying the age of their users so that the protections set out in these Fundamentals are applied to all processing of children’s data (Section 1.4 “Complying with the Fundamentals”).
      • 2. CLEAR-CUT CONSENT: When a child has given consent for their data to be processed, that consent must be freely given, specific, informed and unambiguous, made by way of a clear statement or affirmative action (Section 2.4 “Legal bases for processing children’s data”).
      • 3. ZERO INTERFERENCE: Online service providers processing children’s data should ensure that the pursuit of legitimate interests do not interfere with, conflict with or negatively impact, at any level, the best interests of the child (Section 2.4 “Legal bases for processing children’s data”).
      • 4. KNOW YOUR AUDIENCE: Online service providers should take steps to identify their users and ensure that services directed at/ intended for or likely to be accessed by children have child-specific data protection measures in place (Section 3.1 “Knowing your audience”)
      • 5. INFORMATION IN EVERY INSTANCE: Children are entitled to receive information about the processing of their own personal data irrespective of the legal basis relied on and even if consent was given by a parent on their behalf to the processing of their personal data (Section 3 “Transparency and children”).
      • 6. CHILD-ORIENTED TRANSPARENCY: Privacy information about how personal data is used must be provided in a concise, transparent, intelligible and accessible way, using clear and plain language that is comprehensible and suited to the age of the child (Section 3 “Transparency and children”).
      • 7. LET CHILDREN HAVE THEIR SAY: Online service providers shouldn’t forget that children are data subjects in their own right and have rights in relation to their personal data at any age. The DPC considers that a child may exercise these rights at any time, as long as they have the capacity to do so and it is in their best interests. (Section 4.1 “The position of children as rights holders”)
      • 8. CONSENT DOESN’T CHANGE CHILDHOOD: Consent obtained from children or from the guardians/ parents should not be used as a justification to treat children of all ages as if they were adults (Section 5.1 “Age of digital consent”).
      • 9. YOUR PLATFORM, YOUR RESPONSIBILITY: Companies who derive revenue from providing or selling services through digital and online technologies pose particular risks to the rights and freedoms of children. Where such a company uses age verification and/ or relies on parental consent for processing, the DPC will expect it to go the extra mile in proving that its measures around age verification and verification of parental consent are effective. (Section 5.2 “Verification of parental consent”)
      • 10. DON’T SHUT OUT CHILD USERS OR DOWNGRADE THEIR EXPERIENCE: If your service is directed at, intended for, or likely to be accessed by children, you can’t bypass your obligations simply by shutting them out or depriving them of a rich service experience. (Section 5.4 “Age verification and the child’s user experience”)
      • 11. MINIMUM USER AGES AREN’T AN EXCUSE: Theoretical user age thresholds for accessing services don’t displace the obligations of organisations to comply with the controller obligations under the GDPR and the standards and expectations set out in these Fundamentals where “underage” users are concerned. (Section 5.5 “Minimum user ages”)
      • 12. PROHIBITION ON PROFILING: Online service providers should not profile children and/ or carry out automated decision making in relation to children, or otherwise use their personal data, for marketing/advertising purposes due to their particular vulnerability and susceptibility to behavioural advertising, unless they can clearly demonstrate how and why it is in the best interests of the child to do so (Section 6.2 “Profiling and automated decision making”).
      • 13. DO A DPIA: Online service providers should undertake data protection impact assessments to minimise the data protection risks of their services, and in particular the specific risks to children which arise from the processing of their personal data. The principle of the best interests of the child must be a key criterion in any DPIA and must prevail over the commercial interests of an organisation in the event of a conflict between the two sets of interests (Section 7.1 “Data Protection Impact Assessments”).
      • 14. BAKE IT IN: Online service providers that routinely process children’s personal data should, by design and by default, have a consistently high level of data protection which is “baked in” across their services (Section 7.2 “Data Protection by Design and Default”)
  • The United Kingdom’s (UK) Competition and Markets Authority (CMA) “is now seeking evidence from academics and industry experts on the potential harms to competition and consumers caused by the deliberate or unintended misuse of algorithms…[and] is also looking for intelligence on specific issues with particular firms that the CMA could examine and consider for future action.” The CMA stated “[t]he research and feedback will inform the CMA’s future work in digital markets, including its programme on analysing algorithms and the operation of the new Digital Markets Unit (DMU), and the brand-new regulatory regime that the DMU will oversee.” The CMA added:
    • Algorithms can be used to personalise services in ways that are difficult to detect, leading to search results that can be manipulated to reduce choice or artificially change consumers’ perceptions. An example of this is misleading messages which suggest a product is in short supply.
    • Companies can also use algorithms to change the way they rank products on websites, preferencing their own products and excluding competitors. More complex algorithms could aid collusion between businesses without firms directly sharing information. This could lead to sustained higher prices for products and services.
    • The majority of algorithms used by private firms online are currently subject to little or no regulatory oversight and the research concludes that more monitoring and action is required by regulators, including the CMA. The CMA has already considered the impact of algorithms on competition and consumers in previous investigations, for example monitoring the pricing practices of online travel agents.
    • In the algorithms paper, the CMA explained:
      • The publication of this paper, and the accompanying call for information mark the launch of a new CMA programme of work on analysing algorithms, which aims to develop our knowledge and help us better identify and address harms. This paper reviews the potential harms to competition and consumers from the use of algorithms, focussing on those the CMA or other national competition or consumer authorities may be best placed to address.
      • We first describe direct harms to consumers, many of which involve personalisation. Personalisation can be harmful because it is difficult to detect either by consumers or others, targets vulnerable consumers or has unfair distributive effects. These harms often occur through the manipulation of consumer choices, without the awareness of the consumer.
      • The paper then explores how the use of algorithms can exclude competitors and so reduce competition (for example, a platform preferencing its own products). We outline the most recent developments in the algorithmic collusion literature; collusion appears an increasingly significant risk if the use of more complex pricing algorithms becomes widespread. We also describe how using ineffective algorithms to oversee platform activity fails to prevent harm.
      • Next, we summarise techniques that could be used to analyse algorithmic systems. Potentially problematic systems can be identified even without access to underlying algorithms and data. However, to understand fully how an algorithmic system works and whether consumer or competition law is being breached, regulators need appropriate methods to audit the system. We finally discuss the role of regulators. Regulators can help to set standards and facilitate better accountability of algorithmic systems, including support for the development of ethical approaches, guidelines, tools and principles. They can also use their information gathering powers to identify and remedy harms on either a case-by-case basis or as part of an ex-ante regime overseen by a regulator of technology firms, such as the proposed Digital Markets Unit (DMU) in the UK.
  • The National Institute of Standards and Technology (NIST) is making available for comment a draft of NIST Special Publication (SP) 800-47 Revision 1, Managing the Security of Information Exchanges, that “provides guidance on identifying information exchanges; risk-based considerations for protecting exchanged information before, during, and after the exchange; and example agreements for managing the protection of the exchanged information.” NIST is accepting comments through 12 March 2021. The agency stated:
    • Rather than focus on any particular type of technology-based connection or information access, this draft publication has been updated to define the scope of information exchange, describe the benefits of securely managing the information exchange, identify types of information exchanges, discuss potential security risks associated with information exchange, and detail a four-phase methodology to securely manage information exchange between systems and organizations. Organizations are expected to further tailor the guidance to meet specific organizational needs and requirements.
    • NIST is specifically interested in feedback on:
      • Whether the agreements addressed in the draft publication represent a comprehensive set of agreements needed to manage the security of information exchange.
      • Whether the matrix provided to determine what types of agreements are needed is helpful in determining appropriate agreement types.
      • Whether additional agreement types are needed, as well as examples of additional agreements.
      • Additional resources to help manage the security of information exchange.

Coming Events

  • On 3 February, the Senate Commerce, Science, and Transportation Committee will consider the nomination of Rhode Island Governor Gina Raimondo to be the Secretary of Commerce.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July 2021, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.


Image by John Howard from Pixabay

Further Reading, Other Developments, and Coming Events (1 February 2021)

Further Reading

  • Facebook and Apple Are Beefing Over the Future of the Internet” By Gilad Edelman — WIRED. The battle over coming changes to Apple’s iOS continues to escalate. Apple CEO Tim Cook said the changes, which will move iPhone users to an opt-in system for tracking across the internet, would help protect both privacy and democracy. The latter claim is a shot at Facebook and its role in the rise of extremist groups in the United States and elsewhere. Facebook CEO Mark Zuckerberg claimed the change is of a piece with Apple’s long-term interest in driving the app market from a free to a paid model, which would benefit the Cupertino giant through its 30% fees on all in-app purchases. Zuckerberg also reiterated Facebook’s argument that such a change will harm small businesses that will have a harder time advertising. Facebook is also making noise about suing Apple in the same way Epic Games has over allegedly anti-competitive app store practices. Experts expect Apple’s change will take as much as 10% off Facebook’s bottom line until it and other advertising players adjust their tactics. These will not be the last shots fired between the two tech giants.
  • Democratic Congress Prepares to Take On Big Tech” By Cecilia Kang — The New York Times. Senator Amy Klobuchar (D-MN) is vowing to introduce antitrust legislation this spring that could rein in big technology companies in the future. Klobuchar’s proposal will receive serious consideration because she now chairs the Senate Judiciary Committee’s subcommittee with jurisdiction over antitrust and competition policy. Klobuchar also plans to release a book this spring with her views on antitrust. Any proposal to reform antitrust law faces a steep uphill battle to 60 votes in the Senate.
  • Pressure builds on Biden, Democrats to revive net neutrality rules” By Tony Romm — The Washington Post. Until the Federal Communications Commission (FCC) has a third Democratic vote, pressure from the left will focus on whom the Biden Administration chooses to nominate. Once a Democratic majority is in place, the pressure to re-promulgate the Obama Administration’s net neutrality order will be substantial.
  • Why Google’s Internet-Beaming Balloons Ran Out of Air” By Aaron Mak — Slate. The reasons Alphabet pulled the plug on Loon, its attempt to provide internet service in areas without it, include the costs, the lack of revenue since areas without service tend to be poorer, the price barriers to people acquiring 4G devices, and resistance or indifference from governments and regulators.
  • A big hurdle for older Americans trying to get vaccinated: Using the internet” By Rebecca Heilweil — recode. Not surprisingly, the digital divide and basic digital literacy are barriers to the elderly, especially the poorer and minority segments of that demographic, in securing online appointments for COVID-19 vaccination.

Other Developments

  • A group of House and Senate Democrats has reintroduced the “Public Health Emergency Privacy Act,” a bill that follows legislation of the same title introduced last spring to address gaps in United States (U.S.) privacy law exposed by the prospect of widespread use of COVID-19 tracking apps. While adoption and usage of these apps have largely underperformed expectations, the gaps and issues have not gone away. Accordingly, Representatives Suzan DelBene (D-WA), Anna Eshoo (D-CA), and Jan Schakowsky (D-IL) and Senators Richard Blumenthal (D-CT) and Mark Warner (D-VA) have introduced the “Public Health Emergency Privacy Act” (S.81), but bill text has not been made available, so it is not possible at this point to determine how closely it matches last year’s bill, the “Public Health Emergency Privacy Act” (S.3749/H.R.6866) (see here for my analysis of last year’s bill). However, in a sign that the bills may be identical or very close in their wording, the summary provided in May 2020 and the one provided last week are exactly the same:
    • Ensure that data collected for public health is strictly limited for use in public health;
    • Explicitly prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to gate access to employment, finance, insurance, housing, or education opportunities;
    • Prevent the potential misuse of health data by government agencies with no role in public health;
    • Require meaningful data security and data integrity protections – including data minimization and accuracy – and mandate deletion by tech firms after the public health emergency;
    • Protect voting rights by prohibiting conditioning the right to vote based on a medical condition or use of contact tracing apps;
    • Require regular reports on the impact of digital collection tools on civil rights;
    • Give the public control over their participation in these efforts by mandating meaningful transparency and requiring opt-in consent; and
    • Provide for robust private and public enforcement, with rulemaking from an expert agency while recognizing the continuing role of states in legislation and enforcement.
  • The United States Department of Justice (DOJ) filed charges against a United States (U.S.) national for “conspiring with others in advance of the 2016 U.S. Presidential Election to use various social media platforms to disseminate misinformation designed to deprive individuals of their constitutional right to vote.” While the DOJ goes out of its way in its complaint not to mention which candidate in the presidential election the accused was working to elect, contemporaneous reporting on the individual made clear he supported Donald Trump and sought to depress the vote for former Secretary of State Hillary Clinton. In its press release, the DOJ asserted:
    • The complaint alleges that in 2016, Mackey established an audience on Twitter with approximately 58,000 followers. A February 2016 analysis by the MIT Media Lab ranked Mackey as the 107th most important influencer of the then-upcoming Election, ranking his account above outlets and individuals such as NBC News (#114), Stephen Colbert (#119) and Newt Gingrich (#141).
    • As alleged in the complaint, between September 2016 and November 2016, in the lead up to the Nov. 8, 2016, U.S. Presidential Election, Mackey conspired with others to use social media platforms, including Twitter, to disseminate fraudulent messages designed to encourage supporters of one of the presidential candidates (the “Candidate”) to “vote” via text message or social media, a legally invalid method of voting.
    • For example, on Nov. 1, 2016, Mackey allegedly tweeted an image that featured an African American woman standing in front of an “African Americans for [the Candidate]” sign.  The image included the following text: “Avoid the Line. Vote from Home. Text ‘[Candidate’s first name]’ to 59925[.] Vote for [the Candidate] and be a part of history.”  The fine print at the bottom of the image stated: “Must be 18 or older to vote. One vote per person. Must be a legal citizen of the United States. Voting by text not available in Guam, Puerto Rico, Alaska or Hawaii. Paid for by [Candidate] for President 2016.”
    • The tweet included the typed hashtags “#Go [Candidate]” and another slogan frequently used by the Candidate. On or about and before Election Day 2016, at least 4,900 unique telephone numbers texted “[Candidate’s first name]” or some derivative to the 59925 text number, which was used in multiple deceptive campaign images tweeted by the defendant and his co-conspirators.
  • Six European and two North American nations worked in coordinated fashion to take down a botnet. Europol announced that “[l]aw enforcement and judicial authorities worldwide have this week disrupted one of [the] most significant botnets of the past decade: EMOTET…[and] [i]nvestigators have now taken control of its infrastructure in an international coordinated action” per their press release. Europol added:
    • EMOTET has been one of the most professional and long lasting cybercrime services out there. First discovered as a banking Trojan in 2014, the malware evolved into the go-to solution for cybercriminals over the years. The EMOTET infrastructure essentially acted as a primary door opener for computer systems on a global scale. Once this unauthorised access was established, these were sold to other top-level criminal groups to deploy further illicit activities such [as] data theft and extortion through ransomware.
  • On 26 January, Senator Ed Markey (D-MA) “asked Facebook why it continues to recommend political groups to users despite committing to stopping the practice” at an October 2020 hearing. Markey pressed CEO Mark Zuckerberg to “explain the apparent discrepancy between its promises to stop recommending political groups and what it has delivered.” Markey added:
    • Unfortunately, it appears that Facebook has failed to keep commitments on this topic that you made to me, other members of Congress, and your users. You and other senior Facebook officials have committed, and reiterated your commitment, to stop your platform’s practice of recommending political groups. First, on October 28, 2020, you appeared before the U.S. Senate Committee on Commerce, Science, and Transportation and stated that Facebook had stopped recommending groups with political content and social issues. When I raised concerns about Facebook’s system of recommending groups, you stated, “Senator, we have taken the step of stopping recommendations in groups for all political content or social issue groups as a precaution for this.”
    • It does not appear, however, that Facebook has kept these commitments. According to The Markup, Facebook “continued to recommend political groups to its users throughout December[of 2020]” — well after you responded to my question at the Commerce Committee hearing.
    • On 27 January, Zuckerberg announced on an earnings call that the platform would stop recommending political and civic groups to users.
  •  The United States (U.S.) Department of Transportation’s National Highway Traffic Safety Administration “announced the expansion of the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) Initiative from a pilot to a full program” according to a press release. NHTSA announced the “new web pilot of the Department initiative to improve the safety and testing transparency of automated driving systems” in June 2020 that “aligns with the Department’s leadership on automated driving system vehicles, including AV 4.0:  Ensuring American Leadership in Automated Vehicle Technologies.”
  • The United Kingdom’s (UK) House of Lords amended the government’s Trade Bill, which would allow for a trade agreement with the United States (U.S.), in a way that would block the U.S. position of essentially exporting 47 U.S.C. 230 (Section 230) to the UK. The Lords agreed to this language:
    • (1) The United Kingdom may only become a signatory to an international trade agreement if the conditions in subsection (2) are satisfied.
    • (2) International trade agreements must be consistent with—
      • (a) other international treaties to which the United Kingdom is a party, and the domestic law of England and Wales (including any changes to the law after the trade agreement is signed), regarding the protection of children and other vulnerable user groups using the internet;
      • (b) the provisions on data protection for children, as set out in the age appropriate design code under section 123 of the Data Protection Act 2018 (age-appropriate design code) and other provisions of that Act which impact children; and
      • (c) online protections provided for children in the United Kingdom that the Secretary of State considers necessary.
    • However, the House of Commons disagreed with this change, arguing “it is not an effective means of ensuring the protection of children online.”
    • In a House of Lords briefing document, it is explained:
      • The bill introduces measures to support the UK in implementing an independent trade policy, having left the European Union. It would:
        • enable the UK to implement obligations arising from acceding to the international Agreement on Government Procurement in its own right;
        • enable the UK to implement in domestic law obligations arising under international trade agreements the UK signs with countries that had an existing international trade agreement with the EU;
        • formally establish a new Trade Remedies Authority;
        • enable HM Revenue and Customs (HMRC) to collect information on the number of exporters in the UK; and
        • enable data sharing between HMRC and other private and public sector bodies to fulfil public functions relating to trade.
  • According to their press release, “a coalition of education advocates petitioned the Federal Communications Commission (FCC) to close the remote learning gap for the estimated 15 to 16 million students who lack home internet access” through the E-rate program. This petition follows an Executive Order (EO) signed by President Joe Biden on the first day of his Administration, calling on the FCC to expand broadband connectivity for children across the United States to help them with schooling and studies.
    • In their petition, the groups argued:
      • In one of his first Executive Orders, President Biden stated: “The Federal Communications Commission is encouraged, consistent with applicable law, to increase connectivity options for students lacking reliable home broadband, so that they can continue to learn if their schools are operating remotely.”
      • Consistent with [Biden’s EO], the Commission can dramatically improve circumstances for these underserved students, and for schools all over the country that are struggling to educate all of their students, by taking the temporary, limited measures requested in this Petition.
      • As shown below, these actions are well within the Commission’s authority, and in fact all of the actions requested in this Petition could be taken by the Wireline Competition Bureau on delegated authority.
      • As noted above, the Petitioners ask that the Commission issue a declaratory ruling to clarify that, for the duration of the pandemic, the off-campus use of E-rate-supported services to enable remote learning constitutes an “educational purpose” and is therefore allowed under program rules.
      • The declaratory ruling will allow schools and libraries to extend E-rate-funded broadband networks and services outside of a school or library location during Funding Years 2020 and 2021, without losing E-rate funds they are otherwise eligible to receive. Importantly, this requested action would not require the collection of any additional Universal Service funds.
      • Given the severity of our current national emergency, the Petitioners ask that the Bureau release hundreds of millions of dollars—currently not designated for use but held in the E-rate program—to support remote learning. There is little justification for keeping E-rate funds in reserve when the country is facing such an enormous educational crisis.
      • The Commission should use the program’s existing discount methodologies, which take into account socioeconomic status and rural location, in calculating the amount of funding that applicants may receive.  Applicants will have the incentive to make cost-effective purchases because they will have to pay a share of the total cost of services.  
      • To facilitate the distribution of additional funding, Petitioners ask that the Commission direct the Universal Service Administrative Company (USAC) to establish a “remote learning application window” as soon as practicable for the specific purpose of allowing applicants to submit initial or revised requests for E-rate funding for off-campus services used for educational purposes during Funding Years 2020 and 2021.  
      • The Petitioners ask the Commission to waive all rules necessary to effectuate these actions for remote learning funding applications, including the competitive bidding, eligible services, and application rules, pursuant to section 1.3 of the Commission’s rules.
      • The Petitioners respectfully request expedited review of this petition, so that schools and libraries may take action to deploy solutions as soon as possible.
  • “A group of more than 70 organizations have sent a letter to Congress and the Biden/Harris administration warning against responding to the violence in the U.S. Capitol by renewing injudicious attacks on Section 230 of the Communications Decency Act” per their press release. They further urged “lawmakers to consider impacts on marginalized communities before making changes to Section 230, and call on lawmakers to take meaningful action to hold Big Tech companies accountable, including enforcement of existing anti-trust and civil rights law, and passing Federal data privacy legislation.” The signatories characterized themselves as “racial justice, LGBTQ+, Muslim, prison justice, sex worker, free expression, immigration, HIV advocacy, child protection, gender justice, digital rights, consumer, and global human rights organizations.” In terms of the substance of their argument, they asserted:
    • Gutting Section 230 would make it more difficult for web platforms to combat the type of dangerous rhetoric that led to the attack on the Capitol. And certain carve outs to the law could threaten human rights and silence movements for social and racial justice that are needed now more than ever. 
    • Section 230 is a foundational law for free expression and human rights when it comes to digital speech. It makes it possible for websites and online forums to host the opinions, photos, videos, memes, and creativity of ordinary people, rather than just content that is backed by corporations. 
    • The danger posed by uncareful changes to Section 230 is not theoretical. The last major change to the law, the passage of SESTA/FOSTA in 2018, put lives in danger. The impacts of this law were immediate and destructive, limiting the accounts of sex workers and making it more difficult to find and help those who were being trafficked online. This was widely seen as a disaster that made vulnerable communities less safe and led to widespread removal of speech online.
    • We share lawmakers’ concerns with the growing power of Big Tech companies and their unwillingness to address the harm their products are causing. Google and Facebook are just some of the many companies that compromise the privacy and safety of the public by harvesting our data for their own corporate gain, and allowing advertisers, racists and conspiracy theorists to use that data to target us. These surveillance-based business models are pervasive and an attack on human rights. But claims that Section 230 immunizes tech companies that break the law, or disincentivizes them from removing illegal or policy-violating content, are false. In fact, Amazon has invoked Section 230 to defend itself against a lawsuit over its decision to drop Parler from Amazon Web Services due to unchecked threats of violence on Parler’s platform. Additionally, because Section 230 protects platforms’ decisions to remove objectionable content, the law played a role in enabling the removal of Donald Trump from platforms, who could act without fear of excessive litigation.

Coming Events

  • On 3 February, the Senate Commerce, Science, and Transportation Committee will consider the nomination of Rhode Island Governor Gina Raimondo to be the Secretary of Commerce.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July 2021, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Nikolai Chernichenko on Unsplash

Further Reading, Other Developments, and Coming Events (26, 27, and 28 January 2021)

Further Reading

  • President Biden’s Tech To-Do List” By Shira Ovide — The New York Times. Another survey of the pressing tech issues President Joe Biden and his Administration will grapple with.
  • Trying to improve remote learning? A refugee camp offers some surprising lessons” By Javeria Salman — The Hechinger Report. An organization that is helping refugee children advises that digital literacy is the necessary first step in helping all children have positive online learning experiences (assuming of course they have devices and internet access). This means more than being adept with Instagram, TikTok, and Snapchat. They also suggest that children work on projects as opposed to busy work.
  • Silicon Valley Takes the Battlespace” By Jonathan Guyer — The American Prospect. A company funded, in part, by former Google CEO Eric Schmidt, Rebellion Defense, landed two members on then President-elect Joe Biden’s official transition team, causing some to wonder about the group. This startup writes artificial intelligence (AI) software with defense industry applications, among other products. Schmidt chairs the National Security Commission on Artificial Intelligence and is widely seen as a bridge between Washington and Silicon Valley. Some see the rise of this company as the classic inside-the-Beltway tale of blurred interests and capitalizing on connections and know-how.
  • The fight to make Netflix and Hulu pay cable fees” By Adi Robertson — The Verge. Municipalities are suing platforms like Netflix, Hulu, Dish Network, DirecTV, and others, claiming they are not paying the franchise fees and quarterly fees that traditional cable companies have been subject to for using localities’ rights of way to deliver service. The companies are, of course, arguing they are not subject to these laws because they are not cable companies. A host of such suits have been filed throughout the United States (U.S.), and they bear watching.
  • Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.” By Elizabeth Dwoskin — The Washington Post. Sounds like Twitter is going the route of Wikipedia with a pilot in which volunteers would fact check and provide context to problematic content. Perhaps this helps address the problems posed by social media platforms.
  • Biden’s clean up of Silicon Valley poses a problem for Scott Morrison” By Harley Dennett — The Canberra Times. The concern down under is that the Biden Administration will press the Morrison government into weakening the “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020” that “establishes a mandatory code of conduct to help support the sustainability of the Australian news media sector by addressing bargaining power imbalances between digital platforms and Australian news businesses” according to the Explanatory Memorandum. Doing so would please Google, Facebook, and others, supposedly making them more amenable to the coming policy changes Democrats want to unleash on tech companies. It remains to be seen what the Biden Administration would get in return.
  • China turbocharges bid to discredit Western vaccines, spread virus conspiracy theories” By Gerry Shih — The Washington Post. In light of more effective vaccines developed by United States (U.S.) companies, and with a World Health Organization (WHO) team investigating in Wuhan, the People’s Republic of China (PRC) has kicked its propaganda campaign into high gear. All sorts of unsubstantiated claims are being made about the safety and effectiveness of the U.S. vaccines and the source of COVID-19 (allegedly the U.S.).
  • A Chinese hacking group is stealing airline passenger details” By Catalin Cimpanu — ZDNet. Hackers associated with the People’s Republic of China (PRC) apparently broke into one of the companies that generate Passenger Name Records (PNR), which detail who flies where and when. There are many uses for these data, including identifying likely foreign intelligence operatives such as Central Intelligence Agency (CIA) agents stationed abroad.
  • Biden Has a Peloton Bike. That Raises Issues at the White House.” By Sheryl Gay Stolberg — The New York Times. This is the level of coverage of the new President. His predecessor used an insecure iPhone that other nations’ intelligence agencies were likely tapping and was famously careless with classified information. And yet, President Joe Biden’s Peloton worries cybersecurity experts. Buried inside the story is the observation that, in the Digital Age, every President presents cybersecurity challenges, and tailored solutions are found.
  • Ministry of Electronics asks Whatsapp to withdraw changes to privacy policy, disclose data sharing practice” By Bismah Malik — The New Indian Express. India’s Ministry of Electronics and Information Technology (MeitY) is asking WhatsApp to scrap plans to roll out an already delayed change to its privacy policies. India is the company’s largest market and has already flexed its muscle against other foreign apps it claimed posed dangers to its people, like TikTok. WhatsApp would likely be blocked under a proposed Indian law from moving ahead with its plan to make data people share with WhatsApp business accounts available to Facebook and for advertising. The Data Protection Bill is expected to pass Parliament this year.
  • WhatsApp Fueled A Global Misinformation Crisis. Now, It’s Stuck In One.” By Pranav Dixit — BuzzFeed News. A nice overview of how WhatsApp and Facebook’s missteps and limited credibility with people resulted in a widely believed misrepresentation about the changes to WhatsApp’s Terms of Service announced earlier this year.
  • Amazon, Facebook, other tech giants spent roughly $65 million to lobby Washington last year” By Tony Romm — The Washington Post. While Amazon and Facebook increased their federal lobbying, Google cut back. It bears note these totals are only for the lobbying these entities are doing directly to the federal government and do not include what they spend on firms and lobbyists in Washington (which is plenty) or their contributions to organizations like the Information Technology Industry Council or the Center for Democracy and Technology (which, again, is a lot). Let’s also not forget political contributions or fundraising by the leadership and senior employees of these companies and political action committees (PAC). Finally, these totals exclude funds spent in state capitals, and I expect tech companies dropped a ton of cash in places like Sacramento and Olympia last year as major privacy legislation was under consideration. Moreover, this article does not account for whatever the companies are spending in Brussels and other capitals around the world.
  • Google won’t donate to members of Congress who voted against election results” By Ashley Gold — Axios. Speaking of using money to influence the political process, Google has joined other tech companies in pausing donations to Members who voted against certifying President Joe Biden’s victory in the Electoral College (i.e., Senators Ted Cruz (R-TX) and Josh Hawley (R-MO), to name two). We’ll see how long this lasts.
  • FCC’S acting chair says agency reviewing reports of U.S. East Coast internet outages” By Staff — Reuters; “Big Internet outages hit the East Coast, causing issues for Verizon, Zoom, Slack, Gmail” By Rachel Lerman — The Washington Post. On 26 January, there were widespread internet outages on the east coast of the United States (U.S.) that the Federal Communications Commission (FCC) is vowing to investigate. Acting FCC Chair Jessica Rosenworcel tweeted:
    • We have seen reports of internet-related outages on the East Coast, making it difficult for people to work remotely and go to school online. The @FCC Public Safety and Homeland Security Bureau is working to get to the bottom of what is going on.
    • It is not clear where and why the roughly hour-long outage occurred, but early fingers are being pointed at Verizon FIOS.
  • Police Say They Can Use Facial Recognition, Despite Bans” By Alfred Ng — The Markup. No one should be surprised that many police departments are reading bans on using facial recognition technology as narrowly as possible. Nevertheless, legislators and advocates are fighting over the interpretations of these recently passed statutes, almost all of which have been put in place by municipalities. Jurisdictions in the United States may also soon choose to address the use of facial recognition technology by businesses.
  • Why Are Moscow and Beijing Happy to Host the U.S. Far-Right Online?” By Fergus Ryan — Foreign Policy. The enemy of my enemy is my friend, supposedly. Hence, extremist right-wingers, white supremacists, and others are making common cause with the companies of the People’s Republic of China and the Russian Federation by moving their websites and materials to those jurisdictions after getting banned by western companies. Given how closely Beijing and Moscow monitor their nations’ internet, this is surely done with the tacit permission of those governments and quite possibly to the same end as their disinformation campaigns: to disrupt the United States and neutralize it as a rival.
  • After Huawei, Europe’s telcos want ‘open’ 5G networks” By Laurens Cerulus — Politico EU. Europe’s major telecommunications companies, Deutsche Telekom, Telefónica, Vodafone, and Orange, have banded together to support and buy Open RAN technology to roll out 5G instead of buying from Ericsson or Nokia, which promise to do it all. Open RAN would allow smaller companies to build interchangeable pieces of 5G networks since everyone is working from the same standards. Huawei, of course, has been shut out of many European nations and sees the development as more evidence that western nations are ganging up on it.

Other Developments

  • White House Press Secretary Jen Psaki confirmed that President Joe Biden has directed the United States Intelligence Community (IC) to investigate and report to him on the SolarWinds breach perpetrated by the Russian Federation’s foreign intelligence service, the Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR). Thus far, it appears that many United States (U.S.) agencies and private sector entities were quietly breached in early 2020 and then surveilled for months until FireEye, a private sector cybersecurity company, divulged it had been breached. Given former President Donald Trump’s aversion to acknowledging the malicious acts of Russia, it seemed likely the Biden Administration would have to start the U.S. response. Interestingly, the Biden Administration is extending the New START nuclear arms control treaty at the same time it undertakes this assessment of Russian hacking. And, whatever the results of the assessment, experts agree the Biden Administration has few good options to retaliate and deter future action.
    • At a 21 January press briefing, Psaki stated
      • I can confirm that the United States intends to seek a five-year extension of New START, as the treaty permits.  The President has long been clear that the New START Treaty is in the national security interests of the United States.  And this extension makes even more sense when the relationship with Russia is adversarial, as it is at this time.
      • New START is the only remaining treaty constraining Russian nuclear forces and is an anchor of strategic stability between our two countries.
      • And to the other part of your question: Even as we work with Russia to advance U.S. interests, so too we work to hold Russia to account for its reckless and adversarial actions.  And to this end, the President is also issuing a tasking to the intelligence community for its full assessment of the SolarWinds cyber breach, Russian interference in the 2020 election, its use of chemical weapons against opposition leader Alexei Navalny, and the alleged bounties on U.S. soldiers in Afghanistan.
  • A group of 40 organizations urged President Joe Biden “to avoid appointing to key antitrust enforcement positions individuals who have served as lawyers, lobbyists, or consultants for Amazon, Apple, Facebook, and Google” in a letter sent before his inauguration. Instead, they encouraged him “to appoint experienced litigators or public servants who have recognized the dangers of, rather than helped to exacerbate, these corporations’ market power.” They closed the letter with this paragraph:
    • With your historic election, and the groundbreaking mandate Americans have entrusted you with, you face the challenge of not only rebuilding the country, but also rebuilding trust in government. We believe that appointing antitrust enforcers with no ties to dominant corporations in the industries they will be tasked with overseeing – particularly in regard to the technology sector – will help re-establish public trust in government at a critically important moment in our country’s history. We look forward to working with your administration to ensure powerful technology corporations are held accountable for wrongdoing in the months and years ahead.
    • The signatories include:
      • Public Citizen
      • American Economic Liberties Project
      • Open Markets Institute
      • Revolving Door Project
  • The National Security Agency (NSA) issued an advisory, “Adopting Encrypted DNS in Enterprise Environments,” “explaining the benefits and risks of adopting the encrypted domain name system (DNS) protocol, DNS over HTTPs (DoH), in enterprise environments.” The advisory is entirely voluntary and does not bind any class of entities. Moreover, it is the latest in a series of public advisories through which the heretofore secretive NSA has sought to rival the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) in advising the owners and operators of cyber infrastructure. The NSA explained:
    • Use of the Internet relies on translating domain names (like “nsa.gov”) to Internet Protocol addresses. This is the job of the Domain Name System (DNS). In the past, DNS lookups were generally unencrypted, since they have to be handled by the network to direct traffic to the right locations. DNS over Hypertext Transfer Protocol over Transport Layer Security (HTTPS), often referred to as DNS over HTTPS (DoH), encrypts DNS requests by using HTTPS to provide privacy, integrity, and “last mile” source authentication with a client’s DNS resolver. It is useful to prevent eavesdropping and manipulation of DNS traffic. While DoH can help protect the privacy of DNS requests and the integrity of responses, enterprises that use DoH will lose some of the control needed to govern DNS usage within their networks unless they allow only their chosen DoH resolver to be used. Enterprise DNS controls can prevent numerous threat techniques used by cyber threat actors for initial access, command and control, and exfiltration.
    • Using DoH with external resolvers can be good for home or mobile users and networks that do not use DNS security controls. For enterprise networks, however, NSA recommends using only designated enterprise DNS resolvers in order to properly leverage essential enterprise cybersecurity defenses, facilitate access to local network resources, and protect internal network information. The enterprise DNS resolver may be either an enterprise-operated DNS server or an externally hosted service. Either way, the enterprise resolver should support encrypted DNS requests, such as DoH, for local privacy and integrity protections, but all other encrypted DNS resolvers should be disabled and blocked. However, if the enterprise DNS resolver does not support DoH, the enterprise DNS resolver should still be used and all encrypted DNS should be disabled and blocked until encrypted DNS capabilities can be fully integrated into the enterprise DNS infrastructure.
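    • To make the mechanics concrete: under RFC 8484, a DoH client builds an ordinary RFC 1035 DNS query and sends those exact bytes as the body of an HTTPS POST to the resolver’s endpoint with the application/dns-message content type. Below is a minimal sketch of the query construction only; the domain is illustrative, and the code is not drawn from the NSA advisory:

```python
import struct

def build_dns_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 DNS query message (qtype 1 = A record).

    Under RFC 8484 (DoH), these bytes would become the body of an HTTPS
    POST to a resolver endpoint (e.g. https://resolver.example/dns-query)
    with Content-Type: application/dns-message.
    """
    # Header: ID=0 (RFC 8484 recommends 0 so responses cache well),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    # QTYPE (A = 1) and QCLASS (IN = 1).
    question = qname + struct.pack("!HH", qtype, 1)
    return header + question

query = build_dns_query("nsa.gov")
```

An enterprise following the NSA’s guidance would point such a client only at its designated enterprise resolver and block every other encrypted DNS endpoint at the network boundary.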
  • The United States (U.S.) Government Accountability Office (GAO) has, on its own initiative, sent a report to the chair of the House Oversight Committee that “examines: (1) the Department of Defense’s (DOD) efforts to revise the process for identifying and protecting its critical technologies, and (2) opportunities for DOD’s revised process to inform U.S. government protection programs.” The GAO stated:
    • DOD’s critical technologies—including those associated with an acquisition program throughout its lifecycle or those still early in development—are DOD funded efforts that provide new or improved capabilities necessary to maintain the U.S. technological advantage. For the purposes of this report, we refer to these as critical acquisition programs and technologies. Also for the purposes of this report, U.S. government protection programs are those GAO previously identified across the federal government that are designed to protect critical technologies such as the Arms Export Control System, National Industrial Security Program, and the Committee on Foreign Investment in the U.S.
    • Critical technologies are pivotal to maintaining the U.S. military advantage and, as such, are a frequent target for unauthorized access by adversaries such as through theft, espionage, illegal export, and reverse engineering. DOD has long recognized the need to effectively identify and ensure the consistent protection of these technologies from adversaries, but past efforts have not been fully successful. Recent efforts to revise its process for identifying and protecting its critical acquisition programs and technologies—led by DOD’s Protecting Critical Technology Task Force— offer some improvements.
    • However, DOD can further strengthen its revised process by determining the approach for completing key steps. These steps include ensuring its critical acquisition programs and technologies list is formally communicated to all relevant internal entities and other federal agencies, such as the Department of the Treasury as chair of the Committee on Foreign Investment in the United States, to promote a consistent understanding of what DOD deems critical to protect. They also include developing appropriate metrics that DOD program offices as well as organizations—such as the military departments and Under Secretary of Defense level offices—can use to assess the implementation and sufficiency of the assigned protection measures. Finally, DOD has not yet designated an organization to oversee critical technology protection efforts beyond 2020. As DOD works to develop a policy for its revised process, addressing these issues will not only help improve and ensure continuity in DOD’s protection efforts, but also help ensure government-wide protection efforts are better coordinated as called for in the 2020 National Strategy for Critical and Emerging Technologies.
    • The GAO made three recommendations to the DOD:
      • The Secretary of Defense should direct the Deputy Secretary of Defense in conjunction with the Protecting Critical Technology Task Force to determine a process for formally communicating future critical acquisition programs and technologies lists to all relevant DOD organizations and federal agencies. (Recommendation 1)
      • The Secretary of Defense should direct the Deputy Secretary of Defense in conjunction with the Protecting Critical Technology Task Force to identify, develop, and periodically review appropriate metrics to assess the implementation and sufficiency of the assigned protection measures. (Recommendation 2)
      • The Secretary of Defense should direct the Deputy Secretary of Defense in conjunction with the Protecting Critical Technology Task Force to finalize the decision as to which DOD organization will oversee protection efforts beyond 2020. (Recommendation 3)
  • The National Telecommunications and Information Administration (NTIA) “under sponsorship of and in collaboration with the Department of Defense (DOD) 5G Initiative” “issued a Notice of Inquiry (NOI)…to explore a “5G Challenge” aiming to accelerate the development of an open source 5G ecosystem that can support DOD missions.” The NTIA explained:
    • A key innovation in 5G that is becoming more pervasive in the larger 5G ecosystem is the trend toward “open 5G” architectures that emphasize open interfaces in the network stack. NTIA, under sponsorship of and in collaboration with the DOD 5G Initiative, is seeking comments and recommendations from all interested stakeholders to explore the creation of a 5G Challenge that would accelerate the development of the open 5G stack ecosystem in support of DOD missions.
    • For the purposes of this Notice, NTIA has organized these questions into three broad categories: (1) Challenge structure and goals; (2) incentives and scope; and (3) timeframe and infrastructure support. NTIA seeks public input on any and/or all of these three categories.
  • The Court of Justice of the European Union’s (CJEU) Advocate General has released his opinion in a case on whether a data protection authority (DPA) other than the lead agency in a case may also bring actions in its court system. The General Data Protection Regulation (GDPR) organizes the regulation of data protection through a “one-stop-shop” mechanism: the DPA of the member state where a controller has its main establishment in the European Union (EU) becomes the lead supervisory authority (LSA) for cross-border processing, and the other DPAs must follow its lead. Most famously, Ireland’s Data Protection Commission (DPC) has been the LSA for the actions Maximillian Schrems brought against Facebook that led to the demise of two adequacy agreements between the United States (U.S.) and the EU. The CJEU is not obligated to follow the Advocate General’s opinions, but they frequently prove persuasive. In any event, the Advocate General found DPAs may, under some circumstances, bring cases for cross-border infringement even if another DPA is the LSA. Advocate General Michal Bobek summarized the facts of the case:
    • In September 2015, the Belgian data protection authority commenced proceedings before the Belgian courts against several companies belonging to the Facebook group (Facebook), namely Facebook INC, Facebook Ireland Ltd, which is the group’s main establishment in the EU, and Facebook Belgium BVBA (Facebook Belgium). In those proceedings, the data protection authority requested that Facebook be ordered to cease, with respect to any internet user established in Belgium, to place, without their consent, certain cookies on the device those individuals use when they browse a web page in the Facebook.com domain or when they end up on a third party’s website, as well as to collect data by means of social plugins and pixels on third party websites in an excessive manner. In addition, it requested the destruction of all personal data obtained by means of cookies and social plugins, about each internet user established in Belgium.
    • The proceedings at issue are at present in progress before the Hof van beroep te Brussel (Court of Appeal, Brussels, Belgium) with however their scope being limited to Facebook Belgium, as that court previously established that it had no jurisdiction with regard to the actions against Facebook INC and Facebook Ireland Ltd. In this context, Facebook Belgium asserts that, as of the date on which the General Data Protection Regulation (GDPR) has become applicable, the Belgian data protection authority has lost competence to continue the judicial proceedings at issue against Facebook. It contends that, under the GDPR, only the data protection authority of the State of Facebook’s main establishment in the EU (the so-called ‘lead’ data protection authority in the EU for Facebook), namely the Irish Data Protection Commission, is empowered to engage in judicial proceedings against Facebook for infringements of the GDPR in relation to cross-border data processing.
    • Bobek summed up the legal questions presented to the CJEU:
      • Does the GDPR permit a supervisory authority of a Member State to bring proceedings before a court of that State for an alleged infringement of that regulation with respect to cross-border data processing, where that authority is not the lead supervisory authority with regard to that processing?
      • Or does the new ‘one-stop-shop’ mechanism, heralded as one of the major innovations brought about by the GDPR, prevent such a situation from happening? If a controller were called upon to defend itself against a legal challenge concerning cross-border data processing brought by a supervisory authority in a court outside the place of the controller’s main establishment, would that be ‘one-stop-too-many’ and therefore incompatible with the new GDPR mechanism?
    • Bobek made the following findings:
      • [F]irst, that it transpires from the wording of the GDPR that the lead data protection authority has a general competence over cross-border data processing, including the commencement of judicial proceedings for the breach of the GDPR, and, by implication, the other data protection authorities concerned enjoy a more limited power to act in that regard.
      • Second, the Advocate General recalls that the very reason for the introduction of the one-stop-shop mechanism enshrined in the GDPR, whereby a significant role has been given to the lead data protection authority and cooperation mechanisms have been set up to involve other data protection authorities, was to address certain shortcomings resulting from the former legislation. Indeed, economic operators used to be required to comply with the various sets of national rules implementing that legislation, and to liaise, at the same time, with all the national data protection authorities, which proved to be costly, burdensome and time-consuming for those operators, and an inevitable source of uncertainty and conflicts for them and their customers.
      • Third, the Advocate General stresses that the lead data protection authority cannot be deemed as the sole enforcer of the GDPR in cross-border situations and must, in compliance with the relevant rules and time limits provided for by the GDPR, closely cooperate with the other data protection authorities concerned, the input of which is crucial in this area.
  • The United States (U.S.) Department of Defense added more companies from the People’s Republic of China (PRC) to the list of those associated with or controlled by the Chinese Communist Party or the People’s Liberation Army (PLA) “in accordance with the statutory requirement of Section 1237 of the National Defense Authorization Act for Fiscal Year 1999.” The previous lists were released last year (here, here, and here). This designation will almost certainly make doing business in the U.S. and elsewhere more difficult for these companies.
    • The first part of Section 1237 grants the President authority to “exercise International Emergency Economic Powers Act (IEEPA) authorities (other than authorities relating to importation) without regard to section 202 of the IEEPA (50 U.S.C. 1701) in the case of any commercial activity in the United States by a person that is on the list.” IEEPA grants the President sweeping powers to prohibit transactions and block property and property interests for nations and other groups subject to an IEEPA national emergency declaration. Consequently, those companies identified by the DOD on a list per Section 1237 could be blocked and prohibited from doing business with U.S. entities and others and those that do business with such Chinese companies could be subject to enforcement actions by the U.S. government.
    • The statute defines a “Communist Chinese military company” as “any person identified in the Defense Intelligence Agency publication numbered VP-1920-271-90, dated September 1990, or PC-1921-57-95, dated October 1995, and any update of those publications for the purposes of this section; and any other person that is owned or controlled by the People’s Liberation Army; and is engaged in providing commercial services, manufacturing, producing, or exporting.” Considering that the terms “owned” and “controlled” are not spelled out in this section, the executive branch may have very wide latitude in deeming a non-Chinese company as owned or controlled and therefore subject to the President’s use of IEEPA powers. Moreover, since the President already has the authority to declare an emergency and then use IEEPA powers, this language would seem to allow the President to bypass any such declaration and immediately use such powers, except those regarding importation, against any Chinese entities identified on this list by the Pentagon.
  • A group of 13 House Democrats wrote Attorney General designate Merrick Garland asking the Biden Administration “to withdraw from the United States (U.S.) federal government’s lawsuit against the State of California over its net neutrality law as one of the first actions after inauguration.” The Trump Administration had sued California after a measure became law in 2018 mandating net neutrality there in the wake of the Federal Communications Commission’s (FCC) rollback of federal net neutrality rules. The Members argued:
    • In September 2018, then-Governor Jerry Brown signed into law SB 822, the strongest net neutrality law in the country. The Trump Department of Justice (DOJ) sued to overturn California’s law hours later, and associations of telecommunications providers sued within days. The parties agreed to put the case on hold until Mozilla v. FCC was resolved. In that case, the Court of Appeals for the D.C. Circuit vacated the part of the Federal Communications Commission (FCC)’s 2018 Restoring Internet Freedom (RIF) Order that preempted state net neutrality laws.
    • The arguments of the Trump DOJ and telecommunications associations in U.S. v. California extend further than even the FCC’s RIF and have implications on the ability of California and other states to regulate many communications and technology policy issues.
    • The Eastern District of California has scheduled a hearing in U.S. v. California for a request for an injunction on January 26, 2021. It is for these reasons, we ask that the federal DOJ withdraw from U.S. v. California shortly after President-elect Biden is inaugurated.
  • On its first day in power, the Biden Administration issued its “National Strategy for the COVID-19 Response and Pandemic Preparedness.” In the cover letter, President Joe Biden stated:
    • For the past year, we could not turn to the federal government for a national plan to answer prayers with action — until today. In the following pages, you will find my Administration’s national strategy to beat the COVID-19 pandemic. It is a comprehensive plan that starts with restoring public trust and mounting an aggressive, safe, and effective vaccination campaign. It continues with the steps we know that stop the spread like expanded masking, testing, and social distancing. It’s a plan where the federal government works with states, cities, Tribal communities, and private industry to increase supply and administer testing and the vaccines that will help reopen schools and businesses safely. Equity will also be central to our strategy so that the communities and people being disproportionately infected and killed by the pandemic receive the care they need and deserve.
    • Given the numerous cyber-attacks and intrusions throughout the pandemic and growing risks to the entire vaccine supply chain, the President asked the Director of National Intelligence Avril Haines to “lead an assessment of ongoing cyber threats and foreign interference campaigns targeting COVID-19 vaccines and related public health efforts” in order to “counter any threat to the vaccination program.” The Administration stated “[t]he U.S. Government will take steps to address cyber threats to the fight against COVID-19, including cyber attacks on COVID-19 research, vaccination efforts, the health care systems and the public health infrastructure.”
    • Specifically, the strategy requires the following:
      • To assist in the Federal Government’s efforts to provide warning of pandemics, protect our biotechnology infrastructure from cyber attacks and intellectual property theft, identify and monitor biological threats from states and non-state actors, provide validation of foreign data and response efforts, and assess strategic challenges and opportunities from emerging biotechnologies, the Director of National Intelligence shall:
        • (i) Review the collection and reporting capabilities in the United States Intelligence Community (IC) related to pandemics and the full range of high-consequence biological threats and develop a plan for how the IC may strengthen and prioritize such capabilities, including through organizational changes or the creation of National Intelligence Manager and National Intelligence Officer positions focused on biological threats, global public health, and biotechnology;
        • (ii) Develop and submit to the President, through the Assistant to the President for National Security Affairs (APNSA) and the COVID-19 Response Coordinator, a National Intelligence Estimate on
          • (A) the impact of COVID-19 on national and economic security; and
          • (B) current, emerging, reemerging, potential, and future biological risks to national and economic security; and
        • (iii)  In coordination with the Secretary of State, the Secretary of Defense, the Secretary of Health and Human Services (HHS), the Director of the Centers for Disease Control and Prevention (CDC), the Administrator of United States Agency for International Development (USAID), the Director of the Office of Science and Technology Policy, and the heads of other relevant agencies, promptly develop and submit to the APNSA an analysis of the security implications of biological threats that can be incorporated into modeling, simulation, course of action analysis, and other analyses.
  • Before the end of the Trump Administration, the Departments of State and Treasury imposed sanctions on a group of Russians for taking part in “a Russia-linked foreign influence network associated with Andrii Derkach, who was designated on September 10, 2020, pursuant to Executive Order (E.O.) 13848 for his attempt to influence the 2020 U.S. Presidential election” according to the Trump Administration Department of State press release. These sanctions emanate from a narrative pushed by Derkach, a likely Russian agent, that the Biden family were engaged in corrupt dealings in Ukraine. Allies of the Trump Campaign pushed this narrative, too, until it failed to gain traction in the public sphere. It is little wonder the last administration waited until the tail end of the Trump presidency to levy such sanctions. State went on to explain:
    • Former Ukraine Government officials Konstantin Kulyk, Oleksandr Onyshchenko, Andriy Telizhenko, and current member of the Ukrainian parliament Oleksandr Dubinsky, have publicly appeared with or affiliated themselves with Derkach through the coordinated dissemination and promotion of fraudulent or unsubstantiated allegations involving a U.S. political candidate.  They have made repeated public statements advancing malicious narratives that U.S. Government officials have engaged in corrupt dealings in Ukraine.  These efforts and narratives are consistent with or in support of Derkach’s objectives to influence the 2020 U.S. presidential election.  As such, these individuals have been designated pursuant to E.O. 13848 for having directly or indirectly engaged in, sponsored, concealed, or otherwise been complicit in foreign influence in an attempt to undermine the 2020 U.S. elections.
    • NabuLeaks, Era-Media, Only News, and Skeptik TOV are media front companies in Ukraine that disseminate false narratives at the behest of Derkach and his associates.  They are being designated pursuant to E.O. 13848 for being owned or controlled by Derkach or his media team.  Today’s action also includes the designation of Petro Zhuravel, Dmytro Kovalchuk, and Anton Simonenko for having materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services to or in support of, Derkach.
    • Additionally, the Department of the Treasury’s Office of Foreign Assets Control (OFAC) “took additional action against seven individuals and four entities that are part of a Russia-linked foreign influence network associated with Andrii Derkach” according to the agency’s press release. OFAC stated “[a]s a result of today’s designations, all property and interests in property of these targets that are subject to U.S. jurisdiction are blocked, and U.S. persons are generally prohibited from engaging in transactions with them. Additionally, any entities 50 percent or more owned by one or more designated persons are also blocked.”
  • The United States (U.S.) Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) published “a draft of the Trusted Internet Connections (TIC) 3.0 Remote User Use Case and the draft National Cybersecurity Protection System (NCPS) Cloud Interface Reference Architecture (NCIRA): Volume 2.” The agency remarked in its press release:
    • The TIC initiative was launched under former President George W. Bush to limit federal agencies’ access points to the wider internet, based on the logic of physical defense: fewer entry and exit points made for a safer compound. Over time, however, this proved problematic, especially as new technology came into use. Consequently, in a 2019 Office of Management and Budget (OMB) memorandum updating the TIC policy, the Trump Administration began a revamp from which these documents flow:
      • To continue to promote a consistent baseline of security capabilities, the Department of Homeland Security (DHS) will define TIC initiative requirements in documentation called TIC Use Cases (refer to Appendix A). TIC Use Case documentation will outline which alternative security controls, such as endpoint and user-based protections, must be in place for specific scenarios in which traffic may not be required to flow through a physical TIC access point. To promote flexibility while maintaining a focus on security outcomes, the capabilities used to meet TIC Use Case requirements may be separate from an agency’s existing network boundary solutions provided by a Trusted Internet Connection Access Provider (TICAP) or Managed Trusted Internet Protocol Services (MTIPS). Given the diversity of platforms and implementations across the Federal Government, TIC Use Cases will highlight proven, secure scenarios, where agencies have met requirements for government-wide intrusion detection and prevention efforts, such as the National Cybersecurity Protection System (including the EINSTEIN suite), without being required to route traffic through a TICAP/MTIPS solution.
    • In the Remote User Use Case, it is explained that
      • The TIC 3.0 Remote User Use Case (Remote User Use Case) defines how network and multi-boundary security should be applied when an agency permits remote users on their network. A remote user is an agency user that performs sanctioned business functions outside of a physical agency premises. The remote user scenario has two distinguishing characteristics:
        • 1. Remote user devices are not directly connected to network infrastructure that is managed and maintained by the agency.
        • 2. Remote user devices are intended for individual use (i.e., not a server).
      • In contrast, when remote user devices are directly connected to local area networks and other devices that are managed and maintained by the agency, it would be considered either an agency campus or a branch office scenario. TIC architectures for agency campus and branch office scenarios are enumerated in the TIC 3.0 Traditional TIC Use Case and the TIC 3.0 Branch Office Use Case respectively.
    • In NCIRA, it is stated:
      • The NCPS Cloud Interface Reference Architecture is being released as two individual volumes. The first volume provides an overview of changes to NCPS to accommodate the collection of relevant data from agencies’ cloud environments and provides general reporting patterns for sending cloud telemetry to CISA. This second volume builds upon the concepts presented in NCPS Cloud Interface Reference Architecture: Volume One and provides an index of common cloud telemetry reporting patterns and characteristics for how agencies can send cloud-specific data to the NCPS cloud-based architecture. Individual cloud service providers (CSPs) can refer to the reporting patterns in this volume to offer guidance on their solutions that allow agencies to send cloud telemetry to CISA in fulfillment of NCPS requirements.
  • The Congressional-Executive Commission on China (CECC) published its “2020 Annual Report” “on human rights and the rule of law in China.” The CECC found that:
    • the Chinese government and Communist Party have taken unprecedented steps to extend their repressive policies through censorship, intimidation, and the detention of people in China for exercising their fundamental human rights. Nowhere is this more evident than in the Xinjiang Uyghur Autonomous Region (XUAR) where new evidence emerged that crimes against humanity—and possibly genocide—are occurring, and in Hong Kong, where the ‘‘one country, two systems’’ framework has been effectively dismantled.
    • These policies are in direct violation of China’s Constitution, which guarantees ‘‘freedom of speech, of the press, of assembly, of association, of procession and of demonstration,’’ as well as ‘‘freedom of religious belief.’’ The actions of the Chinese government also contravene both the letter and the spirit of the Universal Declaration of Human Rights; violate its obligations under the International Covenant on Civil and Political Rights, which the Chinese government has signed but not ratified; and violate the International Covenant on Economic, Social, and Cultural Rights, ratified in 2001. Further, the Chinese government has abandoned any pretense of adhering to the legally binding commitments it made to the international community when it signed the 1984 Sino-British Joint Declaration on the future of Hong Kong.
    • President and Party General Secretary Xi Jinping has tightened his grip over China’s one-party authoritarian system, and the Party has further absorbed key government functions while also enhancing its control over universities and businesses. Authorities promoted the official ideology of ‘‘Xi Jinping Thought’’ on social media and required Party members, government officials, journalists, and students to study it, making the ideology both pervasive, and for much of the country, mandatory.
    • Regarding freedom of expression, the CECC recommended:
      • Give greater public expression, including at the highest levels of the U.S. Government, to the issue of press freedom in China, condemning: the harassment and detention of both domestic and foreign journalists; the denial, threat of denial, or delay of visas for foreign journalists; and the censorship of foreign media websites. Consistently link press freedom to U.S. interests, noting that censorship and restrictions on journalists and media websites prevent the free flow of information on issues of public concern, including public health and environmental crises, food safety problems, and corruption, and act as trade barriers for foreign companies attempting to access the Chinese market. Assess the extent to which China’s treatment of foreign journalists contravenes its World Trade Organization commitments and other obligations.
      • Sustain, and where appropriate, expand, programs that develop and widely distribute technologies that will assist Chinese human rights advocates and civil society organizations in circumventing internet restrictions, in order to access and share content protected under international human rights standards. Continue to maintain internet freedom programs for China at the U.S. Department of State and the United States Agency for Global Media to provide digital security training and capacity-building efforts for bloggers, journalists, civil society organizations, and human rights and internet freedom advocates in China.
      • Raise with Chinese officials, during all appropriate bilateral discussions, the cost to U.S.-China relations and to the Chinese public’s confidence in government institutions that is incurred when the Chinese government restricts political debate, advocacy for democracy or human rights, and other forms of peaceful political expression. Emphasize that such restrictions violate international standards for free expression, particularly those contained in Article 19 of the International Covenant on Civil and Political Rights and Article 19 of the Universal Declaration of Human Rights.
  • The Center for Democracy and Technology (CDT) issued its “Recommendations to the Biden Administration and 117th Congress to Advance Civil Rights & Civil Liberties in the Digital Age” that called for reform to content moderation, election law, privacy, big data, and other policy areas.
  • A United States (U.S.) federal court denied Parler’s request for a preliminary injunction against Amazon Web Services (AWS) after AWS shut down Parler’s website for repeated violations of their contract, including the use of the conservative-leaning platform during the 6 January 2021 insurrection at the United States Capitol. Parler was essentially asking the court to force AWS to resume hosting its website while the litigation was pending. The court reviewed Parler’s claims and clarified the scope of the case:
    • In its Complaint, Parler asserts three claims: (1) for conspiracy in restraint of trade, in violation of the Sherman Act, 15 U.S.C. § 1; (2) for breach of contract; and (3) for tortious interference with business expectancy. AWS disputes all three claims, asserting that it is Parler, not AWS, that has violated the terms of the parties’ Agreement, and in particular AWS’s Acceptable Use Policy, which prohibits the “illegal, harmful, or offensive” use of AWS services.
    • It is important to note what this case is not about. Parler is not asserting a violation of any First Amendment rights, which exist only against a governmental entity, and not against a private company like AWS. And indeed, Parler has not disputed that at least some of the abusive and violent posts that gave rise to the issues in this case violate AWS’s Acceptable Use Policy. This motion also does not ask the Court to make a final ruling on the merits of Parler’s claims. As a motion for a preliminary injunction, before any discovery has been conducted, Parler seeks only to have the Court determine the likelihood that Parler will ultimately prevail on its claims, and to order AWS to restore service to Parler pending a full and fair litigation of the issues raised in the Complaint.
    • However, the court ruled against Parler:
      • Parler has failed to meet the standard set by Ninth Circuit and U.S. Supreme Court precedent for issuance of a preliminary injunction. To be clear, the Court is not dismissing Parler’s substantive underlying claims at this time. Parler has fallen far short, however, of demonstrating, as it must, that it has raised serious questions going to the merits of its claims, or that the balance of hardships tips sharply in its favor. It has also failed to demonstrate that it is likely to prevail on the merits of any of its three claims; that the balance of equities tips in its favor, let alone strongly so; or that the public interests lie in granting the injunction.
  • The United States (U.S.) Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued a statutorily required “National Strategy to Secure 5G Implementation Plan” and Appendices. The NTIA explained:
    • In accordance with the Secure 5G and Beyond Act of 2020, the Executive Branch has developed a comprehensive implementation plan. This implementation will be managed under the leadership of the National Security Council and the National Economic Council, supported by the National Telecommunications and Information Administration (NTIA), and with contributions from and coordination among a wide range of departments and agencies. The implementation plan took into account the 69 substantive comments in response to NTIA’s Request for Comments received from companies, industry associations, and think tanks representing a range of interests and aspects of the telecommunications ecosystem. Consistent with the National Strategy to Secure 5G, the implementation plan encompasses four lines of effort:
      • Line of Effort One: Facilitate Domestic 5G Rollout: The first line of effort establishes a new research and development initiative to develop advanced communications and networking capabilities to achieve security, resilience, safety, privacy, and coverage of 5G and beyond at an affordable cost. Advancement of United States leadership in Secure 5G and beyond systems and applications will be accomplished by enhancing centers of research and development and manufacturing. These efforts will leverage public-private partnerships spanning government, industry, academia, national laboratories, and international allies. This line of effort also intends to identify incentives and options to leverage trusted international suppliers, both to facilitate secure and competitive 5G buildouts, and to ensure the global competitiveness of United States manufacturers and suppliers.
      • Line of Effort Two: Assess Risks to & Identify Core Security Principles of 5G Infrastructure: The second line of effort is oriented toward identifying and assessing risks and vulnerabilities to 5G infrastructure, building on existing capabilities in assessing and managing supply chain risk. This work will also involve the development of criteria for trusted suppliers and the application of a vendor supply chain risk management template to enable security-conscious acquisition decision-making. Several agencies have responsibilities for assessing threats as the United States manages risks associated with the global and regional adoption of 5G network technology as well as developing mitigation strategies to combat any identified threats. These threat assessments take into account, as appropriate, requirements from entities such as the Committee on Foreign Investment in the United States (CFIUS), the Executive Order (E.O.) on Establishing the Committee for the Assessment of Foreign Participation in the United States Telecommunications Services Sector (Team Telecom), and the Federal Acquisition Security Council (FASC). In addition, this line of effort will identify security gaps in United States and international supply chains and an assessment of the global competitiveness and economic vulnerabilities of United States manufacturers and suppliers. Finally, this set of activities will include working closely with the private sector and other stakeholders to identify, develop, and apply core security principles for 5G infrastructure. These efforts will include leveraging the Enduring Security Framework (ESF), a working group under the Critical Infrastructure Partnership Advisory Council (CIPAC). These emerging security principles will be synchronized with or complementary to other 5G security principles, such as the “Prague Proposals” from the Prague 5G Security Conference held in May 2019.
      • Line of Effort Three: Address Risks to United States Economic and National Security during Development and Deployment of 5G Infrastructure Worldwide: The third line of effort involves addressing the risks to United States economic and national security during the development and deployment of 5G infrastructure worldwide. As a part of this effort, the United States will identify the incentives and policies necessary to close identified security gaps in close coordination with the private sector and through the continuous evaluation of commercial, security, and technological developments in 5G networks. A related activity is the identification of policies that can ensure the economic viability of the United States domestic industrial base, in coordination with the private sector through listening sessions and reviews of best practices. An equally important activity relates to the identification and assessment of “high risk” vendors in United States 5G infrastructure, through efforts such as the implementation of E.O. 13873, on “Securing the Information and Communications Technology and Services Supply Chain.” These efforts will build on the work of the CFIUS, the FASC, and Team Telecom reviews of certain Federal Communications Commission (FCC) licenses involving foreign ownership. This element of the implementation plan will also involve more intense engagement with the owners and operators of private sector communications infrastructure, systems equipment developers, and other critical infrastructure owners and operators. The engagements will involve sharing information on 5G and future generation wireless communications systems and infrastructure equipment. Such work will be conducted through the Network Security Information Exchange, the IT and Communications Sector and Government Coordinating Councils, the National Security Telecommunications Advisory Committee, and NTIA’s Communications Supply Chain Risk Information Partnership (C-SCRIP).
      • Line of Effort Four: Promote Responsible Global Development and Deployment of 5G: The fourth line of effort addresses the responsible global development and deployment of 5G technology. A key component of this line of effort is diplomatic outreach and engagement to advocate for the adoption and implementation of 5G security measures that prohibit the use of untrusted vendors in all parts of 5G networks. A related component involves the provision of technical assistance to mutual defense treaty allies and strategic partners of the United States to maximize the security of their 5G and future generations of wireless communications systems and infrastructure. The goal of providing financing support and technical assistance is to help enable countries and private companies to develop secure and trusted next generation networks that are free of untrusted vendors and that increase global connectivity. A key part of 5G deployment involves international standards development, thus the implementation plan outlines several steps in support of the goal of strengthening and expanding United States leadership in international standards bodies and voluntary consensus-based standards organizations, including strengthening coordination with and among the private sector. This line of effort will also include collaboration with allies and partners with regard to testing programs to ensure secure 5G and future wireless communications systems and infrastructure equipment, including spectrum-related testing. To successfully execute this work, continued close coordination between the United States Government, private sector, academic, and international government partners is required to ensure adoption of policies, standards, guidelines, and procurement strategies that reinforce 5G vendor diversity and foster market competition.
The overarching goals of this line of effort are to promote United States-led or linked technology solutions in the global market; remove and reduce regulatory and trade barriers that harm United States competitiveness; provide support for trusted vendors; and advocate for policies and laws that promote open, competitive markets for United States technology companies. This will also be supported through close collaboration with partners on options to advance the development and deployment of open interfaced, standards-based, and interoperable 5G networks.
  • The Federal Communications Commission (FCC) issued its annual “Broadband Deployment Report,” one of the last reports on FCC policy under the stewardship of former Chair Ajit Pai. In the agency’s press release, Pai claimed “[i]n just three years, the number of American consumers living in areas without access to fixed broadband at 25/3 Mbps has been nearly cut in half.” He added:
    • These successes resulted from forward-thinking policies that removed barriers to infrastructure investment and promoted competition and innovation.  I look forward to seeing the Commission continue its efforts to ensure that all Americans have broadband access.  Especially with the success of last year’s Rural Digital Opportunity Fund Phase I auction, I have no doubt that these figures will continue to improve as auction winners deploy networks in the areas for which they got FCC funding.
    • In relevant part, the FCC claimed:
      • Moreover, more than three-quarters of those in newly served areas, nearly 3.7 million, are located in rural areas, bringing the number of rural Americans in areas served by at least 25/3 Mbps to nearly 83%. Since 2016, the number of Americans living in rural areas lacking access to 25/3 Mbps service has fallen more than 46%.  As a result, the rural–urban divide is rapidly closing; the gap between the percentage of urban Americans and the percentage of rural Americans with access to 25/3 Mbps fixed broadband has been nearly halved, falling from 30 points at the end of 2016 to just 16 points at the end of 2019.
      • With regard to mobile broadband, since 2018, the number of Americans lacking access to 4G LTE mobile broadband with a median speed of 10/3 Mbps was reduced by more than 57%, including a nearly 54% decrease among rural Americans.  As of the end of 2019, the vast majority of Americans, 94%, had access to both 25/3 Mbps fixed broadband service and mobile broadband service with a median speed of 10/3 Mbps. Also as of the end of 2019, mobile providers now provide access to 5G capability to approximately 60% of Americans. These strides in mobile broadband deployment were fueled by more than $29 billion of capital expenditures in 2019 (roughly 18% of global mobile capital spending), the largest mobile broadband investment since 2015.
      • With this Report, the Commission fulfills the Congressional directive to report each year on the progress made in deploying broadband to all Americans. Despite this finding, our work to close the digital divide is not complete.  The Commission will continue its efforts to ensure that all Americans have the ability to access broadband.
  • The chair of the House Oversight and Reform Committee wrote a letter asking Federal Bureau of Investigation (FBI) Director Christopher Wray to conduct “a comprehensive investigation into the role that the social media site Parler played in the assault on the Capitol on January 6.” Chair Carolyn Maloney (D-NY) indicated her committee is also investigating the events of 6 January, suggesting there could be hearings soon on the matter. In the letter, Maloney asserted:
    • It is clear that Parler houses additional evidence critical to investigations of the attack on the Capitol. One commentator has already used geolocation data associated with Parler to track 1,200 videos that were uploaded in Washington, D.C. on January 6.
    • Questions have also been raised about Parler’s financing and its ties to Russia, which the Intelligence Community has warned is continuing to use social media and other measures to sow discord in the United States and interfere with our democracy. For example, posters on Parler have reportedly been traced back to Russian disinformation campaigns. The company was founded by John Matze shortly after he traveled in Russia with his wife, who is Russian and whose family reportedly has ties to the Russian government. Concerns about the company’s connections to Russia have grown since the company re-emerged on a Russian hosting service, DDos-Guard, after being denied services by Amazon Web Services. DDos-Guard has ties to the Russian government and hosts the websites of other far-right extremist groups, as well as the terrorist group Hamas. According to another recent report, “DDoS-Guard’s other clients include the Russian ministry of defence, as well as media organisations in Moscow.”
    • Given these concerns, we ask that the FBI undertake a robust review of the role played by Parler in the January 6 attacks, including (1) as a potential facilitator of planning and incitement related to the attacks, (2) as a repository of key evidence posted by users on its site, and (3) as potential conduit for foreign governments who may be financing civil unrest in the United States.
  • Microsoft released further detailed, technical findings from its investigation into the wide-ranging SolarWinds hack. Last month, Microsoft revealed that its source code had been accessed as part of the Russian hack and stressed that source code for its products had not been changed or tampered with. In its update on its SolarWinds investigation, Microsoft explained:
    • As we continue to gain deeper understanding of the Solorigate attack, we get a clearer picture of the skill level of the attackers and the extent of planning they put into pulling off one of the most sophisticated attacks in recent history. The combination of a complex attack chain and a protracted operation means that defensive solutions need to have comprehensive cross-domain visibility into attacker activity and provide months of historical data with powerful hunting tools to investigate as far back as necessary.
    • More than a month into the discovery of Solorigate, investigations continue to unearth new details that prove it is one of the most sophisticated and protracted intrusion attacks of the decade. Our continued analysis of threat data shows that the attackers behind Solorigate are skilled campaign operators who carefully planned and executed the attack, remaining elusive while maintaining persistence. These attackers appear to be knowledgeable about operations security and performing malicious activity with minimal footprint. In this blog, we’ll share new information to help better understand how the attack transpired. Our goal is to continue empowering the defender community by helping to increase their ability to hunt for the earliest artifacts of compromise and protect their networks from this threat.
    • As mentioned, in a 31 December 2020 blog posting, Microsoft revealed:
      • Our investigation has, however, revealed attempted activities beyond just the presence of malicious SolarWinds code in our environment. This activity has not put at risk the security of our services or any customer data, but we want to be transparent and share what we’re learning as we combat what we believe is a very sophisticated nation-state actor.
      • We detected unusual activity with a small number of internal accounts and upon review, we discovered one account had been used to view source code in a number of source code repositories. The account did not have permissions to modify any code or engineering systems and our investigation further confirmed no changes were made. These accounts were investigated and remediated.
  • The Trump Administration’s United States Trade Representative (USTR) weighed in on Australia’s proposed law to make Google, Facebook, and other technology companies pay for using Australian media content. The USTR reiterated the United States (U.S.) position that forcing U.S. firms to pay for content as proposed is unacceptable, a view the Biden Administration is not likely to change. The Australian Senate committee considering the “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020” had asked for input. In relevant part, the USTR argued:
    • the U.S. Government is concerned that an attempt, through legislation, to regulate the competitive positions of specific players in a fast-evolving digital market, to the clear detriment of two U.S. firms, may result in harmful outcomes. There may also be long-lasting negative consequences for U.S. and Australian firms, as well as Australian consumers. While the revised draft has partially addressed some U.S. concerns—including an effort to move towards a more balanced evaluation of the value news businesses and platforms offer each other in the context of mandatory arbitration—significant issues remain.
  • Plaintiff Citizen Power Initiatives for China (CPIFC) and six unnamed California residents who use WeChat have filed suit in California state court against WeChat and its parent, Tencent. They argue that the government of the People’s Republic of China (PRC) controls WeChat and forces Tencent to turn over user data to the PRC in violation of California law. They make other allegations of unlawful conduct, including denying users in California the right to access funds through the app in the PRC. They are seeking class action status in order to bring a larger action against the PRC company. The plaintiffs claimed:
    • This case arises from Tencent’s practices of profiting from politically motivated, pro-Chinese Communist Party (“CCP”) censorship and surveillance of California WeChat users (“challenged practices”), which includes the practice of turning over private user data and communications to the government of the People’s Republic of China (“PRC government,” and, together with the CCP, the “Party-state”), and which inflicts an array of harms. Specifically, the challenged practices include Tencent’s practices of: (i) turning over private California WeChat user data and communications to the Party-state; (ii) profiting by using California WeChat user data and communications to improve Tencent’s censorship and surveillance algorithms; (iii) censoring and surveilling California WeChat user communications for content perceived as critical of the Party-state; (iv) suspending, blocking, or deleting California WeChat user accounts and/or data over such content; and (v) prohibiting California WeChat users from withdrawing funds stored in their WeChat accounts when those users do not possess an account with a PRC financial institution subject to monitoring by the Party-state.
    • This action also challenges provisions in Tencent’s terms of service and privacy policy which, taken together, are oppressive, obfuscatory, and incoherent (“challenged provisions”). The challenged provisions include privacy-related terms that are deliberately vague and ambiguous with respect to whether the challenged practices are permitted or prohibited (“vague and ambiguous privacy provisions”), which in turn benefits Tencent by reserving to it the right to adopt self-interested interpretations. However, California WeChat users are entitled to clear, unambiguous, and testable language with respect to the nature and scope of their privacy on WeChat—in other words, to honesty and transparency.
    • Yet, even if the challenged practices were unambiguously prohibited under the challenged provisions, the challenged provisions include terms that make it practically impossible for California WeChat users to seek meaningful redress for the harms caused by those practices (“remedy-limiting provisions”). 
    • Finally, the challenged provisions include terms that impermissibly discriminate against California WeChat users who happen to be citizens of the PRC (“long-arm provisions”).
  • Representatives Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) wrote the CEOs of Facebook, Twitter, and YouTube “urging the companies to address the fundamental design features of their social networks that facilitate the spread of extreme, radicalizing content to their users” per their press release. Last fall, Eshoo and Malinowski introduced the “Protecting Americans from Dangerous Algorithms Act” (H.R.8636) that would subject platforms like Facebook, Twitter, and YouTube to civil suits on the basis of the algorithms used to amplify content that violates the civil rights of others or results in international terrorism. They asserted:
    • The lawmakers note that the rioters who attacked the Capitol earlier this month were radicalized in part in digital echo chambers that these platforms designed, built, and maintained, and that the platforms are partially responsible for undermining our shared sense of objective reality, for intensifying fringe political beliefs, for facilitating connections between extremists, leading some of them to commit real-world, physical violence.
  • The United States (U.S.) Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced “[u]sing enterprise risk management best practices will be a focus for CISA in 2021, and today the National Risk Management Center (NRMC) is launching a Systemic Cyber Risk Reduction Venture to organize our work to reduce shared risk to the Nation’s security and economic security.” CISA explained that “[w]e anticipate three overarching lines of effort:
    • Build the Underlying Architecture for Cyber Risk Analysis to Critical Infrastructure. The critical infrastructure community is underpinned by a dependent web of hardware, software, services, and other connected componentry.
    • Cyber Risk Metric Development. Supporting efforts to better understand the impact of cyber risk across the critical infrastructure community will require developing usable metrics to quantify cyber risk in terms of functional loss. There’s no need to get bogged down with Greek equations with decimal place-level specificity. Metrics that provide even directional or comparative indicators are enormously helpful.
    • Promoting Tools to Address Concentrated Sources of Cyber Risk. Central to our venture to reduce systemic cyber risk is finding concentrated sources of risk that, if mitigated, provide heightened risk management bang for the buck.
  • The President’s Council of Advisors on Science and Technology (PCAST) issued its first assessment since 2015 of the government program that funds research and development of advanced information technology. PCAST explained:
    • As required by statute, PCAST is tasked with periodically reviewing the Networking and Information Technology Research and Development (NITRD) Program, the Nation’s primary source of federally funded research and development in advanced information technologies such as computing, networking, and software. This report examines the NITRD Program’s progress since the last review was conducted in 2015, explores emerging areas of interest relevant to the NITRD Program, and presents PCAST’s findings and recommendations.
    • PCAST made the following recommendations:
      • Recommendation 1: The current NITRD Program model and its approach to coordinating foundational research in NIT fields across participating agencies should continue as constituted, with the following modifications:
        • NITRD groups should continue to review the PCAs regularly using a fast track action committee (FTAC) and adjust as needed (with a frequency of perhaps every 3 years rather than every 5–6 years, as had been recommended in the 2015 NITRD Review). It should also continue to review IWGs periodically, as recommended in the 2015 NITRD Review.
        • The NITRD Program should continue to pursue incremental modifications of existing structures (e.g., IWGs, PCAs) rather than engage in wholesale reorganizations at this time.
        • When launching wholly new IWGs and PCAs (e.g., such as the AI IWG and AI PCA), the NITRD Program should consider showing clearly in the annual NITRD Supplement to the President’s Budget which lines of effort derive from previous structures and which are wholly new programmatic areas and funding lines. This will be especially important should NITRD groups increase the frequency with which they review and modify PCAs.
      • Recommendation 2: The NITRD Program should examine current structures and operations to identify opportunities for greater multi-sector engagement in its activities. Opportunities include the following:
        • Amplify multi-sector outreach and engagement efforts. While the NITRD Program notifies the public about its convening activities, it could augment its outreach.
        • Expand the NITRD Program’s efforts to track non-U.S. coordinated NIT efforts and collaborate with international efforts where appropriate. This should be done in coordination with the NSTC International S&T Coordination Subcommittee to avoid duplicating efforts.
      • Recommendation 3: The NITRD Program should examine current structures and operations to identify opportunities for improving coordination in IotF areas related to the program. Opportunities could include:
        • AI—continue coordination efforts within the NITRD Program and between NITRD IWGs and the NSTC Select Committee on AI and the Machine Learning and Artificial Intelligence (MLAI) Subcommittee.
        • Advanced communications networks—continue coordination efforts within the NITRD Program through the Subcommittee and the LSN and WSRD IWGs.
        • QIS—increase coordination with the NQCO and the NSTC QIS Subcommittee, particularly on topics such as post-quantum cryptography R&D and other implications of the development of quantum technologies on the NIT landscape with advances in QIS.
        • Biotechnology—coordinate with NSTC bodies working in biosciences-related areas such as the Biodefense R&D (BDRD) Subcommittee and the Biological Sciences Subcommittee (BSSC).
        • Advanced manufacturing—coordinate with the NSTC Subcommittee on Advanced Manufacturing and large-scale manufacturing R&D efforts such as the Manufacturing USA Institutes.
      • Recommendation 4: The NITRD Program should incorporate microelectronics R&D explicitly into its programmatic activities.
        • This could take the form of a separate IWG or the incorporation of hardware/components R&D into existing IWGs.
        • There should be stronger NNI-NITRD coordination to ensure alignment of R&D strategies and programmatic activities.
      • Recommendation 5: The NITRD Program should further examine ways it can coordinate its participating agencies—such as through an IWG or other multiagency bodies—to ensure they support and emphasize the following:
        • STEM education, including PhD fellowships, in NIT.
        • Programs at the intersection and convergence of computational science and other fields (CS + X) at 2-year and 4-year educational institutions.
        • Retraining and upskilling the non-technical workforce to participate in the cyber-ready workforce.
        • A diverse and inclusive NIT workforce across all levels of technical staff, engineers, and scientists.
        • Strengthen efforts to attract and retain international students, scientists, and engineers who wish to contribute to NIT R&D in the United States. These efforts should be informed by conducting studies of the role that international talent plays in the U.S. NIT workforce and any factors affecting recent changes in recruitment and retention.

Coming Events

  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on the nomination of Gina Raimondo to be the Secretary of Commerce on 26 January.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July 2021, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Photoholgic on Unsplash

New Section 230 Bill Would Make Platforms Liable For Civil Rights Violations

A senior Democratic Member of the primary committee of jurisdiction over 47 USC 230 (aka Section 230) has floated a discussion draft that would remove liability protection for technology companies for targeted advertising that violates civil rights laws. This bill is likely one of the opening salvos in the coming war on Capitol Hill between Democrats, Republicans, technology companies, and other stakeholders. And even though this bill seems narrow and leaves to one side larger issues on how Section 230 should be reformed, its chances for enactment are not high.

Representative Yvette Clarke’s (D-NY) “Civil Rights Modernization Act of 2021” would amend 47 USC 230 through the addition of a new section that would make technology companies open to lawsuits for targeted advertising that violates civil rights laws. In her press release, she explained the problem her bill is designed to solve:

There is a history of discriminatory targeting of advertisements that has harmed society by allowing consumers to be excluded from seeing certain ads. These harms are not theoretical and occur in real-time – with particularly troubling implications for communities of color. Personal data such as gender, race, hobbies and interests, and zip code are used to limit the online visibility of many opportunities, thus perpetuating inequities in housing opportunities, credit and employment.

Accordingly, under the bill, Section 230 would no longer shield tech companies for targeted advertising from civil rights enforcement brought by governments, civil lawsuits brought by individuals alleging civil rights violations, or criminal prosecution for violating civil rights laws.

Clarke’s bill defines civil rights law as:

  • any Federal, State, or local law that prohibits discrimination on the basis of a protected class or status;
  • any other Federal law that is enforced, in whole or in part, by the Civil Rights Division of the Department of Justice; and
  • any Federal, State, or local law that prohibits the dissemination of false or misleading information intended, with respect to an election for public office, to prevent voters from casting their ballots, to prevent voters from voting for the candidate of their choice, to intimidate the electorate, or to undermine the integrity of the electoral process.

The first clause would seem to get around the problem of identifying protected classes or statuses through its open-ended language, which makes any civil rights statute’s definition an operative one. Consequently, if California defines all members of the LGBTQI community as a protected class and Texas does not (to be clear, I’m not sure how either state defines protected classes under its civil rights laws), then residents of California would be able to sue for targeted advertising that violates their civil rights as protected by law. The second clause is fairly straightforward. And the third clause would seem like a dealbreaker for Republicans, since it would incentivize Facebook, Twitter, and others to remove targeted advertising that, say, calls into question the validity and legality of President Joe Biden’s victory over former President Donald Trump in the 2020 election. But more on that later.

Targeted advertising means the use of an algorithm or other means to aim advertising at certain subsets of users or groups. It bears noting that Clarke’s bill does not reach all content that is contrary to civil rights, only targeted advertising (i.e., content a party is paying a platform to post). This latter group would also seem to encompass influencers or public figures who receive some sort of remuneration other than actual money. So, again, hosting content that aims to impinge on the civil rights of others but falls short of being a federal crime (e.g., making the case that Irish Americans should not be allowed to own unicorns does not seem to violate federal law) would still be protected by Section 230. To cite another possible example, if a Fox News figure uses Twitter to advise African Americans not to vote for Democrats in the 2022 mid-term election because the Democratic Party has not delivered for them and takes their votes for granted, this would be outside the scope of what the bill is trying to do so long as the person is not being paid to target that message to African Americans. What’s more, troll farms would seem to fall outside the bill, as would statements by average citizens, political figures, and others that aim to suppress, say, minority turnout or the voting of turnip farmers in Idaho.

And so, any platform hosting targeted advertising aiming to suppress the votes of African Americans, a key part of the Democratic coalition, as happened in 2016 and 2020, could be sued because the usual Section 230 protection would be gone. This would create obvious incentives for the Facebooks, Twitters, and others to better police and take down content or face lawsuits. Had this law been in place for the 2016 election, paid Russian misinformation and propaganda that aimed to depress the African American vote would have opened Facebook and Twitter to legal liability. In the same vein, hosting discriminatory housing advertisements, as the Trump Administration’s Department of Housing and Urban Development sued Facebook for allegedly doing, would expose Facebook to lawsuits from affected people under a range of civil rights statutes. Any targeted advertising that does not show employment ads to certain classes of people (e.g., minorities, older workers, etc.) would also seem to lose Section 230 protection. The same would apparently be true of targeted advertising in the vein of the redlining that prevented many African Americans from accessing affordable mortgage financing in the post-World War II era and continues in certain forms today.

This approach follows the path of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164), a law enacted to remove Section 230 protection for websites that hosted or facilitated prostitution, trafficking, or child sexual exploitation. Pages on sites like Craigslist and Backpage with content outside Section 230’s remaining protection went dark overnight after enactment to avoid liability. Clarke may be hoping the same happens if her bill, as drafted, is enacted. Like the earlier bill, the Civil Rights Modernization Act of 2021 is targeted and discrete, leaving to one side the larger debate about Section 230, which may lend weight to its chances of enactment.

However, there are some reasons to suggest this bill would not be acceptable to Republicans. First, they want Section 230 reform to address the bias they claim social media platforms have against them even though no serious evidence has ever been provided to prove these claims. Clarke’s bill seems to sidestep that issue entirely, and so Republicans, at the least, would likely want to add their Section 230 reform, which may sink the bill with Democrats. Additionally, Republicans generally oppose expanding civil rights in federal law, particularly the language that would make election suppression subject to civil rights laws and litigation, and so this bill seems like it would be a nonstarter with Republicans whose votes would be needed in the Senate for passage.


Photo by Rodion Kutsaev on Unsplash

Further Reading, Other Developments, and Coming Events (13 and 14 January 2021)

Further Reading

  • “YouTube Suspends Trump’s Channel for at Least Seven Days” By Daisuke Wakabayashi — The New York Times. Even Google is wading further into the water. Its YouTube platform flagged a video of President Donald Trump’s for inciting violence, and citing the “ongoing potential for violence,” it ruled that Trump and his team will not be able to upload videos for seven days and that the comments section would be permanently disabled. YouTube has been the least inclined of the major platforms to moderate content and has somehow escaped the scrutiny and opprobrium Facebook and Twitter have faced even though those platforms have been more active in policing offensive content.
  • “Online misinformation that led to Capitol siege is ‘radicalization,’ say researchers” By Elizabeth Culliford — Reuters. Experts in online disinformation say the different conspiracy movements that impelled followers to attack the United States (U.S.) Capitol are the result of radicalization. Online activities translated into real-world violence, they say. They also decried the reactive posture of social media platforms, which waited for an insurrection before taking steps experts and others have been begging them to take.
  • “Uganda orders all social media to be blocked – letter” — Reuters. In response to Facebook blocking a number of government-related accounts for “Coordinated Inauthentic Behaviour” (CIB), the Ugandan government has blocked all access to social media ahead of its elections. In a letter seen by Reuters, the Uganda Communications Commission directed telecommunications providers “to immediately suspend any access and use, direct or otherwise, of all social media platforms and online messaging applications over your network until further notice.” This may become standard practice for many regimes around the world if social media companies crack down on government propaganda.
  • “BlackBerry sells 90 patents to Huawei, covering key smartphone technology advances” By Sean Silcoff — The Globe and Mail. Critics of a deal to assign 90 key BlackBerry patents to Huawei are calling on the government of Prime Minister Justin Trudeau to be more involved in protecting Canadian intellectual property and innovations.
  • “‘Threat to democracy is real’: MPs call for social media code of conduct” By David Crowe and Nick Bonyhady — The Sydney Morning Herald. There have been mixed responses in Australia’s Parliament to social media platforms banning President Donald Trump after his role in inciting the violence at the United States (U.S.) Capitol. Many agree with the platforms, some disagree strenuously in light of other inflammatory content that is not taken down, and many want greater rationality and transparency in how platforms make these decisions. And since Canberra has been among the most active governments in regulating technology, the episode may inform the drafting of its “Online Safety Bill,” which may place legal obligations on social media platforms.
  • “Poland plans to make censoring of social media accounts illegal” By Shaun Walker — The Guardian. Governments around the world continue to respond to a number of social media companies deciding to deplatform United States (U.S.) President Donald Trump. In Warsaw there is a draft bill that would make deplatforming a person illegal unless the offense is also contrary to Polish law. The spin is that the right-wing government in Warsaw is less interested in protecting free speech and more interested in propagating the same grievances as the right wing in the United States. Therefore, this push in Poland may be more about messaging and trying to cow social media companies and less about protecting free speech, especially speech with which the government disagrees (e.g., advocates for LGBTQI rights have been silenced in Poland).
  • “Facebook, Twitter could face punishing regulation for their role in U.S. Capitol riot, Democrats say” By Tony Romm — The Washington Post. Democrats were already furious with social media companies for what they considered their lax governance of content that clearly violated terms of service and policies. These companies are bracing for an expected barrage of hearings and legislation with the Democrats controlling the White House, House, and Senate.
  • “Georgia results sweep away tech’s regulatory logjam” By Margaret Harding McGill and Ashley Gold — Axios. This is a nice survey of possible policy priorities at the agencies and in the Congress over the next two years with the Democrats in control of both.
  • “The Capitol rioters put themselves all over social media. Now they’re getting arrested.” By Sara Morrison — Recode. Will the attack on the United States (U.S.) Capitol be the first time a major crime is solved largely by evidence provided by the accused? It is sure looking that way as law enforcement continues to use the rioters’ posts to apprehend, arrest, and charge them. Additionally, in the same way people who acted in racist and entitled ways were caught through crowd-sourced identification pushes (e.g., Amy Cooper in Central Park threatening an African American gentleman with calling the police even though he had asked her to put her dog on a leash), rioters are also being identified.
  • “CISA: SolarWinds Hackers Got Into Networks by Guessing Passwords” By Mariam Baksh — Nextgov. The Cybersecurity and Infrastructure Security Agency (CISA) has updated its alert on the SolarWinds hack to reflect its findings. CISA explained:
    • CISA incident response investigations have identified that initial access in some cases was obtained by password guessing [T1101.001], password spraying [T1101.003], and inappropriately secured administrative credentials [T1078] accessible via external remote access services [T1133]. Initial access root cause analysis is still ongoing in a number of response activities and CISA will update this section as additional initial vectors are identified.
  • “A Facial Recognition Company Says That Viral Washington Times “Antifa” Story Is False” By Craig Silverman — BuzzFeed News. XRVision denied the Washington Times’ account that the company had identified antifa protestors among the rioters at the United States (U.S.) Capitol (archived here). The company said it had identified two Neo-Nazis and a QAnon adherent. Even though the story was retracted and a corrected version issued, some, such as Trump supporter Representative Matt Gaetz (R-FL), still claimed the original story had merit.

Other Developments

  • The United States (U.S.) Trade Representative (USTR) announced that it would not act on the basis of three completed reports on the Digital Services Taxes (DST) three nations have put in place and that it would not proceed with retaliatory tariffs against France, one of the first nations in the world to enact a DST. Last year, the Organization for Economic Co-operation and Development convened multilateral talks to resolve differences on how a global digital services tax would ideally function, with most of the nations involved arguing for a 2% tax assessed in the nation where the transaction occurs as opposed to where the company is headquartered. European Union (EU) officials claimed an agreement was possible, but the U.S. negotiators walked away from the table. It will fall to the Biden Administration to act on these USTR DST investigations should it choose to.
    • In its press release, the USTR stated it would “suspend the tariff action in the Section 301 investigation of France’s Digital Services Tax (DST).”
      • The USTR added:
        • The additional tariffs on certain products of France were announced in July 2020, and were scheduled to go into effect on January 6, 2021.  The U.S. Trade Representative has decided to suspend the tariffs in light of the ongoing investigation of similar DSTs adopted or under consideration in ten other jurisdictions.  Those investigations have significantly progressed, but have not yet reached a determination on possible trade actions.  A suspension of the tariff action in the France DST investigation will promote a coordinated response in all of the ongoing DST investigations.
      • In its December 2019 report, the USTR determined “that France’s DST is unreasonable or discriminatory and burdens or restricts U.S. commerce, and therefore is actionable under sections 301(b) and 304(a) of the Trade Act (19 U.S.C. 2411(b) and 2414(a))” and proposed a range of measures in retaliation.
    • The USTR also “issued findings in Section 301 investigations of Digital Service Taxes (DSTs) adopted by India, Italy, and Turkey, concluding that each of the DSTs discriminates against U.S. companies, is inconsistent with prevailing principles of international taxation, and burdens or restricts U.S. commerce.” The USTR stated it “is not taking any specific actions in connection with the findings at this time but will continue to evaluate all available options.” The USTR added:
      • The Section 301 investigations of the DSTs adopted by India, Italy, and Turkey were initiated in June 2020, along with investigations of DSTs adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, Indonesia, Spain, and the United Kingdom.  USTR expects to announce the progress or completion of additional DST investigations in the near future. 
  • The United Kingdom’s Competition and Markets Authority (CMA) has started investigating Google’s ‘Privacy Sandbox’ project to “assess whether the proposals could cause advertising spend to become even more concentrated on Google’s ecosystem at the expense of its competitors.” The CMA asserted:
    • Third party cookies currently play a fundamental role online and in digital advertising. They help businesses target advertising effectively and fund free online content for consumers, such as newspapers. But there have also been concerns about their legality and use from a privacy perspective, as they allow consumers’ behaviour to be tracked across the web in ways that many consumers may feel uncomfortable with and may find difficult to understand.
    • Google’s announced changes – known collectively as the ‘Privacy Sandbox’ project – would disable third party cookies on the Chrome browser and Chromium browser engine and replace them with a new set of tools for targeting advertising and other functionality that they say will protect consumers’ privacy to a greater extent. The project is already under way, but Google’s final proposals have not yet been decided or implemented. In its recent market study into online platforms and digital advertising, the CMA highlighted a number of concerns about their potential impact, including that they could undermine the ability of publishers to generate revenue and undermine competition in digital advertising, entrenching Google’s market power.
  • Facebook took down coordinated inauthentic behavior (CIB) originating from France and Russia, seeking to allegedly influence nations in Africa and the Middle East. Facebook asserted:
    • Each of the networks we removed today targeted people outside of their country of origin, primarily targeting Africa, and also some countries in the Middle East. We found all three of them as a result of our proactive internal investigations and worked with external researchers to assess the full scope of these activities across the internet.
    • While we’ve seen influence operations target the same regions in the past, this was the first time our team found two campaigns — from France and Russia — actively engage with one another, including by befriending, commenting and criticizing the opposing side for being fake. It appears that this Russian network was an attempt to rebuild their operations after our October 2019 takedown, which also coincided with a notable shift in focus of the French campaign to begin to post about Russia’s manipulation campaigns in Africa.
    • Unlike the operation from France, both Russia-linked networks relied on local nationals in the countries they targeted to generate content and manage their activity across internet services. This is consistent with cases we exposed in the past, including in Ghana and the US, where we saw the Russian campaigns co-opt authentic voices to join their influence operations, likely to avoid detection and help appear more authentic. Despite these efforts, our investigation identified some links between these two Russian campaigns and also with our past enforcements.
  • Two of the top Democrats on the House Energy and Commerce Committee, along with another Democrat, wrote nine internet service providers (ISP) “questioning their commitment to consumers amid ISPs raising prices and imposing data caps during the COVID-19 pandemic.” The letters were sent by Committee Chair Frank Pallone, Jr. (D-NJ), Communications and Technology Subcommittee Chairman Mike Doyle (D-PA), and Representative Jerry McNerney (D-CA).
    • Pallone, Doyle, and McNerney took issue with the companies raising prices and imposing data caps after having pledged not to do so at the behest of the Federal Communications Commission (FCC). They asked the companies to answer a series of questions:
      • Did the company participate in the FCC’s “Keep Americans Connected” pledge?
      • Has the company increased prices for fixed or mobile consumer internet and fixed or phone service since the start of the pandemic, or do they plan to raise prices on such plans within the next six months? 
      • Prior to March 2020, did any of the company’s service plans impose a maximum data consumption threshold on its subscribers?
      • Since March 2020, has the company modified or imposed any new maximum data consumption thresholds on service plans, or do they plan to do so within the next six months? 
      • Did the company stop disconnecting customers’ internet or telephone service due to their inability to pay during the pandemic? 
      • Does the company offer a plan designed for low-income households, or a plan established in March or later to help students and families with connectivity during the pandemic?
      • Beyond service offerings for low-income customers, what steps is the company currently taking to assist individuals and families facing financial hardship due to circumstances related to COVID-19? 
  • The United States (U.S.) Department of Homeland Security (DHS) issued a “Data Security Business Advisory: Risks and Considerations for Businesses Using Data Services and Equipment from Firms Linked to the People’s Republic of China,” that “describes the data-related risks American businesses face as a result of the actions of the People’s Republic of China (PRC) and outlines steps that businesses can take to mitigate these risks.” DHS generally recommended:
    • Businesses and individuals that operate in the PRC or with PRC firms or entities should scrutinize any business relationship that provides access to data—whether business confidential, trade secrets, customer personally identifiable information (PII), or other sensitive information. Businesses should identify the sensitive personal and proprietary information in their possession. To the extent possible, they should minimize the amount of at-risk data being stored and used in the PRC or in places accessible by PRC authorities. Robust due diligence and transaction monitoring are also critical for addressing potential legal exposure, reputation risks, and unfair advantage that data and intellectual property theft would provide competitors. Businesses should seek to acquire a thorough understanding of the ownership of data service providers, location of data infrastructure, and any tangential foreign business relationships and significant foreign investors.
  • The Federal Communications Commission (FCC) is asking for comments on the $3.2 billion Emergency Broadband Benefit Program established in the “Consolidated Appropriations Act, 2021” (H.R. 133). Comments are due by 16 February 2021. The FCC noted “eligible households may receive a discount off the cost of broadband service and certain connected devices during an emergency period relating to the COVID-19 pandemic, and participating providers can receive a reimbursement for such discounts.” The FCC explained the program in further detail:
    • Pursuant to the Consolidated Appropriations Act, the Emergency Broadband Benefit Program will use available funding from the Emergency Broadband Connectivity Fund to support participating providers’ provision of certain broadband services and connected devices to qualifying households.
    • To participate in the program, a provider must elect to participate and either be designated as an eligible telecommunications carrier or be approved by the Commission. Participating providers will make available to eligible households a monthly discount off the standard rate for an Internet service offering and associated equipment, up to $50.00 per month.
    • On Tribal lands, the monthly discount may be up to $75.00 per month. Participating providers will receive reimbursement from the Emergency Broadband Benefit Program for the discounts provided.
    • Participating providers that also supply an eligible household with a laptop, desktop computer, or tablet (connected device) for use during the emergency period may receive a single reimbursement of up to $100.00 for the connected device, if the charge to the eligible household for that device is more than $10.00 but less than $50.00.  An eligible household may receive only one supported device.  Providers must submit certain certifications to the Commission to receive reimbursement from the program, and the Commission is required to adopt audit requirements to ensure provider compliance and prevent waste, fraud, and abuse.
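For illustration only, the dollar caps described above can be sketched as a few small functions. This is a hypothetical sketch of the limits as summarized by the FCC, not the Commission’s actual reimbursement mechanics; the function names are my own.

```python
# Hypothetical sketch of the Emergency Broadband Benefit dollar caps as the
# FCC summarized them; this is NOT the Commission's actual reimbursement logic.

def monthly_service_reimbursement(standard_rate: float,
                                  on_tribal_lands: bool = False) -> float:
    """Monthly discount off the standard rate, capped at $50 ($75 on Tribal lands)."""
    cap = 75.00 if on_tribal_lands else 50.00
    return min(standard_rate, cap)

def max_device_reimbursement(charge_to_household: float) -> float:
    """Maximum one-time device reimbursement ($100), available only when the
    household is charged more than $10 but less than $50 for the device."""
    if 10.00 < charge_to_household < 50.00:
        return 100.00
    return 0.00

print(monthly_service_reimbursement(65.00))                        # 50.0 (capped)
print(monthly_service_reimbursement(65.00, on_tribal_lands=True))  # 65.0 (under the $75 cap)
print(max_device_reimbursement(25.00))                             # 100.0
print(max_device_reimbursement(5.00))                              # 0.0 (charge too low to qualify)
```

Note the device rule’s odd shape: a household charged $50 or more, or $10 or less, for the device generates no device reimbursement at all under the statute as summarized.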
  • The Biden-Harris transition team named the National Security Agency’s (NSA) Director of Cybersecurity as the Biden White House’s Deputy National Security Advisor for Cyber and Emerging Technology. Anne Neuberger’s portfolio at the NSA included “lead[ing] NSA’s cybersecurity mission, including emerging technology areas like quantum-resistant cryptography.” At the National Security Council, Neuberger will work to coordinate cybersecurity and emerging technology policy across agencies and funnel policy options up to the full NSC and ultimately the President. It is not clear how Neuberger’s portfolio will interact with the newly created National Cyber Director, a position that, thus far, has remained without a nominee.
    • The transition noted “[p]rior to this role, she led NSA’s Election Security effort and served as Assistant Deputy Director of NSA’s Operations Directorate, overseeing foreign intelligence and cybersecurity operations…[and] also previously served as NSA’s first Chief Risk Officer, as Director of NSA’s Commercial Solutions Center, as Director of the Enduring Security Framework cybersecurity public-private partnership, as the Navy’s Deputy Chief Management Officer, and as a White House Fellow.” The transition stated that “[p]rior to joining government service, Neuberger was Senior Vice President of Operations at American Stock Transfer & Trust Company (AST), where she directed technology and operations.”
  • The Federal Communications Commission (FCC) published a final rule in response to the United States (U.S.) Court of Appeals for the District of Columbia’s decision striking down three aspects of the FCC’s rollback of net neutrality, “Restoring Internet Freedom Order.” The FCC explained the final rule:
    • responds to a remand from the U.S. Court of Appeals for the D.C. Circuit directing the Commission to assess the effects of the Commission’s Restoring Internet Freedom Order on public safety, pole attachments, and the statutory basis for broadband internet access service’s inclusion in the universal service Lifeline program. This document also amends the Commission’s rules to remove broadband internet service from the list of services supported by the universal service Lifeline program, while preserving the Commission’s authority to fund broadband internet access service through the Lifeline program.
    • In 2014, the U.S. Court of Appeals for the District of Columbia struck down a 2010 FCC net neutrality order in Verizon v. FCC, but the court did suggest a path forward. The court held the FCC “reasonably interpreted section 706 to empower it to promulgate rules governing broadband providers’ treatment of Internet traffic, and its justification for the specific rules at issue here—that they will preserve and facilitate the “virtuous circle” of innovation that has driven the explosive growth of the Internet—is reasonable and supported by substantial evidence.” The court added that “even though the Commission has general authority to regulate in this arena, it may not impose requirements that contravene express statutory mandates…[and] [g]iven that the Commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers, the Communications Act expressly prohibits the Commission from nonetheless regulating them as such.” However, in 2016, the same court upheld the 2015 net neutrality regulations in U.S. Telecom Association v. FCC, and it later upheld most of the Trump Administration FCC’s repeal of the earlier net neutrality rules.
    • However, the D.C. Circuit declined to accept the FCC’s attempt to preempt all contrary state laws and struck down this part of the FCC’s rulemaking. Consequently, states and local jurisdictions may now be free to enact regulations of internet services along the lines of the FCC’s now repealed Open Internet Order. The D.C. Circuit also sent the case back to the FCC for further consideration on three points.
    • In its request for comments on how to respond to the remand, the FCC summarized the three issues: public safety, pole attachments, and the Lifeline Program:
      • Public Safety.  First, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect public safety. In the Restoring Internet Freedom Order, the Commission predicted, for example, that permitting paid prioritization arrangements would “increase network innovation,” “lead[] to higher investment in broadband capacity as well as greater innovation on the edge provider side of the market,” and “likely . . . be used to deliver enhanced service for applications that need QoS [i.e., quality of service] guarantees.” Could the network improvements made possible by prioritization arrangements benefit public safety applications—for example, by enabling the more rapid, reliable transmission of public safety-related communications during emergencies? 
      • Pole Attachments.  Second, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect the regulation of pole attachments in states subject to federal regulation.  To what extent are ISPs’ pole attachments subject to Commission authority in non-reverse preemption states by virtue of the ISPs’ provision of cable or telecommunications services covered by section 224?  What impact would the inapplicability of section 224 to broadband-only providers have on their access to poles?  Have pole owners, following the Order, “increase[d] pole attachment rates or inhibit[ed] broadband providers from attaching equipment”?  How could we use metrics like increases or decreases in broadband deployment to measure the impact the Order has had on pole attachment practices?  Are there any other impacts on the regulation of pole attachments from the changes adopted in the Order?  Finally, how do any potential considerations about pole attachments bear on the Commission’s underlying decision to classify broadband as a Title I information service?
      • Lifeline Program.  Third, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect the Lifeline program.  In particular, we seek to refresh the record on the Commission’s authority to direct Lifeline support to eligible telecommunications carriers (ETCs) providing broadband service to qualifying low-income consumers.  In the 2017 Lifeline NPRM, the Commission proposed that it “has authority under Section 254(e) of the Act to provide Lifeline support to ETCs that provide broadband service over facilities-based broadband-capable networks that support voice service,” and that “[t]his legal authority does not depend on the regulatory classification of broadband Internet access service and, thus, ensures the Lifeline program has a role in closing the digital divide regardless of the regulatory classification of broadband service.”  How, if at all, does the Mozilla decision bear on that proposal, and should the Commission proceed to adopt it? 
  • The Federal Trade Commission (FTC) reached a settlement with a photo app company that allegedly did not tell users their photos would be subject to the company’s facial recognition technology. The FTC deemed this a deceptive business practice in violation of Section 5 of the FTC Act and negotiated a settlement the Commissioners approved in a 5-0 vote. The consent order includes interesting, perhaps even new language, requiring the company “to delete models and algorithms it developed by using the photos and videos uploaded by its users” according to the FTC’s press release.
    • In the complaint, the FTC asserted:
      • Since 2015, Everalbum has provided Ever, a photo storage and organization application, to consumers.
      • In February 2017, Everalbum launched its “Friends” feature, which operates on both the iOS and Android versions of the Ever app. The Friends feature uses face recognition to group users’ photos by faces of the people who appear in the photos. The user can choose to apply “tags” to identify by name (e.g., “Jane”) or alias (e.g., “Mom”) the individuals who appear in their photos. These tags are not available to other Ever users. When Everalbum launched the Friends feature, it enabled face recognition by default for all users of the Ever mobile app. At that time, Everalbum did not provide users of the Ever mobile app an option to turn off or disable the feature.
      • However, prior to April 2019, Ever mobile app users who were located anywhere other than Texas, Illinois, Washington, and the European Union did not need to, and indeed could not, take any affirmative action to “let[ Everalbum] know” that it should apply face recognition to the users’ photos. In fact, for those users, face recognition was enabled by default and the users lacked the ability to disable it. Thus, the article was misleading for Ever mobile app users located outside of Texas, Illinois, Washington, and the European Union.
      • Between September 2017 and August 2019, Everalbum combined millions of facial images that it extracted from Ever users’ photos with facial images that Everalbum obtained from publicly available datasets in order to create four new datasets to be used in the development of its face recognition technology. In each instance, Everalbum used computer scripts to identify and compile from Ever users’ photos images of faces that met certain criteria (i.e., not associated with a deactivated Ever account, not blurry, not too small, not a duplicate of another image, associated with a specified minimum number of images of the same tagged identity, and, in three of the four instances, not identified by Everalbum’s machines as being an image of someone under the age of thirteen).
    • The FTC summarized its settlement:
      • The proposed settlement requires Everalbum to delete:
        • the photos and videos of Ever app users who deactivated their accounts;
        • all face embeddings—data reflecting facial features that can be used for facial recognition purposes—the company derived from the photos of Ever users who did not give their express consent to their use; and
        • any facial recognition models or algorithms developed with Ever users’ photos or videos.
      • In addition, the proposed settlement prohibits Everalbum from misrepresenting how it collects, uses, discloses, maintains, or deletes personal information, including face embeddings created with the use of facial recognition technology, as well as the extent to which it protects the privacy and security of personal information it collects. Under the proposed settlement, if the company markets software to consumers for personal use, it must obtain a user’s express consent before using biometric information it collected from the user through that software to create face embeddings or develop facial recognition technology.
    • FTC Commissioner Rohit Chopra issued a statement, explaining his view on facial recognition technology and the settlement:
      • As outlined in the complaint, Everalbum made promises that users could choose not to have facial recognition technology applied to their images, and that users could delete the images and their account. In addition to those promises, Everalbum had clear evidence that many of the photo app’s users did not want to be roped into facial recognition. The company broke its promises, which constitutes illegal deception according to the FTC’s complaint. This matter and the FTC’s proposed resolution are noteworthy for several reasons.
      • First, the FTC’s proposed order requires Everalbum to forfeit the fruits of its deception. Specifically, the company must delete the facial recognition technologies enhanced by any improperly obtained photos. Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data. This is an important course correction.
      • Second, the settlement does not require the defendant to pay any penalty. This is unfortunate. To avoid this in the future, the FTC needs to take further steps to trigger penalties, damages, and other relief for facial recognition and data protection abuses. Commissioners have voted to enter into scores of settlements that address deceptive practices regarding the collection, use, and sharing of personal data. There does not appear to be any meaningful dispute that these practices are illegal. However, since Commissioners have not restated this precedent into a rule under Section 18 of the FTC Act, we are unable to seek penalties and other relief for even the most egregious offenses when we first discover them.
      • Finally, the Everalbum matter makes it clear why it is important to maintain states’ authority to protect personal data. Because the people of Illinois, Washington, and Texas passed laws related to facial recognition and biometric identifiers, Everalbum took greater care when it came to these individuals in these states. The company’s deception targeted Americans who live in states with no specific state law protections.
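The “face embeddings” at the center of the settlement are numeric vectors that summarize facial features; a recognition system compares two embeddings with a similarity measure rather than comparing raw photos. The sketch below illustrates only the comparison step; the four-dimensional vectors and the 0.95 threshold are hypothetical (real systems use vectors of 128 or more dimensions produced by a neural network, with empirically tuned thresholds):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, for illustration only.
known_face = [0.12, 0.87, 0.33, 0.51]   # embedding stored for a tagged user
candidate = [0.10, 0.85, 0.30, 0.55]    # embedding computed from a new photo

MATCH_THRESHOLD = 0.95  # illustrative cutoff, not taken from any real system

def is_same_person(emb1, emb2, threshold=MATCH_THRESHOLD):
    """Declare a match when the embeddings are sufficiently similar."""
    return cosine_similarity(emb1, emb2) >= threshold
```

Deleting the embeddings and the models that produced them, as the consent order requires, removes both the stored vectors and the network that maps photos to vectors.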
  • The Trump Administration issued the “National Maritime Cybersecurity Plan” that “sets forth how the United States government will defend the American economy through enhanced cybersecurity coordination, policies and practices, aimed at mitigating risks to the maritime sub-sector, promoting prosperity through information and intelligence sharing, and preserving and increasing the nation’s cyber workforce” according to the National Security Advisor Robert O’Brien. It will be up to the Biden Administration to implement, revise, or discard this strategy, but strategy documents such as this that contain anodyne recommendations tend to stay in place for the short-term, at least. It bears note that the uneven margins to the columns in the document suggest a rush to issue this document before the end of the Trump Administration. Nevertheless, O’Brien added:
    • President [Donald] Trump designated the cybersecurity of the Maritime Transportation System (MTS) as a top priority for national defense, homeland security, and economic competitiveness in the 2017 National Security Strategy. The MTS contributes to one quarter of all United States gross domestic product, or approximately $5.4 trillion. MTS operators are increasingly reliant on information technology (IT) and operational technology (OT) to maximize the reliability and efficiency of maritime commerce. This plan articulates how the United States government can buy down the potential catastrophic risks to our national security and economic prosperity created by technology innovations to strengthen maritime commerce efficiency and reliability.
    • The strategy lists a number of priority actions for the executive branch, including:
      • The United States will deconflict government roles and responsibilities.
      • The United States will develop risk modeling to inform maritime cybersecurity standards and best practices.
      • The United States will strengthen cybersecurity requirements in port services contracts and leasing.
      • The United States will develop procedures to identify, prioritize, mitigate, and investigate cybersecurity risks in critical ship and port systems.
      • Exchange United States government information with the maritime industry.
      • Share cybersecurity intelligence with appropriate non-government entities.
      • Prioritize maritime cybersecurity intelligence collection.
  • The National Security Agency’s (NSA) Cybersecurity Directorate has issued its first annual review, the “2020 NSA Cybersecurity Year in Review,” which encapsulates the first year of operation for the newly created part of the NSA.
    • Highlights include:
      • In 2020, NSA focused on modernizing encryption across the Department of Defense (DOD). It began with a push to eliminate cryptography that is at risk from attack due to adversarial computational advances. This applied to several systems commonly used by the Armed Services today to provide command and control, critical communications, and battlefield awareness. It also applied to operational practices concerning the handling of cryptographic keys and the implementation of modern suites of cryptography in network communications devices.
      • 2020 was notable for the number of Cybersecurity Advisories (CSAs) and other products NSA cybersecurity produced and released. These products are intended to alert network owners, specifically National Security System (NSS), Department of Defense (DOD), and Defense Industrial Base (DIB), of cyber threats and enable defenders to take immediate action to secure their systems.
      • 2020 was notable not just because it was the NSA Cybersecurity Directorate’s first year or because of COVID-19, but also because it was an election year in the United States. Drawing on lessons learned from the 2016 presidential election and the 2018 mid-term elections, NSA was fully engaged in whole-of-government efforts to protect the 2020 election from foreign interference and influence. Cybersecurity was a foundational component of NSA’s overall election defense effort.
      • This past year, NSA cybersecurity prioritized public-private collaboration, invested in cybersecurity research, and made a concerted effort to build trusted partnerships with the cybersecurity community.
      • The NSA touted the following achievements:
        • In November 2019, NSA began laying the groundwork to conduct a pilot with the Defense Cyber Crime Center and five DIB companies to monitor and block malicious network traffic based on continuous automated analysis of the domain names these companies’ networks were contacting. The pilot’s operational phase commenced in March 2020. Over six months, the Protective Domain Name Service (PDNS) examined more than 4 billion DNS queries to and from these companies. The PDNS provider identified callouts to 3,519 malicious domains and blocked upwards of 13 million connections to those domains. The pilot proved the value of DoD expanding the PDNS service to all DIB entities at scale.
        • How cyber secure is cyber “ready” for combat? In response to legislation that recognized the imperative of protecting key weapons and space systems from adversary cyber intrusions, NSA partnered closely with the DoD CIO, Joint Staff, Undersecretary of Defense for Acquisition & Sustainment, and the Military Services to structure, design, and execute a new cybersecurity program, focused on the most important weapons and space systems, known as the Strategic Cybersecurity Program (SCP), with the mindset of “stop assessing and start addressing.” The program initially identified 12 key weapons and space systems that must be evaluated for cybersecurity vulnerabilities that need to be mitigated. This is either due to the existence of intelligence indicating they are being targeted by cyber adversaries or because the systems are particularly important to warfighting. These systems cover all warfighting domains (land, sea, air, cyber, and space). Under the auspices of the SCP, NSA and military service partners will conduct cybersecurity evaluations, and, most importantly, maintain cyber risk scoreboards and mitigation plans to ensure accountability in reducing cyber risk to acceptable levels.
      • The NSA sees the following issues on the horizon:
        • In October 2020, NSA launched an expansive effort across the Executive Branch to understand how we can better inform, drive, and understand the activities of NSS owners to prevent, or respond to, critical cybersecurity events, and cultivate an operationally-aligned community resilient against the most advanced threats. These efforts across the community will come to fruition during the first quarter of 2021 and are expected to unify disparate elements across USG for stronger cybersecurity at scale.
        • NSA Cybersecurity is also focused on combating ransomware, a significant threat to NSS and critical infrastructure. Ransomware activity has become more destructive and impactful in nature and scope. Malicious actors target critical data and propagate ransomware across entire networks, alarmingly focusing recent attacks against U.S. hospitals. In 2020, NSA formed multiple working groups with U.S. Government agencies and other partners to identify ways to make ransomware operations more difficult for our adversaries, less scalable, and less lucrative. While the ransomware threat remains significant, NSA will continue to develop innovative ways to keep the activity at bay.
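The Protective DNS pilot described in the year in review filters each outbound DNS query against a list of known-malicious domains before resolution. The sketch below shows only that filtering idea; the domain names and counters are hypothetical, and a real PDNS service sits in the resolution path and draws its blocklist from continuously updated threat intelligence:

```python
# Hypothetical blocklist entries; a real service would load thousands of
# domains from threat intelligence feeds.
MALICIOUS_DOMAINS = {"evil-c2.example", "phish.example"}

def handle_query(domain, stats):
    """Return True if the query may be resolved, False if it is blocked."""
    stats["examined"] += 1
    # Check the exact name and every parent domain against the blocklist,
    # so a callout to a subdomain of a bad domain is also caught.
    labels = domain.split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in MALICIOUS_DOMAINS:
            stats["blocked"] += 1
            return False
    return True

stats = {"examined": 0, "blocked": 0}
allowed = handle_query("mail.corp.example", stats)      # benign lookup
blocked = handle_query("update.evil-c2.example", stats)  # callout to a blocked domain
```

The pilot’s headline numbers (queries examined, malicious domains identified, connections blocked) correspond to the counters such a service accumulates.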
  • This week, Parler sued Amazon after Amazon rescinded its web hosting services to the social media platform, which bills itself as the conservative, unbiased alternative to Twitter. Amazon has responded with an extensive list of the inflammatory, inciting material upon which it based its decision.
    • In its 11 January complaint, Parler asked a federal court “for injunctive relief, including a temporary restraining order and preliminary injunctive relief, and damages,” arguing mainly that “AWS’s decision to effectively terminate Parler’s account is apparently motivated by political animus…[and] is also apparently designed to reduce competition in the microblogging services market to the benefit of Twitter” in violation of federal antitrust law.
    • In its 12 January response, Amazon disagreed:
      • This case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (“AWS”) content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens. There is no legal basis in AWS’s customer agreements or otherwise to compel AWS to host content of this nature. AWS notified Parler repeatedly that its content violated the parties’ agreement, requested removal, and reviewed Parler’s plan to address the problem, only to determine that Parler was both unwilling and unable to do so. AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.
    • Amazon offered a sampling of the content on Parler that caused AWS to pull the plug on the platform:
      • “Fry’em up. The whole fkn crew. #pelosi #aoc #thesquad #soros #gates #chuckschumer #hrc #obama #adamschiff #blm #antifa we are coming for you and you will know it.”
      • “#JackDorsey … you will die a bloody death alongside Mark Suckerturd [Zuckerberg]…. It has been decided and plans are being put in place. Remember the photographs inside your home while you slept? Yes, that close. You will die a sudden death!”
      • “We are going to fight in a civil War on Jan.20th, Form MILITIAS now and acquire targets.”
      • “On January 20th we need to start systematicly [sic] assassinating [sic] #liberal leaders, liberal activists, #blm leaders and supporters, members of the #nba #nfl #mlb #nhl #mainstreammedia anchors and correspondents and #antifa. I already have a news worthy event planned.”
      • “Shoot the police that protect these shitbag senators right in the head then make the senator grovel a bit before capping they ass.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda, explaining that “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 15 January, the Senate Intelligence Committee will hold a hearing on the nomination of Avril Haines to be the Director of National Intelligence.
  • The Senate Homeland Security and Governmental Affairs Committee will hold a hearing on the nomination of Alejandro N. Mayorkas to be Secretary of Homeland Security on 19 January.
  • On 19 January, the Senate Armed Services Committee will hold a hearing on the nomination of former General Lloyd Austin III to be Secretary of Defense.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.