Further Reading, Other Developments, and Coming Events (20 and 21 January 2021)

Further Reading

  • “Amazon’s Ring Neighbors app exposed users’ precise locations and home addresses” By Zack Whittaker — TechCrunch. Once again, Amazon’s home security platform has suffered problems, with users’ data exposed or inadequately protected.
  • “Harassment of Chinese dissidents was warning signal on disinformation” By Shawna Chen and Bethany Allen-Ebrahimian — Axios. An example of how malicious online activities can spill into the real world: a number of Chinese dissidents were set upon by protestors.
  • “How Social Media’s Obsession with Scale Supercharged Disinformation” By Joan Donovan — Harvard Business Review. Companies like Facebook and Twitter emphasized scale over safety in trying to grow as quickly as possible. This led to a proliferation of fake accounts and proved fertile ground for the seeds of misinformation.
  • “The Moderation War Is Coming to Spotify, Substack, and Clubhouse” By Alex Kantrowitz — OneZero. The same issues with objectionable and abusive content plaguing Twitter, Facebook, YouTube and others will almost certainly become an issue for the newer platforms, and in fact already are.
  • “Mexican president mounts campaign against social media bans” By Mark Stevenson — The Associated Press. Mexico’s leftist president, Andrés Manuel López Obrador, is vowing to lead international efforts to stop social media companies from censoring what he considers free speech. Whether this materializes into something substantial is not clear.
  • “As Trump Clashes With Big Tech, China’s Censored Internet Takes His Side” By Li Yuan — The New York Times. The government in Beijing is framing social media platforms’ ban of former President Donald Trump after the attempted insurrection as proof that there is no untrammeled freedom of speech. This position helps bolster the oppressive policing of online content the People’s Republic of China (PRC) wages against its citizens. Quite separately, many Chinese people (or what appear to be actual people) are questioning what is often deemed the censoring of Trump in the United States (U.S.), a nation ostensibly committed to free speech. There is also widespread misunderstanding of social media platforms’ First Amendment right not to host content with which they disagree, and of their power to make such determinations without fear that the U.S. government will punish them, as is often the case in the PRC.
  • “Trump admin slams China’s Huawei, halting shipments from Intel, others – sources” By Karen Freifeld and Alexandra Alper — Reuters. On its way out the proverbial door, the Trump Administration delivered parting shots to Huawei and the People’s Republic of China (PRC) by revoking one license and denying other applications to sell the PRC tech giant semiconductors. The companies affected, including Intel, could appeal. Additionally, an estimated $400 million worth of applications for similar licenses are pending at the Department of Commerce and are now the domain of the new regime in Washington. It is too early to discern whether the Biden Administration will reverse, maintain, or modify these actions and broader Trump Administration policy towards the PRC.
  • “Behind a Secret Deal Between Google and Facebook” By Daisuke Wakabayashi and Tiffany Hsu — The New York Times. The newspaper got its hands on an unredacted copy of the antitrust suit Texas Attorney General Ken Paxton and other attorneys general filed against Google, and it has details on the deal Facebook and Google allegedly struck to divide the online advertising world. Not only did Facebook ditch an effort launched by publishers to defeat Google’s overwhelming advantages in online advertising bidding, it joined Google’s rival effort with a guarantee that it would win a specified number of bids and more time to bid on ads. Google and Facebook naturally deny any wrongdoing.
  • “Biden and Trump Voters Were Exposed to Radically Different Coverage of the Capitol Riot on Facebook” By Colin Lecher and Jon Keegan — The Markup. Using a browser tool that the organization pays Facebook users to install, The Markup can track the type of material those users see in their feeds. Facebook’s algorithm fed people material about the 6 January 2021 attempted insurrection based on their political views. Many have pointed out that this very dynamic creates filter bubbles that poison democracy and public discourse.
  • “Banning Trump won’t fix social media: 10 ideas to rebuild our broken internet – by experts” By Julia Carrie Wong — The Guardian. There are some fascinating proposals in this piece that could help address the problems of social media.
  • “Misinformation dropped dramatically the week after Twitter banned Trump and some allies” By Elizabeth Dwoskin and Craig Timberg — The Washington Post. Research showed that lies, misinformation, and disinformation about election fraud dropped by three-quarters after former President Donald Trump was banned from Twitter and other platforms. Other research showed that a small group of conservatives were responsible for up to 20% of misinformation on this and other conspiracies.
  • “This Was WhatsApp’s Plan All Along” By Shoshana Wodinsky — Gizmodo. This piece does a great job of breaking down into plain English the proposed changes to terms of service on WhatsApp that so enraged users that competitors Signal and Telegram have seen record-breaking downloads. Basically, it is all about reaping advertising dollars for Facebook through businesses and third-party partners using user data from business-related communications. Incidentally, WhatsApp has delayed changes until March because of the pushback.
  • “Brussels eclipsed as EU countries roll out their own tech rules” By Laura Kayali and Mark Scott — Politico EU. The European Union (EU) already had a hard enough task in trying to reach final language on a Digital Services Act and Digital Markets Act without nations like France, Germany, and Poland picking and choosing text from the draft bills and enacting it into law. Brussels is not happy with this trend.

Other Developments

  • Federal Trade Commission (FTC) Chair Joseph J. Simons announced his resignation from the FTC effective on 29 January 2021 in keeping with tradition and past practice. This resignation clears the way for President Joe Biden to name the chair of the FTC, and along with FTC Commissioner Rohit Chopra’s nomination to head the Consumer Financial Protection Bureau (CFPB), the incoming President will get to nominate two Democratic FTC Commissioners, tipping the political balance of the FTC and likely ushering in a period of more regulation of the technology sector.
    • Simons also announced the resignation of senior staff: General Counsel Alden F. Abbott; Bureau of Competition Director Ian Conner; Bureau of Competition Deputy Directors Gail Levine and Daniel Francis; Bureau of Consumer Protection Director Andrew Smith; Bureau of Economics Director Andrew Sweeting; Office of Public Affairs Director Cathy MacFarlane; and Office of Policy Planning Director Bilal Sayyed.
  • In a speech last week, before he was sworn in, President Joe Biden announced his $1.9 trillion American Rescue Plan, and according to a summary, Biden will ask Congress to provide $10 billion for a handful of government-facing programs to improve technology (the line items below sum to roughly that figure; see the arithmetic after the list). Notably, Biden “is calling on Congress to launch the most ambitious effort ever to modernize and secure federal IT and networks.” Biden is proposing to dramatically increase funding for a fund that would allow agencies to borrow and then repay money to update their technology. Moreover, Biden is looking to push more money to a program that aids officials at agencies who oversee technology development and procurement.
    • Biden stated “[t]o remediate the SolarWinds breach and boost U.S. defenses, including of the COVID-19 vaccine process, President-elect Biden is calling on Congress to:
      • Expand and improve the Technology Modernization Fund. A $9 billion investment will help the U.S. launch major new IT and cybersecurity shared services at the Cybersecurity and Infrastructure Security Agency (CISA) and the General Services Administration and complete modernization projects at federal agencies. In addition, the president-elect is calling on Congress to change the fund’s reimbursement structure in order to fund more innovative and impactful projects.
      • Surge cybersecurity technology and engineering expert hiring. Providing the Information Technology Oversight and Reform fund with $200 million will allow for the rapid hiring of hundreds of experts to support the federal Chief Information Security Officer and U.S. Digital Service.
      • Build shared, secure services to drive transformational projects. Investing $300 million in no-year funding for Technology Transformation Services in the General Services Administration will drive secure IT projects forward without the need of reimbursement from agencies.
      • Improving security monitoring and incident response activities. An additional $690M for CISA will bolster cybersecurity across federal civilian networks, and support the piloting of new shared security and cloud computing services.
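    • As a quick check of the topline figure, the four line items quoted above sum to slightly more than the $10 billion cited in the plan summary (all figures are as quoted; nothing outside the summary is assumed):

```latex
\$9{,}000\text{M (TMF)} + \$200\text{M (ITOR fund)} + \$300\text{M (TTS)} + \$690\text{M (CISA)} = \$10{,}190\text{M} \approx \$10.2\text{ billion}
```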
  • The United States (U.S.) Department of Commerce issued an interim final rule pursuant to an executive order (EO) issued by former President Donald Trump to secure the U.S. information and communications technology and services supply chain. This rule will undoubtedly be reviewed by the Biden Administration and may be withdrawn or modified depending on the fate of the EO on which the rule relies.
    • In the interim final rule, Commerce explained:
      • These regulations create the processes and procedures that the Secretary of Commerce will use to identify, assess, and address certain transactions, including classes of transactions, between U.S. persons and foreign persons that involve information and communications technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary; and pose an undue or unacceptable risk. While this interim final rule will become effective on March 22, 2021, the Department of Commerce continues to welcome public input and is thus seeking additional public comment. Once any additional comments have been evaluated, the Department is committed to issuing a final rule.
      • On November 27, 2019, the Department of Commerce (Department) published a proposed rule to implement the terms of the Executive Order. (84 FR 65316). The proposed rule set forth processes for (1) how the Secretary would evaluate and assess transactions involving ICTS to determine whether they pose an undue risk of sabotage to or subversion of the ICTS supply chain, or an unacceptable risk to the national security of the United States or the security and safety of U.S. persons; (2) how the Secretary would notify parties to transactions under review of the Secretary’s decision regarding the ICTS Transaction, including whether the Secretary would prohibit or mitigate the transaction; and (3) how parties to transactions reviewed by the Secretary could comment on the Secretary’s preliminary decisions. The proposed rule also provided that the Secretary could act without complying with the proposed procedures where required by national security. Finally, the Secretary would establish penalties for violations of mitigation agreements, the regulations, or the Executive Order.
      • In addition to seeking general public comment, the Department requested comments from the public on five specific questions: (1) Whether the Secretary should consider categorical exclusions or whether there are classes of persons whose use of ICTS cannot violate the Executive Order; (2) whether there are categories of uses or of risks that are always capable of being reliably and adequately mitigated; (3) how the Secretary should monitor and enforce any mitigation agreements applied to a transaction; (4) how the terms, “transaction,” “dealing in,” and “use of” should be clarified in the rule; and (5) whether the Department should add record-keeping requirements for information related to transactions.
      • The list of “foreign adversaries” consists of the following foreign governments and non-government persons: The People’s Republic of China, including the Hong Kong Special Administrative Region (China); the Republic of Cuba (Cuba); the Islamic Republic of Iran (Iran); the Democratic People’s Republic of Korea (North Korea); the Russian Federation (Russia); and Venezuelan politician Nicolás Maduro (Maduro Regime).
  • The Federal Trade Commission (FTC) adjusted its penalty amounts for inflation, including a boost to the per-violation penalty that virtually all the privacy bills introduced in the last Congress would allow the agency to wield against first-time violators. The penalty for certain unfair and deceptive acts or practices was increased from $43,280 to $43,792, as shown in the arithmetic below.
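    • For reference, the new maximum is consistent with applying the annual inflation adjustment multiplier to the prior figure and rounding to the nearest dollar. The multiplier used below (approximately 1.01182 for 2021, the standard cost-of-living adjustment) is an assumption for illustration rather than a figure taken from the FTC’s announcement:

```latex
\$43{,}280 \times 1.01182 \approx \$43{,}791.57 \;\Longrightarrow\; \$43{,}792 \text{ (rounded to the nearest dollar)}
```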
  • The United States (U.S.) Department of State stood up its new Bureau of Cyberspace Security and Emerging Technologies (CSET), as it has long planned. At the beginning of the Trump Administration, the Department of State dismantled the Cyber Coordinator Office and gave its cybersecurity portfolio to the Bureau of Economic and Business Affairs, which displeased Congressional stakeholders. In 2019, the department notified Congress of its plan to establish CSET. The department asserted:
    • The need to reorganize and resource America’s cyberspace and emerging technology security diplomacy through the creation of CSET is critical, as the challenges to U.S. national security presented by China, Russia, Iran, North Korea, and other cyber and emerging technology competitors and adversaries have only increased since the Department notified Congress in June 2019 of its intent to create CSET.
    • The CSET bureau will lead U.S. government diplomatic efforts on a wide range of international cyberspace security and emerging technology policy issues that affect U.S. foreign policy and national security, including securing cyberspace and critical technologies, reducing the likelihood of cyber conflict, and prevailing in strategic cyber competition.  The Secretary’s decision to establish CSET will permit the Department to posture itself appropriately and engage as effectively as possible with partners and allies on these pressing national security concerns.
    • The Congressional Members of the Cyberspace Solarium Commission made clear their disapproval of the decision. Senators Angus King (I-ME) and Ben Sasse (R-NE) and Representatives Mike Gallagher (R-WI) and Jim Langevin (D-RI) said:
      • In our report, we emphasize the need for a greater emphasis on international cyber policy at State. However, unlike the bipartisan Cyber Diplomacy Act, the State Department’s proposed Bureau will reinforce existing silos and […] hinder the development of a holistic strategy to promote cyberspace stability on the international stage. We urge President-elect Biden to pause this reorganization when he takes office in two weeks and work with Congress to enact meaningful reform to protect our country in cyberspace.
  • The Australian Cyber Security Centre (ACSC) released the Risk Identification Guidance, “developed to assist organisations in identifying risks associated with their use of suppliers, manufacturers, distributors and retailers (i.e. businesses that constitute their cyber supply chain),” and the Risk Management Guidance, because “[c]yber supply chain risk management can be achieved by identifying the cyber supply chain, understanding cyber supply chain risk, setting cyber security expectations, auditing for compliance, and monitoring and improving cyber supply chain security practices.”
  • The United Kingdom’s Surveillance Camera Commissioner (SCC) issued “best practice guidance, ‘Facing the Camera’, to all police forces in England and Wales.” The SCC explained that “[t]he provisions of this document only apply to the use of facial recognition technology and the inherent processing of images by the police where such use is integral to a surveillance camera system being operated in ‘live time’ or ‘near real time’ operational scenarios.” Last summer, a British appeals court overturned a decision that had found a police force’s use of facial recognition technology in a pilot program using live footage to be legal. The appeals court found the South Wales Police Force’s use of this technology violated “the right to respect for private life under Article 8 of the European Convention on Human Rights, data protection legislation, and the Public Sector Equality Duty (“PSED”) under section 149 of the Equality Act 2010.” The SCC stated:
    • The SCC considers surveillance to be an intrusive investigatory power where it is conducted by the police which impacts upon those fundamental rights and freedoms of people, as set out by the European Convention of Human Rights (ECHR) and the Human Rights Act 1998. In the context of surveillance camera systems which make use of facial recognition technology, the extent of state intrusion in such matters is significantly increased by the capabilities of algorithms which are in essence, integral to the surveillance conduct seeking to harvest information, private information, metadata, data, personal data, intelligence and evidence. Each of the aforementioned are bound by laws and rules which ought to be separately and jointly considered and applied in a manner which is demonstrably lawful and ethical and engenders public trust and confidence.
    • Whenever the police seek to use technology in pursuit of a legitimate aim, the key question arises as to whether the degree of intrusion which is caused to the fundamental freedoms of citizens by the police surveillance conduct using surveillance algorithms (biometric or otherwise) is necessary in a democratic society when considered alongside the legality and proportionality of their endeavours and intent. The type of equipment/technology/modality which they choose to use to that end (e.g. LFR, ANPR, thermal imaging, gait analysis, movement sensors etc), the manner in which such technological means are deployed, (such as using static cameras at various locations, used with body worn cameras or other mobile means), and whether such technology is used overtly alongside or networked with other surveillance technologies, are all factors which may significantly influence the depth of intrusion caused by police conduct upon citizen’s rights.
  • The Senate confirmed the nomination of Avril Haines to be the new Director of National Intelligence by an 89-10 vote after Senator Tom Cotton (R-AR) removed his hold on her nomination. However, Senator Josh Hawley (R-MO) placed a hold on the nomination of Alejandro Mayorkas to be the next Secretary of Homeland Security and explained his action this way:
    • On Day 1 of his administration, President-elect Biden has said he plans to unveil an amnesty plan for 11 million immigrants in this nation illegally. This comes at a time when millions of American citizens remain out of work and a new migrant caravan has been attempting to reach the United States. Mr. Mayorkas has not adequately explained how he will enforce federal law and secure the southern border given President-elect Biden’s promise to roll back major enforcement and security measures. Just today, he declined to say he would enforce the laws Congress has already passed to secure the border wall system. Given this, I cannot consent to skip the standard vetting process and fast-track this nomination when so many questions remain unanswered.
  • Former Trump White House Cyber Coordinator Rob Joyce will replace Anne Neuberger as the National Security Agency’s (NSA) Director of Cybersecurity; Neuberger has been named the Biden White House’s Deputy National Security Advisor for Cyber and Emerging Technology. Neuberger’s portfolio at the NSA included “lead[ing] NSA’s cybersecurity mission, including emerging technology areas like quantum-resistant cryptography,” and Joyce will presumably take on the same responsibilities. Joyce was pushed out when former National Security Advisor John Bolton restructured the NSC in 2018, a shakeup that also forced out Joyce’s boss, former Homeland Security Advisor Tom Bossert. At the National Security Council, Neuberger will work to coordinate cybersecurity and emerging technology policy across agencies and funnel policy options up to the full NSC and ultimately the President, work that will presumably include coordinating with Joyce.
  • The Supreme Court of the United States (SCOTUS) heard oral arguments on whether the Federal Trade Commission (FTC) Act gives the agency the power to seek monetary damages and restitution alongside permanent injunctions under Section 13(b). In AMG Capital Management, LLC v. FTC, the parties opposing the FTC argue the plain language of the statute does not allow restitution and monetary damages to be sought under this specific section of the FTC Act, while the agency argues long-accepted practice and Congressional intent do, in fact, allow this relief when the FTC seeks to punish violators of Section 5. The FTC is pursuing a separate track to get a fix from Congress, which could rewrite the FTC Act to make clear this sort of relief is legal. However, some stakeholders in the debate over privacy legislation may be using the case as leverage.
    • In October 2020, the FTC wrote the House and Senate committees with jurisdiction over the agency, asking for language to resolve the litigation over its power to seek and obtain restitution for victims of those who have violated Section 5 of the FTC Act and disgorgement of ill-gotten gains. The FTC is also asking that Congress clarify that the agency may act against violators even if their conduct has stopped, as it has done for more than four decades. Two federal appeals courts have ruled in ways that have limited the FTC’s long-used powers, and the Supreme Court of the United States is set to rule on these issues later this year. The FTC claims, however, that defendants are playing for time in the hopes that the FTC’s authority to seek and receive monetary penalties will ultimately be limited by the nation’s highest court. Judging by language tucked into a privacy bill introduced by the former chair of one of the committees, Congress may be willing to act soon.
    • The FTC asked the House Energy and Commerce and Senate Commerce, Science, and Transportation Committees “to take quick action to amend Section 13(b) [of the FTC Act i.e. 15 U.S.C. § 53(b)] to make clear that the Commission can bring actions in federal court under Section 13(b) even if conduct is no longer ongoing or impending when the suit is filed and can obtain monetary relief, including restitution and disgorgement, if successful.” The agency asserted “[w]ithout congressional action, the Commission’s ability to use Section 13(b) to provide refunds to consumer victims and to enjoin illegal activity is severely threatened.” All five FTC Commissioners signed the letter.
    • The FTC explained that adverse rulings by two federal appeals courts are constraining the agency from seeking relief for victims and punishment for violators of the FTC Act in the federal courts within those two circuits, while elsewhere defendants are either asking courts for similar rulings or using delaying tactics in the hope the Supreme Court upholds the two federal appeals courts:
      • …[C]ourts of appeals in the Third and Seventh Circuits have recently ruled that the agency cannot obtain any monetary relief under Section 13(b). Although review in the Supreme Court is pending, these lower court decisions are already inhibiting our ability to obtain monetary relief under 13(b). Not only do these decisions already prevent us from obtaining redress for consumers in the circuits where they issued, prospective defendants are routinely invoking them in refusing to settle cases with agreed-upon redress payments.
      • Moreover, defendants in our law enforcement actions pending in other circuits are seeking to expand the rulings to those circuits and taking steps to delay litigation in anticipation of a potential Supreme Court ruling that would allow them to escape liability for any monetary relief caused by their unlawful conduct. This is a significant impediment to the agency’s effectiveness, its ability to provide redress to consumer victims, and its ability to prevent entities who violate the law from profiting from their wrongdoing.
  • The United Kingdom’s Information Commissioner’s Office (ICO) issued guidance for British entities that may be affected by the massive SolarWinds hack that has compromised many key systems in the United States. The ICO advised the following (a minimal version-check sketch follows the quoted guidance):
    • Organisations should immediately check whether they are using a version of the software that has been compromised. These are versions 2019.4 HF 5, 2020.2 with no hotfix installed, and 2020.2 HF 1.
    • Organisations must also determine if the personal data they hold has been affected by the cyber-attack. If a reportable personal data breach is found, UK data controllers are required to inform the ICO within 72 hours of discovering the breach. Reports can be submitted online or organisations can call the ICO’s personal data breach helpline for advice on 0303 123 1113, option 2.
    • Organisations subject to the NIS Regulation will also need to determine if this incident has led to a ‘substantial impact on the provision’ of its digital services and report to the ICO.
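    • For organisations triaging the first step above, a minimal sketch of a version check might look like the following. The compromised version strings come from the ICO guidance quoted above; the inventory data and helper function are illustrative assumptions, since how a deployment reports its Orion version will vary by environment.

```python
# Minimal sketch: flag SolarWinds Orion versions the ICO lists as compromised.
# Assumes version strings are reported in a normalized form such as
# "2019.4 HF 5" or "2020.2"; how you pull them from your asset inventory
# (CMDB export, script output, etc.) is environment-specific.

COMPROMISED_VERSIONS = {
    "2019.4 HF 5",
    "2020.2",        # i.e. 2020.2 with no hotfix installed
    "2020.2 HF 1",
}

def is_compromised(version: str) -> bool:
    """Return True if the reported Orion version matches a compromised build."""
    return version.strip() in COMPROMISED_VERSIONS

# Hypothetical inventory export, used purely for illustration.
inventory = {
    "server-01": "2020.2 HF 1",
    "server-02": "2019.4 HF 6",
}

for host, version in inventory.items():
    if is_compromised(version):
        print(f"{host}: Orion {version} is on the compromised list -- "
              f"assess whether personal data was affected (72-hour ICO reporting window)")
    else:
        print(f"{host}: Orion {version} is not on the compromised list")
```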
  • Europol announced the takedown of “the world’s largest illegal marketplace on the dark web” in an operation coordinated by the following nations: “Germany, Australia, Denmark, Moldova, Ukraine, the United Kingdom (the National Crime Agency), and the USA (DEA, FBI, and IRS).” Europol added:
    • The Central Criminal Investigation Department in the German city of Oldenburg arrested an Australian citizen who is the alleged operator of DarkMarket near the German-Danish border over the weekend. The investigation, which was led by the cybercrime unit of the Koblenz Public Prosecutor’s Office, allowed officers to locate and close the marketplace, switch off the servers and seize the criminal infrastructure – more than 20 servers in Moldova and Ukraine supported by the German Federal Criminal Police office (BKA). The stored data will give investigators new leads to further investigate moderators, sellers, and buyers. 
  • The Enforcement Bureau (Bureau) of the Federal Communications Commission (FCC) issued an enforcement advisory intended to remind people that using amateur and personal radios to commit crimes is itself a criminal offense that could warrant prosecution. The notice was issued because the FCC claims to be aware of discussions about how these means of communication may be superior to social media, which has been cracking down on extremist material since the attempted insurrection at the United States Capitol on 6 January. The Bureau stated:
    • The Bureau has become aware of discussions on social media platforms suggesting that certain radio services regulated by the Commission may be an alternative to social media platforms for groups to communicate and coordinate future activities.  The Bureau recognizes that these services can be used for a wide range of permitted purposes, including speech that is protected under the First Amendment of the U.S. Constitution.  Amateur and Personal Radio Services, however, may not be used to commit or facilitate crimes. 
    • Specifically, the Bureau reminds amateur licensees that they are prohibited from transmitting “communications intended to facilitate a criminal act” or “messages encoded for the purpose of obscuring their meaning.” Likewise, individuals operating radios in the Personal Radio Services, a category that includes Citizens Band radios, Family Radio Service walkie-talkies, and General Mobile Radio Service, are prohibited from using those radios “in connection with any activity which is against Federal, State or local law.” Individuals using radios in the Amateur or Personal Radio Services in this manner may be subject to severe penalties, including significant fines, seizure of the offending equipment, and, in some cases, criminal prosecution.
  • The European Data Protection Board (EDPB) issued its “Strategy for 2021-2023” in order “[t]o be effective in confronting the main challenges ahead.” The EDPB cautioned:
    • This Strategy does not provide an exhaustive overview of the work of the EDPB in the years to come. Rather it sets out the four main pillars of our strategic objectives, as well as set of key actions to help achieve those objectives. The EDPB will implement this Strategy within its Work Program, and will report on the progress achieved in relation to each Pillar as part of its annual reports.
    • The EDPB listed and explained the four pillars of its strategy:
      • PILLAR 1: ADVANCING HARMONISATION AND FACILITATING COMPLIANCE. The EDPB will continue to strive for a maximum degree of consistency in the application of data protection rules and limit fragmentation among Member States. In addition to providing practical, easily understandable and accessible guidance, the EDPB will develop and promote tools that help to implement data protection into practice, taking into account practical experiences of different stakeholders on the ground.
      • PILLAR 2: SUPPORTING EFFECTIVE ENFORCEMENT AND EFFICIENT COOPERATION BETWEEN NATIONAL SUPERVISORY AUTHORITIES. The EDPB is fully committed to support cooperation between all national supervisory authorities that work together to enforce European data protection law. We will streamline internal processes, combine expertise and promote enhanced coordination. We intend not only to ensure a more efficient functioning of the cooperation and consistency mechanisms, but also to strive for the development of a genuine EU-wide enforcement culture among supervisory authorities.
      • PILLAR 3: A FUNDAMENTAL RIGHTS APPROACH TO NEW TECHNOLOGIES. The protection of personal data helps to ensure that technology, new business models and society develop in accordance with our values, such as human dignity, autonomy and liberty. The EDPB will continuously monitor new and emerging technologies and their potential impact on the fundamental rights and daily lives of individuals. Data protection should work for all people, particularly in the face of processing activities presenting the greatest risks to individuals’ rights and freedoms (e.g. to prevent discrimination). We will help to shape Europe’s digital future in line with our common values and rules. We will continue to work with other regulators and policymakers to promote regulatory coherence and enhanced protection for individuals.
      • PILLAR 4: THE GLOBAL DIMENSION. The EDPB is determined to set and promote high EU and global standards for international data transfers to third countries in the private and the public sector, including in the law enforcement sector. We will reinforce our engagement with the international community to promote EU data protection as a global model and to ensure effective protection of personal data beyond EU borders.
  • The United Kingdom’s (UK) Information Commissioner’s Office (ICO) revealed that all but one of the videoconferencing platforms that received a July 2020 open letter from it and other data protection authorities (DPA), urging them to “adopt principles to guide them in addressing some key privacy risks,” have responded. The ICO explained:
    • Microsoft, Cisco, Zoom and Google replied to the open letter. The joint signatories thank these companies for engaging on this important matter and for acknowledging and responding to the concerns raised. In their responses the companies highlighted various privacy and security best practices, measures, and tools that they advise are implemented or built-in to their video teleconferencing services.
    • The information provided by these companies is encouraging. It is a constructive foundation for further discussion on elements of the responses that the joint signatories feel would benefit from more clarity and additional supporting information.
    • The ICO stated:
      • The joint signatories have not received a response to the open letter from Houseparty. They strongly encourage Houseparty to engage with them and respond to the open letter to address the concerns raised.
  • The European Union Agency for Cybersecurity (ENISA) “launched a public consultation, which runs until 7 February 2021, on its draft of the candidate European Union Cybersecurity Certification Scheme on Cloud Services (EUCS)…[that] aims to further improve the Union’s internal market conditions for cloud services by enhancing and streamlining the services’ cybersecurity guarantees.” ENISA stated:
    • There are challenges to the certification of cloud services, such as a diverse set of market players, complex systems and a constantly evolving landscape of cloud services, as well as the existence of different schemes in Member States. The draft EUCS candidate scheme tackles these challenges by calling for cybersecurity best practices across three levels of assurance and by allowing for a transition from current national schemes in the EU. The draft EUCS candidate scheme is a horizontal and technological scheme that intends to provide cybersecurity assurance throughout the cloud supply chain, and form a sound basis for sectoral schemes.
    • More specifically, the draft EUCS candidate scheme:
      • Is a voluntary scheme;
      • The scheme’s certificates will be applicable across the EU Member States;
      • Is applicable for all kinds of cloud services – from infrastructure to applications;
      • Boosts trust in cloud services by defining a reference set of security requirements;
      • Covers three assurance levels: ‘Basic’, ‘Substantial’ and ‘High’;
      • Proposes a new approach inspired by existing national schemes and international standards;
      • Defines a transition path from national schemes in the EU;
      • Grants a three-year certification that can be renewed;
      • Includes transparency requirements such as the location of data processing and storage.

Coming Events

  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on the nomination of Gina Raimondo to be the Secretary of Commerce on 26 January.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (19 January 2021)

Further Reading

  • “Hong Kong telecoms provider blocks website for first time, citing security law” — Reuters; “A Hong Kong Website Gets Blocked, Raising Censorship Fears” By Paul Mozur and Aaron Krolik — The New York Times. The Hong Kong Broadband Network (HKBN) blocked access to a website about the 2019 protests against the People’s Republic of China (PRC) (called HKChronicles) under a recently enacted security law critics had warned would lead to exactly this sort of outcome. Allegedly, the Hong Kong police had invoked the National Security Law for the first time, and other telecommunications companies have followed suit.
  • “Biden to counter China tech by urging investment in US: adviser” By Yifan Yu — Nikkei Asia. The incoming head of President-elect Joe Biden’s National Economic Council said at a public event that the Biden Administration would focus less on tariffs and other similar instruments to counter the People’s Republic of China (PRC). Instead, the incoming President would try to foster investment in United States companies and technologies to fend off the PRC’s growing strength in a number of crucial fields. A Biden Administration would also work more with traditional U.S. allies to contest policies from Beijing.
  • “Revealed: walkie-talkie app Zello hosted far-right groups who stormed Capitol” By Micah Loewinger and Hampton Stall — The Guardian. Some of the rioters and insurrectionists who attacked the United States Capitol on 6 January were using another, lesser-known communications app, Zello, to coordinate their actions. The app has since taken down a number of right-wing and extremist groups that had flourished for months, if not years, on the platform. It remains to be seen how smaller platforms will be scrutinized under a Biden Presidency. Zello has reportedly been aware that these groups were using its platform and opted not to police their conduct.
  • “They Used to Post Selfies. Now They’re Trying to Reverse the Election.” By Stuart A. Thompson and Charlie Warzel — The New York Times. The three people profiled, each of whom amassed a considerable extremist following, seem to be part believer and part opportunist. A fascinating series of profiles.
  • “Telegram tries, and fails, to remove extremist content” By Mark Scott — Politico. Platforms other than Facebook and Twitter are struggling to moderate right-wing and extremist content that violates their policies and terms of service.

Other Developments

  • The Biden-Harris transition team announced that a statutorily established science advisor will now be a member of the Cabinet and named its picks for this and other positions. The Office of Science and Technology Policy (OSTP) was established by Congress in 1976, during the Ford Administration. However, the OSTP Director has not been a member of the Cabinet alongside the Senate-confirmed Secretaries and others. President-elect Joe Biden has decided to elevate the OSTP Director to the Cabinet, likely to signal the importance of science and technology in his Administration. The current OSTP has exercised unusual influence in the Trump Administration under the leadership of OSTP Associate Director Michael Kratsios and shaped policy in realms like artificial intelligence and national security.
    • In the press release, the transition team explained:
      • Dr. Eric Lander will be nominated as Director of the OSTP and serve as the Presidential Science Advisor. The president-elect is elevating the role of science within the White House, including by designating the Presidential Science Advisor as a member of the Cabinet for the first time in history. One of the country’s leading scientists, Dr. Lander was a principal leader of the Human Genome Project and has been a pioneer in the field of genomic medicine. He is the founding director of the Broad Institute of MIT and Harvard, one of the nation’s leading research institutes. During the Obama-Biden administration, he served as external Co-Chair of the President’s Council of Advisors on Science and Technology. Dr. Lander will be the first life scientist to serve as Presidential Science Advisor.
      • Dr. Alondra Nelson will serve as OSTP Deputy Director for Science and Society. A distinguished scholar of science, technology, social inequality, and race, Dr. Nelson is president of the Social Science Research Council, an independent, nonprofit organization linking social science research to practice and policy. She is also a professor at the Institute for Advanced Study, one of the nation’s most distinguished research institutes, located in Princeton, NJ.
      • Dr. Frances H. Arnold and Dr. Maria Zuber will serve as the external Co-Chairs of the President’s Council of Advisors on Science and Technology (PCAST). An expert in protein engineering, Dr. Arnold is the first American woman to win the Nobel Prize in Chemistry. Dr. Zuber, an expert in geophysics and planetary science, is the first woman to lead a NASA spacecraft mission and has chaired the National Science Board. They are the first women to serve as co-chairs of PCAST.
      • Dr. Francis Collins will continue serving in his role as Director of the National Institutes of Health.
      • Kei Koizumi will serve as OSTP Chief of Staff and is one of the nation’s leading experts on the federal science budget.
      • Narda Jones, who will serve as OSTP Legislative Affairs Director, was Senior Technology Policy Advisor and Counsel for the Democratic staff of the U.S. Senate Committee on Commerce, Science and Transportation.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) issued a report on supply chain security from a public-private advisory body, which represents one of the U.S. government’s lines of effort to better secure technology and electronics that emanate from the People’s Republic of China (PRC). CISA’s National Risk Management Center co-chairs the Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force along with the Information Technology Sector Coordinating Council and the Communications Sector Coordinating Council. The ICT SCRM Task Force published its Year 2 Report, which “builds upon” its Interim Report, and asserted:
    • Over the past year, the Task Force has expanded upon its first-year progress to advance meaningful partnership around supply chain risk management. Specifically, the Task Force:
      • Developed reference material to support overcoming legal obstacles to information sharing
      • Updated the Threat Evaluation Report, which evaluates threats to suppliers, with additional scenarios and mitigation measures for the corresponding threat scenarios
      • Produced a report and case studies providing in-depth descriptions of control categories and information regarding when and how to use a Qualified List to manage supply chain risks
      • Developed a template for SCRM compliance assessments and internal evaluations of alignment to industry standards
      • Analyzed the current and potential impacts from the COVID-19 pandemic, and developed a system map to visualize ICT supply chain routes and identify chokepoints
      • Surveyed supply chain related programs and initiatives that provide opportunities for potential Task Force engagement
    • Congress established an entity to address and help police supply chain risk at the end of 2018 in the “Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure Technology Act” (SECURE Act) (P.L. 115-390). The Federal Acquisition Security Council (FASC) has a number of responsibilities, including:
      • developing an information sharing process for agencies to circulate decisions throughout the federal government made to exclude entities determined to be IT supply chain risks
      • establishing a process by which entities determined to be IT supply chain risks may be excluded from procurement government-wide (exclusion orders) or suspect IT must be removed from government systems (removal orders)
      • creating an exception process under which IT from an entity subject to a removal or exclusion order may be used if warranted by national interest or national security
      • issuing recommendations for agencies on excluding entities and IT from the IT supply chain and “consent for a contractor to subcontract” and mitigation steps entities would need to take in order for the Council to rescind a removal or exclusion order
    • In September 2020, the FASC released an interim regulation, effective upon publication, that “implement[s] the requirements of the laws that govern the operation of the FASC, the sharing of supply chain risk information, and the exercise of its authorities to recommend issuance of removal and exclusion orders to address supply chain security risks…”
  • The Australian government has released its bill to remake how platforms like Facebook and Google may use news media content, including provisions for payment. The “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020” “establishes a mandatory code of conduct to help support the sustainability of the Australian news media sector by addressing bargaining power imbalances between digital platforms and Australian news businesses.” The agency charged with developing the legislation, the Australian Competition and Consumer Commission (ACCC), has tussled with Google in particular over what this law would look like, with the technology giant threatening to withdraw from Australia altogether. The ACCC had determined in its July 2019 Digital Platform Inquiry:
    • that there is a bargaining power imbalance between digital platforms and news media businesses so that news media businesses are not able to negotiate for a share of the revenue generated by the digital platforms and to which the news content created by the news media businesses contributes. Government intervention is necessary because of the public benefit provided by the production and dissemination of news, and the importance of a strong independent media in a well-functioning democracy.
    • In an Explanatory Memorandum, it is explained:
      • The Bill establishes a mandatory code of conduct to address bargaining power imbalances between digital platform services and Australian news businesses…by setting out six main elements:
        • bargaining–which require the responsible digital platform corporations and registered news business corporations that have indicated an intention to bargain, to do so in good faith;
        • compulsory arbitration–where parties cannot come to a negotiated agreement about remuneration relating to the making available of covered news content on designated digital platform services, an arbitral panel will select between two final offers made by the bargaining parties;
        • general requirements –which, among other things, require responsible digital platform corporations to provide registered news business corporations with advance notification of planned changes to an algorithm or internal practice that will have a significant effect on covered news content;
        • non-differentiation requirements –responsible digital platform corporations must not differentiate between the news businesses participating in the Code, or between participants and non-participants, because of matters that arise in relation to their participation or non-participation in the Code;
        • contracting out–the Bill recognises that a digital platform corporation may reach a commercial bargain with a news business outside the Code about remuneration or other matters. It provides that parties who notify the ACCC of such agreements would not need to comply with the general requirements, bargaining and compulsory arbitration rules (as set out in the agreement); and
        • standard offers –digital platform corporations may make standard offers to news businesses, which are intended to reduce the time and cost associated with negotiations, particularly for smaller news businesses. If the parties notify the ACCC of an agreed standard offer, those parties do not need to comply with bargaining and compulsory arbitration (as set out in the agreement);
  • The Federal Trade Commission (FTC) has reached a settlement with a mobile advertising company over “allegations that it failed to provide in-game rewards users were promised for completing advertising offers.” The FTC unanimously agreed to the proposed settlement with Tapjoy, Inc., which bars the company “from misleading users about the rewards they can earn” and requires it to “monitor its third-party advertiser partners to ensure they do what is necessary to enable Tapjoy to deliver promised rewards to consumers.” The FTC drafted a 20-year settlement that will obligate Tapjoy, Inc. to refrain from certain practices that violate the FTC Act; in this case, that includes not making false claims about the rewards people can get if they take or do not take some action in an online game. Tapjoy, Inc. will also need to submit compliance reports, keep records, and make materials available to the FTC upon demand. Any failure to meet the terms of the settlement could prompt the FTC to seek redress in federal court, including more than $43,000 per violation.
    • In the complaint, the FTC outlined Tapjoy, Inc.’s illegal conduct:
      • Tapjoy operates an advertising platform within mobile gaming applications (“apps”). On the platform, Tapjoy promotes offers of in-app rewards (e.g., virtual currency) to consumers who complete an action, such as taking a survey or otherwise engaging with third-party advertising. Often, these consumers must divulge personal information or spend money. In many instances, Tapjoy never issues the promised reward to consumers who complete an action as instructed, or only issues the currency after a substantial delay. Consumers who attempt to contact Tapjoy to complain about missing rewards find it difficult to do so, and many consumers who complete an action as instructed and are able to submit a complaint nevertheless do not receive the promised reward.  Tapjoy has received hundreds of thousands of complaints concerning its failure to issue promised rewards to consumers. Tapjoy nevertheless has withheld rewards from consumers who have completed all required actions.
    • In its press release, the FTC highlighted the salient terms of the settlement:
      • As part of the proposed settlement, Tapjoy is prohibited from misrepresenting the rewards it offers consumers and the terms under which they are offered. In addition, the company must clearly and conspicuously display the terms under which consumers can receive such rewards and must specify that the third-party advertisers it works with determine if a reward should be issued. Tapjoy also will be required to monitor its advertisers to ensure they are following through on promised rewards, investigate complaints from consumers who say they did not receive their rewards, and discipline advertisers who deceive consumers.
    • FTC Commissioners Rohit Chopra and Rebecca Kelly Slaughter issued a joint statement, and in their summary section, they asserted:
      • The explosive growth of mobile gaming has led to mounting concerns about harmful practices, including unlawful surveillance, dark patterns, and facilitation of fraud.
      • Tapjoy’s failure to properly police its mobile gaming advertising platform cheated developers and gamers out of promised compensation and rewards.
      • The Commission must closely scrutinize today’s gaming gatekeepers, including app stores and advertising middlemen, to prevent harm to developers and gamers.
    • On the last point, Chopra and Kelly Slaughter argued:
      • We should all be concerned that gatekeepers can harm developers and squelch innovation. The clearest example is rent extraction: Apple and Google charge mobile app developers on their platforms up to 30 percent of sales, and even bar developers from trying to avoid this tax through offering alternative payment systems. While larger gaming companies are pursuing legal action against these practices, developers and small businesses risk severe retaliation for speaking up, including outright suspension from app stores – an effective death sentence.
      • This market structure also has cascading effects on gamers and consumers. Under heavy taxation by Apple and Google, developers have been forced to adopt alternative monetization models that rely on surveillance, manipulation, and other harmful practices.
  • The United Kingdom’s (UK) High Court ruled against the use of general warrants for online surveillance by the UK’s security agencies (MI5, MI6, and the Government Communications Headquarters (GCHQ)). Privacy International (PI), a British advocacy organization, had brought the suit after Edward Snowden revealed the scope of the United States National Security Agency’s (NSA) surveillance activities, including bulk collection of information, a significant portion of which required hacking. PI sued in a special tribunal formed to resolve claims against British security agencies, where the government asserted general warrants would suffice for purposes of mass hacking. PI disagreed, arguing this ran counter to 250 years of established law in the UK under which warrants must be based on reasonable suspicion, specific in what is being sought, and proportionate. The High Court agreed with PI.
    • In its statement after the ruling, PI asserted:
      • Because general warrants are by definition not targeted (and could therefore apply to hundreds, thousands or even millions of people) they violate individuals’ right not to have their property searched without lawful authority, and are therefore illegal.
      • The adaptation of these 250-year-old principles to modern government hacking and property interference is of great significance. The Court signals that fundamental constitutional principles still need to be applied in the context of surveillance and that the government cannot circumvent traditional protections afforded by the common law.
  • In Indiana, the attorney general is calling on the governor “to adopt a safe harbor rule I proposed that would incentivize companies to take strong data protection measures, which will reduce the scale and frequency of cyberattacks in Indiana.” Attorney General Curtis Hill urged Governor Eric J. Holcomb to allow a change in the state’s data security regulations to take effect.
    • The proposed rule provides:
      • Procedures adopted under IC 24-4.9-3-3.5(c) are presumed reasonable if the procedures comply with this section, including one (1) of the following applicable standards:
        • (1) A covered entity implements and maintains a cybersecurity program that complies with the National Institute of Standards and Technology (NIST) cybersecurity framework and follows the most recent version of one (1) of the following standards:
          • (A) NIST Special Publication 800-171.
          • (B) NIST SP 800-53.
          • (C) The Federal Risk and Authorization Management Program (FedRAMP) security assessment framework.
          • (D) International Organization for Standardization/International Electrotechnical Commission 27000 family – information security management systems.
        • (2) A covered entity is regulated by the federal or state government and complies with one (1) of the following standards as it applies to the covered entity:
          • (A) The federal USA Patriot Act (P.L. 107-56).
          • (B) Executive Order 13224.
          • (C) The federal Driver’s Privacy Protection Act (18 U.S.C. 2721 et seq.).
          • (D) The federal Fair Credit Reporting Act (15 U.S.C. 1681 et seq.).
          • (E) The federal Health Insurance Portability and Accountability Act (HIPAA) (P.L. 104-191).
        • (3) A covered entity complies with the current version of the payment card industry data security standard in place at the time of the breach of security of data, as published by the Payment Card Industry Security Standard Council.
      • The regulations further provide that if a data base owner can show “its data security plan was reasonably designed, implemented, and executed to prevent the breach of security of data” then it “will not be subject to a civil action from the office of the attorney general arising from the breach of security of data.”
  • The Tech Transparency Project (TTP) is claiming that Apple “has removed apps in China at the government’s request,” the majority of which “involve activities like illegal gambling and porn.” However, TTP asserts that its analysis “suggests Apple is proactively blocking scores of other apps that are politically sensitive for Beijing.”

Coming Events

  • On 19 January, the Senate Intelligence Committee will hold a hearing on the nomination of Avril Haines to be the Director of National Intelligence.
  • The Senate Homeland Security and Governmental Affairs Committee will hold a hearing on the nomination of Alejandro N. Mayorkas to be Secretary of Homeland Security on 19 January.
  • On 19 January, the Senate Armed Services Committee will hold a hearing on the nomination of former General Lloyd Austin III to be Secretary of Defense.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.


Further Reading, Other Developments, and Coming Events (13 and 14 January 2021)

Further Reading

  • YouTube Suspends Trump’s Channel for at Least Seven Days” By Daisuke Wakabayashi — The New York Times. Even Google is wading further into the water. Its YouTube platform flagged a video of President Donald Trump’s for inciting violence, and citing the “ongoing potential for violence,” determined that Trump and his team will not be able to upload videos for seven days and that the comments section will be permanently disabled. YouTube has been the least inclined of the major platforms to moderate content and has somehow escaped the scrutiny and opprobrium Facebook and Twitter have faced even though those platforms have been more active in policing offensive content.
  • Online misinformation that led to Capitol siege is ‘radicalization,’ say researchers” By Elizabeth Culliford — Reuters. Experts in online disinformation are saying that the different conspiracy movements that impelled followers to attack the United States (U.S.) Capitol are the result of radicalization. Online activities translated into real world violence, they say. They also decried the reactive posture of social media platforms, which waited for an insurrection before taking steps experts and others had long been begging them to take.
  • Uganda orders all social media to be blocked – letter” — Reuters. In response to Facebook blocking a number of government-related accounts for “Coordinated Inauthentic Behaviour” (CIB), the Ugandan government has blocked all access to social media ahead of its elections. In a letter seen by Reuters, the Uganda Communications Commission directed telecommunications providers “to immediately suspend any access and use, direct or otherwise, of all social media platforms and online messaging applications over your network until further notice.” This may become standard practice for many regimes around the world if social media companies crack down on government propaganda.
  • BlackBerry sells 90 patents to Huawei, covering key smartphone technology advances” By Sean Silcoff — The Globe and Mail. Critics of a deal to assign 90 key BlackBerry patents to Huawei are calling on the government of Prime Minister Justin Trudeau to be more involved in protecting Canadian intellectual property and innovations.
  • ‘Threat to democracy is real’: MPs call for social media code of conduct” By David Crowe and Nick Bonyhady — The Sydney Morning Herald. There have been mixed responses in Australia’s Parliament to social media platforms banning President Donald Trump after his role in inciting the violence at the United States (U.S.) Capitol. Many agree with the platforms, some disagree strenuously in light of other inflammatory content that is not taken down, and many want greater rationality and transparency in how platforms make these decisions. And since Canberra has been among the most active governments in regulating technology, the debate may inform the drafting of its “Online Safety Bill,” which may place legal obligations on social media platforms.
  • Poland plans to make censoring of social media accounts illegal” By Shaun Walker — The Guardian. Governments around the world continue to respond to a number of social media companies deciding to deplatform United States (U.S.) President Donald Trump. In Warsaw there is a draft bill that would make deplatforming a person illegal unless the offense is also contrary to Polish law. The spin is that the right-wing government in Warsaw is less interested in protecting free speech and more interested in propagating the same grievances as the right wing in the United States. Therefore, this push in Poland may be more about messaging and trying to cow social media companies and less about protecting free speech, especially speech with which the government disagrees (e.g., advocates for LGBTQI rights have been silenced in Poland).
  • Facebook, Twitter could face punishing regulation for their role in U.S. Capitol riot, Democrats say” By Tony Romm — The Washington Post. Democrats were already furious with social media companies for what they considered lax governance of content that clearly violated terms of service and policies. These companies are bracing for an expected barrage of hearings and legislation with the Democrats controlling the White House, House, and Senate.
  • Georgia results sweep away tech’s regulatory logjam” By Margaret Harding McGill and Ashley Gold — Axios. This is a nice survey of possible policy priorities at the agencies and in the Congress over the next two years with the Democrats in control of both.
  • The Capitol rioters put themselves all over social media. Now they’re getting arrested.” By Sara Morrison — Recode. Will the attack on the United States (U.S.) Capitol be the first time a major crime is solved by the evidence largely provided by the accused? It is sure looking that way as law enforcement continues to use the posts of the rioters to apprehend, arrest, and charge them. Additionally, in the same way people who acted in racist and entitled ways (e.g. Amy Cooper in Central Park threatening an African American gentleman with calling the police even though he had asked her to put her dog on a leash) were caught through crowd-sourced identification pushes, rioters are also being identified.
  • CISA: SolarWinds Hackers Got Into Networks by Guessing Passwords” By Mariam Baksh — Nextgov. The Cybersecurity and Infrastructure Security Agency (CISA) has updated its alert on the SolarWinds hack to reflect its findings (a detection sketch for these techniques follows this list). CISA explained:
    • CISA incident response investigations have identified that initial access in some cases was obtained by password guessing [T1110.001], password spraying [T1110.003], and inappropriately secured administrative credentials [T1078] accessible via external remote access services [T1133]. Initial access root cause analysis is still ongoing in a number of response activities and CISA will update this section as additional initial vectors are identified.
  • “A Facial Recognition Company Says That Viral Washington Times “Antifa” Story Is False” By Craig Silverman — BuzzFeed News. XRVision denied the Washington Times’ account that the company had identified antifa protestors among the rioters at the United States (U.S.) Capitol (archived here). The company said it had identified two Neo-Nazis and a QAnon adherent. Even though the story was retracted and a corrected version issued, some, such as Trump supporter Representative Matt Gaetz (R-FL), still claimed the original story had merit.
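The brute-force techniques CISA names in the alert excerpted above differ mainly in shape: password guessing hammers a single account with many passwords, while password spraying tries a few common passwords across many accounts. The toy Python sketch below illustrates that distinction for a stream of failed-login events; the thresholds, field names, and log format are assumptions for illustration and are not drawn from CISA's guidance.

```python
# Illustrative only: a toy classifier for the two brute-force patterns CISA names,
# password guessing (T1110.001) and password spraying (T1110.003).
# Thresholds and event format are assumptions, not CISA recommendations.
from collections import defaultdict


def classify_failed_logins(events, guess_threshold=20, spray_threshold=15):
    """events: iterable of (source_ip, account) tuples for failed logins.
    Returns a dict mapping source_ip to a suspected technique."""
    per_source_accounts = defaultdict(set)
    per_source_account_counts = defaultdict(lambda: defaultdict(int))

    for source_ip, account in events:
        per_source_accounts[source_ip].add(account)
        per_source_account_counts[source_ip][account] += 1

    findings = {}
    for source_ip, accounts in per_source_accounts.items():
        max_attempts_single_account = max(per_source_account_counts[source_ip].values())
        if len(accounts) >= spray_threshold:
            # One source trying a few passwords against many different accounts.
            findings[source_ip] = "possible password spraying"
        elif max_attempts_single_account >= guess_threshold:
            # One source hammering a single account with many passwords.
            findings[source_ip] = "possible password guessing"
    return findings


if __name__ == "__main__":
    sample = [("203.0.113.5", f"user{i}") for i in range(20)]   # spraying pattern
    sample += [("198.51.100.7", "admin")] * 25                  # guessing pattern
    print(classify_failed_logins(sample))
```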

Other Developments

  • The United States (U.S.) Trade Representative (USTR) announced that it would not act on the basis of three completed reports on Digital Services Taxes (DST) three nations have put in place and also that it would not proceed with tariffs in retaliation against France, one of the first nations in the world to enact a DST. Last year, the Organization for Economic Co-operation and Development (OECD) convened multi-lateral talks to resolve differences on how a global digital services tax will ideally function, with most of the nations involved arguing for a 2% tax to be assessed in the nation where the transaction occurs as opposed to where the company is headquartered. European Union (EU) officials claimed an agreement was possible, but the U.S. negotiators walked away from the table. It will fall to the Biden Administration to act on these USTR DST investigations if it chooses.
    • In its press release, the USTR stated it would “suspend the tariff action in the Section 301 investigation of France’s Digital Services Tax (DST).”
      • The USTR added:
        • The additional tariffs on certain products of France were announced in July 2020, and were scheduled to go into effect on January 6, 2021.  The U.S. Trade Representative has decided to suspend the tariffs in light of the ongoing investigation of similar DSTs adopted or under consideration in ten other jurisdictions.  Those investigations have significantly progressed, but have not yet reached a determination on possible trade actions.  A suspension of the tariff action in the France DST investigation will promote a coordinated response in all of the ongoing DST investigations.
      • In its December 2019 report, the USTR determined “that France’s DST is unreasonable or discriminatory and burdens or restricts U.S. commerce, and therefore is actionable under sections 301(b) and 304(a) of the Trade Act (19 U.S.C. 2411(b) and 2414(a))” and proposed a range of measures in retaliation.
    • The USTR also “issued findings in Section 301 investigations of Digital Service Taxes (DSTs) adopted by India, Italy, and Turkey, concluding that each of the DSTs discriminates against U.S. companies, is inconsistent with prevailing principles of international taxation, and burden or restricts U.S. commerce.” The USTR stated it “is not taking any specific actions in connection with the findings at this time but will continue to evaluate all available options.” The USTR added:
      • The Section 301 investigations of the DSTs adopted by India, Italy, and Turkey were initiated in June 2020, along with investigations of DSTs adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, Indonesia, Spain, and the United Kingdom.  USTR expects to announce the progress or completion of additional DST investigations in the near future. 
  • The United Kingdom’s Competition and Markets Authority (CMA) has started investigating Google’s ‘Privacy Sandbox’ project to “assess whether the proposals could cause advertising spend to become even more concentrated on Google’s ecosystem at the expense of its competitors.” The CMA asserted:
    • Third party cookies currently play a fundamental role online and in digital advertising. They help businesses target advertising effectively and fund free online content for consumers, such as newspapers. But there have also been concerns about their legality and use from a privacy perspective, as they allow consumers’ behaviour to be tracked across the web in ways that many consumers may feel uncomfortable with and may find difficult to understand.
    • Google’s announced changes – known collectively as the ‘Privacy Sandbox’ project – would disable third party cookies on the Chrome browser and Chromium browser engine and replace them with a new set of tools for targeting advertising and other functionality that they say will protect consumers’ privacy to a greater extent. The project is already under way, but Google’s final proposals have not yet been decided or implemented. In its recent market study into online platforms and digital advertising, the CMA highlighted a number of concerns about their potential impact, including that they could undermine the ability of publishers to generate revenue and undermine competition in digital advertising, entrenching Google’s market power.
  • Facebook took down three networks engaged in coordinated inauthentic behavior (CIB) originating from France and Russia that allegedly sought to influence nations in Africa and the Middle East. Facebook asserted:
    • Each of the networks we removed today targeted people outside of their country of origin, primarily targeting Africa, and also some countries in the Middle East. We found all three of them as a result of our proactive internal investigations and worked with external researchers to assess the full scope of these activities across the internet.
    • While we’ve seen influence operations target the same regions in the past, this was the first time our team found two campaigns — from France and Russia — actively engage with one another, including by befriending, commenting and criticizing the opposing side for being fake. It appears that this Russian network was an attempt to rebuild their operations after our October 2019 takedown, which also coincided with a notable shift in focus of the French campaign to begin to post about Russia’s manipulation campaigns in Africa.
    • Unlike the operation from France, both Russia-linked networks relied on local nationals in the countries they targeted to generate content and manage their activity across internet services. This is consistent with cases we exposed in the past, including in Ghana and the US, where we saw the Russian campaigns co-opt authentic voices to join their influence operations, likely to avoid detection and help appear more authentic. Despite these efforts, our investigation identified some links between these two Russian campaigns and also with our past enforcements.
  • Two of the top Democrats on the House Energy and Commerce Committee, along with another Democrat, wrote nine internet service providers (ISPs) “questioning their commitment to consumers amid ISPs raising prices and imposing data caps during the COVID-19 pandemic.” Committee Chair Frank Pallone, Jr. (D-NJ), Communications and Technology Subcommittee Chairman Mike Doyle (D-PA), and Representative Jerry McNerney (D-CA) signed the letters.
    • Pallone, Doyle, and McNerney took issue with the companies raising prices and imposing data caps after having pledged not to do so at the behest of the Federal Communications Commission (FCC). They asked the companies to answer a series of questions:
      • Did the company participate in the FCC’s “Keep Americans Connected” pledge?
      • Has the company increased prices for fixed or mobile consumer internet and fixed or mobile phone service since the start of the pandemic, or do they plan to raise prices on such plans within the next six months?
      • Prior to March 2020, did any of the company’s service plans impose a maximum data consumption threshold on its subscribers?
      • Since March 2020, has the company modified or imposed any new maximum data consumption thresholds on service plans, or do they plan to do so within the next six months? 
      • Did the company stop disconnecting customers’ internet or telephone service due to their inability to pay during the pandemic? 
      • Does the company offer a plan designed for low-income households, or a plan established in March or later to help students and families with connectivity during the pandemic?
      • Beyond service offerings for low-income customers, what steps is the company currently taking to assist individuals and families facing financial hardship due to circumstances related to COVID-19? 
  • The United States (U.S.) Department of Homeland Security (DHS) issued a “Data Security Business Advisory: Risks and Considerations for Businesses Using Data Services and Equipment from Firms Linked to the People’s Republic of China,” that “describes the data-related risks American businesses face as a result of the actions of the People’s Republic of China (PRC) and outlines steps that businesses can take to mitigate these risks.” DHS generally recommended:
    • Businesses and individuals that operate in the PRC or with PRC firms or entities should scrutinize any business relationship that provides access to data—whether business confidential, trade secrets, customer personally identifiable information (PII), or other sensitive information. Businesses should identify the sensitive personal and proprietary information in their possession. To the extent possible, they should minimize the amount of at-risk data being stored and used in the PRC or in places accessible by PRC authorities. Robust due diligence and transaction monitoring are also critical for addressing potential legal exposure, reputation risks, and unfair advantage that data and intellectual property theft would provide competitors. Businesses should seek to acquire a thorough understanding of the ownership of data service providers, location of data infrastructure, and any tangential foreign business relationships and significant foreign investors.
  • The Federal Communications Commission (FCC) is asking for comments on the $3.2 billion Emergency Broadband Benefit Program established in the “Consolidated Appropriations Act, 2021” (H.R. 133). Comments are due by 16 February 2021. The FCC noted “eligible households may receive a discount off the cost of broadband service and certain connected devices during an emergency period relating to the COVID-19 pandemic, and participating providers can receive a reimbursement for such discounts.” The FCC explained the program in further detail (a worked example of the reimbursement rules follows this list):
    • Pursuant to the Consolidated Appropriations Act, the Emergency Broadband Benefit Program will use available funding from the Emergency Broadband Connectivity Fund to support participating providers’ provision of certain broadband services and connected devices to qualifying households.
    • To participate in the program, a provider must elect to participate and either be designated as an eligible telecommunications carrier or be approved by the Commission. Participating providers will make available to eligible households a monthly discount off the standard rate for an Internet service offering and associated equipment, up to $50.00 per month.
    • On Tribal lands, the monthly discount may be up to $75.00 per month. Participating providers will receive reimbursement from the Emergency Broadband Benefit Program for the discounts provided.
    • Participating providers that also supply an eligible household with a laptop, desktop computer, or tablet (connected device) for use during the emergency period may receive a single reimbursement of up to $100.00 for the connected device, if the charge to the eligible household for that device is more than $10.00 but less than $50.00.  An eligible household may receive only one supported device.  Providers must submit certain certifications to the Commission to receive reimbursement from the program, and the Commission is required to adopt audit requirements to ensure provider compliance and prevent waste, fraud, and abuse.
  • The Biden-Harris transition team named Anne Neuberger, the National Security Agency’s (NSA) Director of Cybersecurity, as the Biden White House’s Deputy National Security Advisor for Cyber and Emerging Technology. Neuberger’s portfolio at the NSA included “lead[ing] NSA’s cybersecurity mission, including emerging technology areas like quantum-resistant cryptography.” At the National Security Council, Neuberger will work to coordinate cybersecurity and emerging technology policy across agencies and funnel policy options up to the full NSC and ultimately the President. It is not clear how Neuberger’s portfolio will interact with the newly created National Cyber Director, a position that, thus far, has remained without a nominee.
    • The transition noted “[p]rior to this role, she led NSA’s Election Security effort and served as Assistant Deputy Director of NSA’s Operations Directorate, overseeing foreign intelligence and cybersecurity operations…[and] also previously served as NSA’s first Chief Risk Officer, as Director of NSA’s Commercial Solutions Center, as Director of the Enduring Security Framework cybersecurity public-private partnership, as the Navy’s Deputy Chief Management Officer, and as a White House Fellow.” The transition stated that “[p]rior to joining government service, Neuberger was Senior Vice President of Operations at American Stock Transfer & Trust Company (AST), where she directed technology and operations.”
  • The Federal Communications Commission (FCC) published a final rule in response to the United States (U.S.) Court of Appeals for the District of Columbia Circuit’s decision remanding three aspects of the FCC’s rollback of net neutrality, the “Restoring Internet Freedom Order.” The FCC explained the final rule:
    • responds to a remand from the U.S. Court of Appeals for the D.C. Circuit directing the Commission to assess the effects of the Commission’s Restoring Internet Freedom Order on public safety, pole attachments, and the statutory basis for broadband internet access service’s inclusion in the universal service Lifeline program. This document also amends the Commission’s rules to remove broadband internet service from the list of services supported by the universal service Lifeline program, while preserving the Commission’s authority to fund broadband internet access service through the Lifeline program.
    • In 2014, the U.S. Court of Appeals for the District of Columbia struck down a 2010 FCC net neutrality order in Verizon v. FCC, but the court did suggest a path forward. The court held the FCC “reasonably interpreted section 706 to empower it to promulgate rules governing broadband providers’ treatment of Internet traffic, and its justification for the specific rules at issue here—that they will preserve and facilitate the “virtuous circle” of innovation that has driven the explosive growth of the Internet—is reasonable and supported by substantial evidence.” The court added that “even though the Commission has general authority to regulate in this arena, it may not impose requirements that contravene express statutory mandates…[and] [g]iven that the Commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers, the Communications Act expressly prohibits the Commission from nonetheless regulating them as such.” However, in 2016, the same court upheld the 2015 net neutrality regulations in U.S. Telecom Association v. FCC, and in 2019 it upheld most of the Trump Administration FCC’s repeal of those earlier rules in Mozilla v. FCC.
    • However, the D.C. Circuit declined to accept the FCC’s attempt to preempt all contrary state laws and struck down this part of the FCC’s rulemaking. Consequently, states and local jurisdictions may now be free to enact regulations of internet services along the lines of the FCC’s now repealed Open Internet Order. The D.C. Circuit also sent the case back to the FCC for further consideration on three points.
    • In its request for comments on how to respond to the remand, the FCC summarized the three issues (public safety, pole attachments, and the Lifeline program):
      • Public Safety.  First, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect public safety. In the Restoring Internet Freedom Order, the Commission predicted, for example, that permitting paid prioritization arrangements would “increase network innovation,” “lead[] to higher investment in broadband capacity as well as greater innovation on the edge provider side of the market,” and “likely . . . be used to deliver enhanced service for applications that need QoS [i.e., quality of service] guarantees.” Could the network improvements made possible by prioritization arrangements benefit public safety applications—for example, by enabling the more rapid, reliable transmission of public safety-related communications during emergencies? 
      • Pole Attachments.  Second, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect the regulation of pole attachments in states subject to federal regulation.  To what extent are ISPs’ pole attachments subject to Commission authority in non-reverse preemption states by virtue of the ISPs’ provision of cable or telecommunications services covered by section 224?  What impact would the inapplicability of section 224 to broadband-only providers have on their access to poles?  Have pole owners, following the Order, “increase[d] pole attachment rates or inhibit[ed] broadband providers from attaching equipment”?  How could we use metrics like increases or decreases in broadband deployment to measure the impact the Order has had on pole attachment practices?  Are there any other impacts on the regulation of pole attachments from the changes adopted in the Order?  Finally, how do any potential considerations about pole attachments bear on the Commission’s underlying decision to classify broadband as a Title I information service?
      • Lifeline Program.  Third, we seek to refresh the record on how the changes adopted in the Restoring Internet Freedom Order might affect the Lifeline program.  In particular, we seek to refresh the record on the Commission’s authority to direct Lifeline support to eligible telecommunications carriers (ETCs) providing broadband service to qualifying low-income consumers.  In the 2017 Lifeline NPRM, the Commission proposed that it “has authority under Section 254(e) of the Act to provide Lifeline support to ETCs that provide broadband service over facilities-based broadband-capable networks that support voice service,” and that “[t]his legal authority does not depend on the regulatory classification of broadband Internet access service and, thus, ensures the Lifeline program has a role in closing the digital divide regardless of the regulatory classification of broadband service.”  How, if at all, does the Mozilla decision bear on that proposal, and should the Commission proceed to adopt it? 
  • The Federal Trade Commission (FTC) reached a settlement with a photo app company that allegedly did not tell users their photos would be subject to the company’s facial recognition technology. The FTC deemed this a deceptive business practice in violation of Section 5 of the FTC Act and negotiated a settlement the Commissioners approved in a 5-0 vote. The consent order includes interesting, perhaps even novel, language requiring the company “to delete models and algorithms it developed by using the photos and videos uploaded by its users” according to the FTC’s press release. (An illustrative sketch of the kind of image-filtering script described in the complaint follows this list.)
    • In the complaint, the FTC asserted:
      • Since 2015, Everalbum has provided Ever, a photo storage and organization application, to consumers.
      • In February 2017, Everalbum launched its “Friends” feature, which operates on both the iOS and Android versions of the Ever app. The Friends feature uses face recognition to group users’ photos by faces of the people who appear in the photos. The user can choose to apply “tags” to identify by name (e.g., “Jane”) or alias (e.g., “Mom”) the individuals who appear in their photos. These tags are not available to other Ever users. When Everalbum launched the Friends feature, it enabled face recognition by default for all users of the Ever mobile app. At that time, Everalbum did not provide users of the Ever mobile app an option to turn off or disable the feature.
      • However, prior to April 2019, Ever mobile app users who were located anywhere other than Texas, Illinois, Washington, and the European Union did not need to, and indeed could not, take any affirmative action to “let[ Everalbum] know” that it should apply face recognition to the users’ photos. In fact, for those users, face recognition was enabled by default and the users lacked the ability to disable it. Thus, the article was misleading for Ever mobile app users located outside of Texas, Illinois, Washington, and the European Union.
      • Between September 2017 and August 2019, Everalbum combined millions of facial images that it extracted from Ever users’ photos with facial images that Everalbum obtained from publicly available datasets in order to create four new datasets to be used in the development of its face recognition technology. In each instance, Everalbum used computer scripts to identify and compile from Ever users’ photos images of faces that met certain criteria (i.e., not associated with a deactivated Ever account, not blurry, not too small, not a duplicate of another image, associated with a specified minimum number of images of the same tagged identity, and, in three of the four instances, not identified by Everalbum’s machines as being an image of someone under the age of thirteen).
      • The FTC summarized its settlement:
        • The proposed settlement requires Everalbum to delete:
          • the photos and videos of Ever app users who deactivated their accounts;
          • all face embeddings—data reflecting facial features that can be used for facial recognition purposes—the company derived from the photos of Ever users who did not give their express consent to their use; and
          • any facial recognition models or algorithms developed with Ever users’ photos or videos.
        • In addition, the proposed settlement prohibits Everalbum from misrepresenting how it collects, uses, discloses, maintains, or deletes personal information, including face embeddings created with the use of facial recognition technology, as well as the extent to which it protects the privacy and security of personal information it collects. Under the proposed settlement, if the company markets software to consumers for personal use, it must obtain a user’s express consent before using biometric information it collected from the user through that software to create face embeddings or develop facial recognition technology.
      • FTC Commissioner Rohit Chopra issued a statement, explaining his view on facial recognition technology and the settlement:
        • As outlined in the complaint, Everalbum made promises that users could choose not to have facial recognition technology applied to their images, and that users could delete the images and their account. In addition to those promises, Everalbum had clear evidence that many of the photo app’s users did not want to be roped into facial recognition. The company broke its promises, which constitutes illegal deception according to the FTC’s complaint. This matter and the FTC’s proposed resolution are noteworthy for several reasons.
        • First, the FTC’s proposed order requires Everalbum to forfeit the fruits of its deception. Specifically, the company must delete the facial recognition technologies enhanced by any improperly obtained photos. Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data. This is an important course correction.
        • Second, the settlement does not require the defendant to pay any penalty. This is unfortunate. To avoid this in the future, the FTC needs to take further steps to trigger penalties, damages, and other relief for facial recognition and data protection abuses. Commissioners have voted to enter into scores of settlements that address deceptive practices regarding the collection, use, and sharing of personal data. There does not appear to be any meaningful dispute that these practices are illegal. However, since Commissioners have not restated this precedent into a rule under Section 18 of the FTC Act, we are unable to seek penalties and other relief for even the most egregious offenses when we first discover them.
        • Finally, the Everalbum matter makes it clear why it is important to maintain states’ authority to protect personal data. Because the people of Illinois, Washington, and Texas passed laws related to facial recognition and biometric identifiers, Everalbum took greater care when it came to these individuals in these states. The company’s deception targeted Americans who live in states with no specific state law protections.
  • The Trump Administration issued the “National Maritime Cybersecurity Plan” that “sets forth how the United States government will defend the American economy through enhanced cybersecurity coordination, policies and practices, aimed at mitigating risks to the maritime sub-sector, promoting prosperity through information and intelligence sharing, and preserving and increasing the nation’s cyber workforce” according to National Security Advisor Robert O’Brien. It will be up to the Biden Administration to implement, revise, or discard this strategy, but strategy documents such as this that contain anodyne recommendations tend to stay in place, at least in the short term. It bears noting that the uneven margins of the columns in the document suggest a rush to issue it before the end of the Trump Administration. Nevertheless, O’Brien added:
    • President [Donald] Trump designated the cybersecurity of the Maritime Transportation System (MTS) as a top priority for national defense, homeland security, and economic competitiveness in the 2017 National Security Strategy. The MTS contributes to one quarter of all United States gross domestic product, or approximately $5.4 trillion. MTS operators are increasingly reliant on information technology (IT) and operational technology (OT) to maximize the reliability and efficiency of maritime commerce. This plan articulates how the United States government can buy down the potential catastrophic risks to our national security and economic prosperity created by technology innovations to strengthen maritime commerce efficiency and reliability.
    • The strategy lists a number of priority actions for the executive branch, including:
      • The United States will de-conflict government roles and responsibilities.
      • The United States will develop risk modeling to inform maritime cybersecurity standards and best practices.
      • The United States will strengthen cybersecurity requirements in port services contracts and leasing.
      • The United States will develop procedures to identify, prioritize, mitigate, and investigate cybersecurity risks in critical ship and port systems.
      • Exchange United States government information with the maritime industry.
      • Share cybersecurity intelligence with appropriate non-government entities.
      • Prioritize maritime cybersecurity intelligence collection.
  • The National Security Agency’s (NSA) Cybersecurity Directorate has issued its first annual review, the “2020 NSA Cybersecurity Year in Review,” which encapsulates the first year of operation for the newly created directorate. (A sketch of the protective DNS concept highlighted below follows this list.)
    • Highlights include:
      • In 2020, NSA focused on modernizing encryption across the Department of Defense (DOD). It began with a push to eliminate cryptography that is at risk from attack due to adversarial computational advances. This applied to several systems commonly used by the Armed Services today to provide command and control, critical communications, and battlefield awareness. It also applied to operational practices concerning the handling of cryptographic keys and the implementation of modern suites of cryptography in network communications devices.
      • 2020 was notable for the number of Cybersecurity Advisories (CSAs) and other products NSA cybersecurity produced and released. These products are intended to alert network owners, specifically National Security System (NSS), Department of Defense (DOD), and Defense Industrial Base (DIB), of cyber threats and enable defenders to take immediate action to secure their systems.
      • 2020 was notable not just because it was the NSA Cybersecurity Directorate’s first year nor because of COVID-19, but also because it was an election year in the United States. Drawing on lessons learned from the 2016 presidential election and the 2018 mid-term elections, NSA was fully engaged in whole-of-government efforts to protect 2020 election from foreign interference and influence. Cybersecurity was a foundational component of NSA’s overall election defense effort.
      • This past year, NSA cybersecurity prioritized public-private collaboration, invested in cybersecurity research, and made a concerted effort to build trusted partnerships with the cybersecurity community.
      • The NSA touted the following achievements:
        • In November 2019, NSA began laying the groundwork to conduct a pilot with the Defense Cyber Crime Center and five DIB companies to monitor and block malicious network traffic based on continuous automated analysis of the domain names these companies’ networks were contacting. The pilot’s operational phase commenced in March 2020. Over six months, the Protective Domain Name Service (PDNS) examined more than 4 billion DNS queries to and from these companies. The PDNS provider identified callouts to 3,519 malicious domains and blocked upwards of 13 million connections to those domains. The pilot proved the value of DoD expanding the PDNS service to all DIB entities at scale
        • How cyber secure is cyber “ready” for combat? In response to legislation that recognized the imperative of protecting key weapons and space systems from adversary cyber intrusions, NSA partnered closely with the DoD CIO, Joint Staff, Undersecretary of Defense for Acquisition & Sustainment, and the Military Services to structure, design, and execute a new cybersecurity program, focused on the most important weapons and space systems, known as the Strategic Cybersecurity Program (SCP), with the mindset of “stop assessing and start addressing.” The program initially identified 12 key weapons and space systems that must be evaluated for cybersecurity vulnerabilities that need to be mitigated. This is either due to the existence of intelligence indicating they are being targeted by cyber adversaries or because the systems are particularly important to warfighting. These systems cover all warfighting domains (land, sea, air, cyber, and space). Under the auspices of the SCP, NSA and military service partners will conduct cybersecurity evaluations, and, most importantly, maintain cyber risk scoreboards and mitigation plans accountability in reducing cyber risk to acceptable levels.
      • The NSA sees the following issues on the horizon:
        • In October 2020, NSA launched an expansive effort across the Executive Branch to understand how we can better inform, drive, and understand the activities of NSS owners to prevent, or respond to, critical cybersecurity events, and cultivate an operationally-aligned community resilient against the most advanced threats. These efforts across the community will come to fruition during the first quarter of 2021 and are expected to unify disparate elements across USG for stronger cybersecurity at scale.
        • NSA Cybersecurity is also focused on combating ransomware, a significant threat to NSS and critical infrastructure. Ransomware activity has become more destructive and impactful in nature and scope. Malicious actors target critical data and propagate ransomware across entire networks, alarmingly focusing recent attacks against U.S. hospitals. In 2020, NSA formed multiple working groups with U.S. Government agencies and other partners to identify ways to make ransomware operations more difficult for our adversaries, less scalable, and less lucrative. While the ransomware threat remains significant, NSA will continue to develop innovative ways to keep the activity at bay.
  • This week, Parler sued Amazon after the company rescinded web hosting services to the social media platform billed as the conservative, unbiased alternative to Twitter. Amazon responded with an extensive list of the inflammatory, inciting material upon which it based its decision.
    • In its 11 January complaint, Parler asked a federal court “for injunctive relief, including a temporary restraining order and preliminary injunctive relief, and damages” mainly because “AWS’s decision to effectively terminate Parler’s account is apparently motivated by political animus…[and] is also apparently designed to reduce competition in the microblogging services market to the benefit of Twitter” in violation of federal antitrust law.
    • In its 12 January response, Amazon disagreed:
      • This case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (“AWS”) content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens. There is no legal basis in AWS’s customer agreements or otherwise to compel AWS to host content of this nature. AWS notified Parler repeatedly that its content violated the parties’ agreement, requested removal, and reviewed Parler’s plan to address the problem, only to determine that Parler was both unwilling and unable to do so. AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.
    • Amazon offered a sampling of the content on Parler that caused AWS to pull the plug on the platform:
      • “Fry’em up. The whole fkn crew. #pelosi #aoc #thesquad #soros #gates #chuckschumer #hrc #obama #adamschiff #blm #antifa we are coming for you and you will know it.”
      • “#JackDorsey … you will die a bloody death alongside Mark Suckerturd [Zuckerberg]…. It has been decided and plans are being put in place. Remember the photographs inside your home while you slept? Yes, that close. You will die a sudden death!”
      • “We are going to fight in a civil War on Jan.20th, Form MILITIAS now and acquire targets.”
      • “On January 20th we need to start systematicly [sic] assassinating [sic] #liberal leaders, liberal activists, #blm leaders and supporters, members of the #nba #nfl #mlb #nhl #mainstreammedia anchors and correspondents and #antifa. I already have a news worthy event planned.”
      • “Shoot the police that protect these shitbag senators right in the head then make the senator grovel a bit before capping they ass.”
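Returning to the Emergency Broadband Benefit Program described earlier in this section, a short worked example may help make the reimbursement rules concrete. The $50 and $75 monthly caps and the up-to-$100 device reimbursement (available only when the household is charged more than $10 but less than $50 for the device) come from the FCC's description above; the function names and the simplification that a provider claims the maximum allowed are illustrative assumptions.

```python
# Worked example of the Emergency Broadband Benefit reimbursement rules described above.
# Function and variable names are illustrative; the dollar figures come from the FCC's
# description of the program, and the "maximum claimed" simplification is an assumption.

def monthly_service_reimbursement(standard_rate, on_tribal_lands=False):
    """Discount off the standard monthly rate, capped at $50 ($75 on Tribal lands)."""
    cap = 75.00 if on_tribal_lands else 50.00
    return min(standard_rate, cap)


def device_reimbursement(household_charge):
    """Up to $100 (the cap, assumed claimed in full here) for one connected device,
    but only if the household is charged more than $10 and less than $50 for it;
    one supported device per household."""
    if 10.00 < household_charge < 50.00:
        return 100.00
    return 0.00


if __name__ == "__main__":
    print(monthly_service_reimbursement(65.00))                        # 50.0
    print(monthly_service_reimbursement(65.00, on_tribal_lands=True))  # 65.0
    print(device_reimbursement(25.00))                                 # 100.0
    print(device_reimbursement(9.00))                                  # 0.0
```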
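The FTC's Everalbum complaint, excerpted above, describes computer scripts that compiled face images meeting a list of criteria. The Python sketch below is a hypothetical illustration of that kind of filtering logic only; the dataclass fields, thresholds, and age estimate are invented for the example and do not reflect Everalbum's actual code.

```python
# Hypothetical illustration of the kind of image-filtering script described in the
# FTC's Everalbum complaint (see above). Fields and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class FaceImage:
    account_active: bool       # not associated with a deactivated account
    blur_score: float          # lower means sharper
    width_px: int
    height_px: int
    is_duplicate: bool
    images_with_same_tag: int  # images sharing the same tagged identity
    estimated_age: int


def meets_criteria(img: FaceImage,
                   max_blur=0.3, min_side_px=64, min_images_per_identity=5,
                   exclude_under_13=True) -> bool:
    """Apply criteria similar to those the complaint lists."""
    if not img.account_active or img.is_duplicate:
        return False
    if img.blur_score > max_blur:
        return False
    if min(img.width_px, img.height_px) < min_side_px:
        return False
    if img.images_with_same_tag < min_images_per_identity:
        return False
    if exclude_under_13 and img.estimated_age < 13:
        return False
    return True


if __name__ == "__main__":
    sample = FaceImage(account_active=True, blur_score=0.1, width_px=128,
                       height_px=128, is_duplicate=False,
                       images_with_same_tag=7, estimated_age=30)
    print(meets_criteria(sample))  # True
```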
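Finally, the Protective Domain Name Service pilot highlighted in the NSA review above rests on a simple idea: check each DNS query against threat intelligence and refuse to resolve known-malicious domains. The toy Python resolver below illustrates that concept; the blocklist entries, sinkhole address, and function signature are assumptions and do not describe the pilot's actual implementation.

```python
# Toy illustration of the protective DNS (PDNS) concept described in the NSA review:
# queries are checked against a blocklist of known-malicious domains before resolution.
# The blocklist contents and the sinkhole address are made up for illustration.

MALICIOUS_DOMAINS = {"evil-c2.example", "phish-login.example"}
SINKHOLE_IP = "0.0.0.0"


def resolve(domain: str, upstream_lookup) -> str:
    """Return the sinkhole address for blocked domains, otherwise resolve upstream.

    upstream_lookup: a callable taking a domain name and returning an IP string.
    """
    name = domain.lower().rstrip(".")
    if name in MALICIOUS_DOMAINS:
        # Block and log the callout instead of resolving it.
        print(f"blocked DNS query for {name}")
        return SINKHOLE_IP
    return upstream_lookup(name)


if __name__ == "__main__":
    fake_upstream = lambda d: "93.184.216.34"
    print(resolve("example.com", fake_upstream))      # resolves normally
    print(resolve("evil-c2.example", fake_upstream))  # returns sinkhole address
```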

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda, under which “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 15 January, the Senate Intelligence Committee will hold a hearing on the nomination of Avril Haines to be the Director of National Intelligence.
  • The Senate Homeland Security and Governmental Affairs Committee will hold a hearing on the nomination of Alejandro N. Mayorkas to be Secretary of Homeland Security on 19 January.
  • On 19 January, the Senate Armed Services Committee will hold a hearing on the nomination of former General Lloyd Austin III to be Secretary of Defense.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

Further Reading, Other Developments, and Coming Events (12 January 2021)

Further Reading

  • Biden’s NSC to focus on global health, climate, cyber and human rights, as well as China and Russia” By Karen DeYoung — The Washington Post. Like almost every incoming White House, the Biden team has announced a restructuring of the National Security Council (NSC) to better effectuate the President-elect’s policy priorities. To no one’s surprise, the volume on cybersecurity policy will be turned up. Another notable change is the plan to take “cross-cutting” approaches to issues that will likely meld foreign and domestic and national security and civil issues, meaning there could be a new look at offensive cyber operations, for example. It is possible President Biden decides to put the genie back in the bottle, so to speak, by re-imposing an interagency decision-making process as opposed to the Trump Administration’s approach of delegating discretion to the National Security Agency/Cyber Command head. Also, the NSC will focus on emerging technology, a likely response to the technology arms race the United States finds itself in against the People’s Republic of China.
  • Exclusive: Pandemic relief aid went to media that promoted COVID misinformation” By Caitlin Dickson — yahoo! news. The consulting firm Alethea Group and the nonprofit Global Disinformation Index are claiming the COVID stimulus Paycheck Protection Program (PPP) provided loans and assistance to five firms that “were publishing false or misleading information about the pandemic, thus profiting off the infodemic” according to an Alethea Group vice president. This report follows an NBC News article claiming that 14 white supremacist and racist organizations have also received PPP loans. The Alethea Group and Global Disinformation Index named five entities who took PPP funds and kept spreading pandemic misinformation: Epoch Media Group, Newsmax Media, The Federalist, Liftable Media, and Prager University.
  • Facebook shuts Uganda accounts ahead of vote” — France24. The social media company shuttered a number of Facebook and Instagram accounts related to government officials in Uganda ahead of an election on account of “Coordinated Inauthentic Behaviour” (CIB). This follows the platform shutting down accounts related to the French Army and Russia seeking to influence events in Africa. These and other actions may indicate the platform is starting to pay more attention to the non-western world; at least one former employee has argued the platform was negligent at best and reckless at worst in not properly resourcing efforts to police CIB throughout the Third World.
  • China tried to punish European states for Huawei bans by adding eleventh-hour rule to EU investment deal” By Finbarr Bermingham — South China Morning Post. At nearly the end of talks on a People’s Republic of China (PRC)-European Union (EU) trade deal, PRC negotiators tried slipping in language that would have barred entry to the PRC’s cloud computing market to any country or company from a country that restricts Huawei’s services and products. This is alternately being seen as either standard Chinese negotiating tactics or an attempt to avenge the thwarting of the crown jewel in its telecommunications ambitions.
  • Chinese regulators to push tech giants to share consumer credit data – sources” By Julie Zhu — Reuters. Ostensibly in a move to better manage the risks of too much unsafe lending, tech giants in the People’s Republic of China (PRC) will soon need to share data on consumer loans. It seems inevitable that such data will be used by Beijing to further crack down on undesirable people and elements within the PRC.
  • The mafia turns social media influencer to reinforce its brand” By Miles Johnson — The Financial Times. Even Italy’s feared ’Ndrangheta is creating and curating a social media presence.

Other Developments

  • President Donald Trump signed an executive order (EO) that bans eight applications from the People’s Republic of China on much the same grounds as the EOs prohibiting TikTok and WeChat. If this EO is not rescinded by the Biden Administration, federal courts may block its implementation as has happened with the TikTok and WeChat EOs to date. Notably, courts have found that the Trump Administration exceeded its authority under the International Emergency Economic Powers Act (IEEPA), which may also be an issue in the proposed prohibition on Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office. Trump found:
    • that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the pace and pervasiveness of the spread in the United States of certain connected mobile and desktop applications and other software developed or controlled by persons in the People’s Republic of China, to include Hong Kong and Macau (China), continue to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by these Chinese connected software applications.
    • Trump directed that within 45 days of issuance of the EO, there shall be a prohibition on “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States, with persons that develop or control the following Chinese connected software applications, or with their subsidiaries, as those transactions and persons are identified by the Secretary of Commerce (Secretary) under subsection (e) of this section: Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office.”
  • The Government Accountability Office (GAO) issued its first statutorily required annual assessment of how well the United States Department of Defense (DOD) is managing its major information technology (IT) procurements. The DOD spent more than $36 billion of the $90 billion the federal government was provided for IT in FY 2020. The GAO was tasked with assessing how well the DOD did in using iterative development, managing costs and schedules, and implementing cybersecurity measures. The GAO found progress in the first two realms but a continued lag in deploying long-recommended best practices to ensure the security of the IT the DOD buys or builds. For this assessment, the GAO focused on 15 major IT acquisitions that qualify as administrative (i.e., “business”) and communications and information security (i.e., “non-business”) systems. While there were no explicit recommendations made, the GAO found:
    • Ten of the 15 selected major IT programs exceeded their planned schedules, with delays ranging from 1 month for the Marine Corps’ CAC2S Inc 1 to 5 years for the Air Force’s Defense Enterprise Accounting and Management System-Increment 1.
    • …eight of the 10 selected major IT programs that had tested their then-current technical performance targets reported having met all of their targets…. As of December 2019, four programs had not yet conducted testing activities—Army’s ACWS, Air Force’s AFIPPS Inc 1, Air Force’s MROi, and Navy ePS. Testing data for one program, Air Force’s ISPAN Inc 4, were classified.
    • …officials from the 15 selected major IT programs we reviewed reported using software development approaches that may help to limit risks to cost and schedule outcomes. For example, major business IT programs reported using COTS software. In addition, most programs reported using an iterative software development approach and using a minimum deployable product. With respect to cybersecurity practices, all the programs reported developing cybersecurity strategies, but programs reported mixed experiences with respect to conducting cybersecurity testing. Most programs reported using operational cybersecurity testing, but less than half reported conducting developmental cybersecurity testing. In addition, programs that reported conducting cybersecurity vulnerability assessments experienced fewer increases in planned program costs and fewer schedule delays. Programs also reported a variety of challenges associated with their software development and cybersecurity staff.
    • 14 of the 15 programs reported using an iterative software development approach which, according to leading practices, may help reduce cost growth and deliver better results to the customer. However, programs also reported using an older approach to software development, known as waterfall, which could introduce risk for program cost growth because of its linear and sequential phases of development that may be implemented over a longer period of time. Specifically, two programs reported using a waterfall approach in conjunction with an iterative approach, while one was solely using a waterfall approach.
    • With respect to cybersecurity, programs reported mixed implementation of specific practices, contributing to program risks that might impact cost and schedule outcomes. For example, all 15 programs reported developing cybersecurity strategies, which are intended to help ensure that programs are planning for and documenting cybersecurity risk management efforts.
    • In contrast, only eight of the 15 programs reported conducting cybersecurity vulnerability assessments—systematic examinations of an information system or product intended to, among other things, determine the adequacy of security measures and identify security deficiencies. These eight programs experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report using cybersecurity vulnerability assessments.
  • The United States (U.S.) Department of Energy gave notice of a “Prohibition Order prohibiting the acquisition, importation, transfer, or installation of specified bulk-power system (BPS) electric equipment that directly serves Critical Defense Facilities (CDFs), pursuant to Executive Order 13920.” (See here for analysis of the executive order.) The Department explained:
    • Executive Order No. 13920 of May 1, 2020, Securing the United States Bulk-Power System (85 FR 26595 (May 4, 2020)) (E.O. 13920) declares that threats by foreign adversaries to the security of the BPS constitute a national emergency. A current list of such adversaries is provided in a Request for Information (RFI), issued by the Department of Energy (Department or DOE) on July 8, 2020 seeking public input to aid in its implementation of E.O. 13920. The Department has reason to believe, as detailed below, that the government of the People’s Republic of China (PRC or China), one of the listed adversaries, is equipped and actively planning to undermine the BPS. The Department has thus determined that certain BPS electric equipment or programmable components subject to China’s ownership, control, or influence, constitute undue risk to the security of the BPS and to U.S. national security. The purpose of this Order is to prohibit the acquisition, importation, transfer, or subsequent installation of such BPS electric equipment or programmable components in certain sections of the BPS.
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) added the People’s Republic of China’s (PRC) Semiconductor Manufacturing International Corporation (SMIC) to its Entity List in a move intended to starve the company of key U.S. technology needed to manufacture high-end semiconductors. Therefore, any U.S. entity wishing to do business with SMIC will need a license, which the Trump Administration may be unlikely to grant. The Department of Commerce explained in its press release:
    • The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring U.S. exporters to apply for a license to sell to the company.  Items uniquely required to produce semiconductors at advanced technology nodes—10 nanometers or below—will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military-civil fusion efforts.
    • BIS also added more than sixty other entities to the Entity List for actions deemed contrary to the national security or foreign policy interest of the United States.  These include entities in China that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets.
    • As explained in the Federal Register notice:
      • SMIC is added to the Entity List as a result of China’s military-civil fusion (MCF) doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex. The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring exporters, reexporters, and in-country transferors of such technology to apply for a license to sell to the company. Items uniquely required to produce semiconductors at advanced technology nodes 10 nanometers or below will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military modernization efforts. This rule adds SMIC and the following ten entities related to SMIC: Semiconductor Manufacturing International (Beijing) Corporation; Semiconductor Manufacturing International (Tianjin) Corporation; Semiconductor Manufacturing International (Shenzhen) Corporation; SMIC Semiconductor Manufacturing (Shanghai) Co., Ltd.; SMIC Holdings Limited; Semiconductor Manufacturing South China Corporation; SMIC Northern Integrated Circuit Manufacturing (Beijing) Co., Ltd.; SMIC Hong Kong International Company Limited; SJ Semiconductor; and Ningbo Semiconductor International Corporation (NSI).
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) amended its Export Administration Regulations “by adding a new ‘Military End User’ (MEU) List, as well as the first tranche of 103 entities, which includes 58 Chinese and 45 Russian companies” per its press release. The Department asserted:
    • The U.S. Government has determined that these companies are ‘military end users’ for purposes of the ‘military end user’ control in the EAR that applies to specified items for exports, reexports, or transfers (in-country) to China, Russia, and Venezuela when such items are destined for a prohibited ‘military end user.’
  • The Australian Competition and Consumer Commission (ACCC) rolled out another piece of the Consumer Data Right (CDR) scheme under the Competition and Consumer Act 2010, specifically accreditation guidelines “to provide information and guidance to assist applicants with lodging a valid application to become an accredited person” to whom Australians may direct data holders to share their data. The ACCC explained:
    • The CDR aims to give consumers more access to and control over their personal data.
    • Being able to easily and efficiently share data will improve consumers’ ability to compare and switch between products and services and encourage competition between service providers, leading to more innovative products and services for consumers and the potential for lower prices.
    • Banking is the first sector to be brought into the CDR.
    • Accredited persons may receive a CDR consumer’s data from a data holder at the request and consent of the consumer. Any person, in Australia or overseas, who wishes to receive CDR data to provide products or services to consumers under the CDR regime, must be accredited
  • Australia’s government has released its “Data Availability and Transparency Bill 2020” that “establishes a new data sharing scheme for federal government data, underpinned by strong safeguards to mitigate risks and simplified processes to make it easier to manage data sharing requests” according to the summary provided in Parliament by the government’s point person. In the accompanying “Explanatory Memorandum,” the following summary was provided:
    • The Bill establishes a new data sharing scheme which will serve as a pathway and regulatory framework for sharing public sector data. ‘Sharing’ involves providing controlled access to data, as distinct from open release to the public.
    • To oversee the scheme and support best practice, the Bill creates a new independent regulator, the National Data Commissioner (the Commissioner). The Commissioner’s role is modelled on other regulators such as the Australian Information Commissioner, with whom the Commissioner will cooperate.
    • The data sharing scheme comprises the Bill and disallowable legislative instruments (regulations, Minister-made rules, and any data codes issued by the Commissioner). The Commissioner may also issue non-legislative guidelines that participating entities must have regard to, and may release other guidance as necessary.
    • Participants in the scheme are known as data scheme entities:
      • Data custodians are Commonwealth bodies that control public sector data, and have the right to deal with that data.
      • Accredited users are entities accredited by the Commissioner to access public sector data. To become accredited, entities must satisfy the security, privacy, infrastructure and governance requirements set out in the accreditation framework.
      • Accredited data service providers (ADSPs) are entities accredited by the Commissioner to perform data services such as data integration. Government agencies and users will be able to draw upon ADSPs’ expertise to help them to share and use data safely.
    • The Bill does not compel sharing. Data custodians are responsible for assessing each sharing request, and deciding whether to share their data if satisfied the risks can be managed.
    • The data sharing scheme contains robust safeguards to ensure sharing occurs in a consistent and transparent manner, in accordance with community expectations. The Bill authorises data custodians to share public sector data with accredited users, directly or through an ADSP, where:
      • Sharing is for a permitted purpose – government service delivery, informing government policy and programs, or research and development;
      • The data sharing principles have been applied to manage the risks of sharing; and
      • The terms of the arrangement are recorded in a data sharing agreement.
    • Where the above requirements are met, the Bill provides limited statutory authority to share public sector data, despite other Commonwealth, State and Territory laws that prevent sharing. This override of non-disclosure laws is ‘limited’ because it occurs only when the Bill’s requirements are met, and only to the extent necessary to facilitate sharing.
  • The United Kingdom’s Competition and Markets Authority (CMA) is asking interested parties to provide input on the proposed acquisition of a British semiconductor company by a United States (U.S.) company before it launches a formal investigation later this year. However, the CMA is limited to competition considerations, and any national security aspects of the proposed deal would need to be investigated by Prime Minister Boris Johnson’s government. The CMA stated:
    • US-based chip designer and producer NVIDIA Corporation (NVIDIA) plans to purchase the Intellectual Property Group business of UK-based Arm Limited (Arm) in a deal worth $40 billion. Arm develops and licenses intellectual property (IP) and software tools for chip designs. The products and services supplied by the companies support a wide range of applications used by businesses and consumers across the UK, including desktop computers and mobile devices, game consoles and vehicle computer systems.
    • CMA added:
      • The CMA will look at the deal’s possible effect on competition in the UK. The CMA is likely to consider whether, following the takeover, Arm has an incentive to withdraw, raise prices or reduce the quality of its IP licensing services to NVIDIA’s rivals.
  • The Israeli firm NSO Group has been accused by an entity associated with a British university of using real-time cell phone data to sell its COVID-19 contact tracing app, Fleming, in ways that may have broken the laws of a handful of nations. Forensic Architecture, a research agency based at Goldsmiths, University of London, argued:
    • In March 2020, with the rise of COVID-19, Israeli cyber-weapons manufacturer NSO Group launched a contact-tracing technology named ‘Fleming’. Two months later, a database belonging to NSO’s Fleming program was found unprotected online. It contained more than five hundred thousand datapoints for more than thirty thousand distinct mobile phones. NSO Group denied there was a security breach. Forensic Architecture received and analysed a sample of the exposed database, which suggested that the data was based on ‘real’ personal data belonging to unsuspecting civilians, putting their private information in risk
    • Forensic Architecture added:
      • Leaving a database with genuine location data unprotected is a serious violation of the applicable data protection laws. That a surveillance company with access to personal data could have overseen this breach is all the more concerning.
      • This could constitute a violation of the General Data Protection Regulation (GDPR) based on where the database was discovered as well as the laws of the nations where NSO Group allegedly collected personal data
    • The NSO Group denied the claims and was quoted by Tech Crunch:
      • “We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
      • “As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda, noting that “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

Further Reading, Other Developments, and Coming Events (14 December)

Further Reading

  • “Russian Hackers Broke Into Federal Agencies, U.S. Officials Suspect” By David Sanger — The New York Times; “Russian government hackers are behind a broad espionage campaign that has compromised U.S. agencies, including Treasury and Commerce” By Ellen Nakashima and Craig Timberg — The Washington Post; “Suspected Russian hackers spied on U.S. Treasury emails – sources” By Chris Bing — Reuters. Apparently, Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, has exploited SolarWinds’ software update system, which is used by many United States (U.S.) government agencies, Fortune 500 companies, and the ten largest U.S. telecommunications companies. Reportedly, APT29 (aka Cozy Bear) has had free rein in the email systems of the Departments of the Treasury and Commerce, among other possible victims. The hackers may have also accessed a range of other entities around the world using the same SolarWinds software. Moreover, these penetrations may be related to the recently announced theft of hacking tools that a private firm, FireEye, used to test clients’ systems.
  • “Hackers steal Pfizer/BioNTech COVID-19 vaccine data in Europe, companies say” By Jack Stubbs — Reuters. The European Union’s (EU) agency that oversees and approves medications has been hacked, and documents related to one of the new COVID-19 vaccines may have been stolen. The European Medicines Agency (EMA) was apparently penetrated, and materials related to Pfizer and BioNTech’s vaccine were exfiltrated. The scope of the theft is not yet known, but this is the latest of many attempts to hack the entities conducting research on the virus and potential vaccines.
  • “The AI Girlfriend Seducing China’s Lonely Men” By Zhang Wanqing — Sixth Tone. A chatbot powered by artificial intelligence that some men in the People’s Republic of China (PRC) are using extensively raises all sorts of ethical and privacy issues. Lonely people have turned to this AI technology and confided their deepest feelings, which are stored by the company. It seems only a matter of time until these data are mined for commercial value or hacked. Also, the chatbot has run afoul of the PRC’s censorship policies. Finally, is this a preview of the world to come, much like the 2013 film Her, in which humans have relationships with AI beings?
  • “YouTube will now remove videos disputing Joe Biden’s election victory” By Makena Kelly — The Verge. The Google subsidiary announced that because the safe harbor deadline has passed and a sufficient number of states have certified President-elect Joe Biden’s victory, the platform will begin taking down misleading election videos. This change in policy may have come about, in part, because of pressure from Democrats in Congress over what they see as Google’s lackluster efforts to find and remove lies, misinformation, and disinformation about the 2020 election.
  • “Lots of people are gunning for Google. Meet the man who might have the best shot.” By Emily Birnbaum — Protocol. Colorado Attorney General Phil Weiser may be uniquely qualified to lead state attorneys general in a second antitrust and anti-competition action against Google, given his background as a law professor steeped in antitrust and his service in the Department of Justice and White House during the Obama Administration.

Other Developments

  • Cybersecurity firm FireEye revealed it was “attacked by a highly sophisticated threat actor, one whose discipline, operational security, and techniques lead us to believe it was a state-sponsored attack” according to CEO Kevin Mandia. This hacking may be related to the vast penetration of United States (U.S.) government systems revealed over the weekend. Mandia stated FireEye has “found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security…[that] mimic the behavior of many cyber threat actors and enable FireEye to provide essential diagnostic security services to our customers.” Mandia claimed none of these tools were zero-day exploits. FireEye is “proactively releasing methods and means to detect the use of our stolen Red Team tools…[and] out of an abundance of caution, we have developed more than 300 countermeasures for our customers, and the community at large, to use in order to minimize the potential impact of the theft of these tools.”
    • Mandia added:
      • Consistent with a nation-state cyber-espionage effort, the attacker primarily sought information related to certain government customers. While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems. If we discover that customer information was taken, we will contact them directly.
      • Based on my 25 years in cyber security and responding to incidents, I’ve concluded we are witnessing an attack by a nation with top-tier offensive capabilities. This attack is different from the tens of thousands of incidents we have responded to throughout the years. The attackers tailored their world-class capabilities specifically to target and attack FireEye. They are highly trained in operational security and executed with discipline and focus. They operated clandestinely, using methods that counter security tools and forensic examination. They used a novel combination of techniques not witnessed by us or our partners in the past.
      • We are actively investigating in coordination with the Federal Bureau of Investigation and other key partners, including Microsoft. Their initial analysis supports our conclusion that this was the work of a highly sophisticated state-sponsored attacker utilizing novel techniques.    
  • The United States’ (U.S.) Department of Justice filed suit against Facebook for “tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the permanent labor certification (PERM) process.” The DOJ is asking for an injunction to stop Facebook from engaging in the alleged conduct, as well as civil penalties and damages for workers harmed by this conduct.
    • The DOJ contended:
      • The department’s lawsuit alleges that beginning no later than Jan. 1, 2018 and lasting until at least Sept. 18, 2019, Facebook employed tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the PERM process. Rather than conducting a genuine search for qualified and available U.S. workers for permanent positions sought by these temporary visa holders, Facebook reserved the positions for temporary visa holders because of their immigration status, according to the complaint. The complaint also alleges that Facebook sought to channel jobs to temporary visa holders at the expense of U.S. workers by failing to advertise those vacancies on its careers website, requiring applicants to apply by physical mail only, and refusing to consider any U.S. workers who applied for those positions. In contrast, Facebook’s usual hiring process relies on recruitment methods designed to encourage applications by advertising positions on its careers website, accepting electronic applications, and not pre-selecting candidates to be hired based on a candidate’s immigration status, according to the lawsuit.
      • In its investigation, the department determined that Facebook’s ineffective recruitment methods dissuaded U.S. workers from applying to its PERM positions. The department concluded that, during the relevant period, Facebook received zero or one U.S. worker applicants for 99.7 percent of its PERM positions, while comparable positions at Facebook that were advertised on its careers website during a similar time period typically attracted 100 or more applicants each. These U.S. workers were denied an opportunity to be considered for the jobs Facebook sought to channel to temporary visa holders, according to the lawsuit. 
      • Not only do Facebook’s alleged practices discriminate against U.S. workers, they have adverse consequences on temporary visa holders by creating an employment relationship that is not on equal terms. An employer that engages in the practices alleged in the lawsuit against Facebook can expect more temporary visa holders to apply for positions and increased retention post-hire. Such temporary visa holders often have limited job mobility and thus are likely to remain with their company until they can adjust status, which for some can be decades.
      • The United States’ complaint seeks civil penalties, back pay on behalf of U.S. workers denied employment at Facebook due to the alleged discrimination in favor of temporary visa holders, and other relief to ensure Facebook stops the alleged violations in the future. According to the lawsuit, and based on the department’s nearly two-year investigation, Facebook’s discrimination against U.S. workers was intentional, widespread, and in violation of a provision of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b(a)(1), that the Department of Justice’s Civil Rights Division enforces. 
  • A trio of consumer protection regulators took the lead in reaching an agreement with Apple to add “a new section to each app’s product page in its App Store, containing key information about the data the app collects and an accessible summary of the most important information from the privacy policy.” The United Kingdom’s (UK) Competition and Markets Authority (CMA), the Netherlands Authority for Consumers and Markets, and the Norwegian Consumer Authority led the effort, which is part of “ongoing work from the International Consumer Protection and Enforcement Network (ICPEN), involving 27 of its consumer authority members across the world.” The three agencies explained:
    • Consumer protection authorities, including the CMA, became concerned that people were not being given clear information on how their personal data would be used before choosing an app, including on whether the app developer would share their personal data with a third party. Without this information, consumers are unable to compare and choose apps based on how they use personal data.
  • Australia’s Council of Financial Regulators (CFR) has released a Cyber Operational Resilience Intelligence-led Exercises (CORIE) framework “to test and demonstrate the cyber maturity and resilience of institutions within the Australian financial services industry.”

Coming Events

  • On 15 December, the Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “The Role of Private Agreements and Existing Technology in Curbing Online Piracy” with these witnesses:
    • Panel I
      • Ms. Ruth Vitale, Chief Executive Officer, CreativeFuture
      • Mr. Probir Mehta, Head of Global Intellectual Property and Trade Policy, Facebook, Inc.
      • Mr. Mitch Glazier, Chairman and CEO, Recording Industry Association of America
      • Mr. Joshua Lamel, Executive Director, Re:Create
    • Panel II
      • Ms. Katherine Oyama, Global Director of Business Public Policy, YouTube
      • Mr. Keith Kupferschmid, Chief Executive Officer, Copyright Alliance
      • Mr. Noah Becker, President and Co-Founder, AdRev
      • Mr. Dean S. Marks, Executive Director and Legal Counsel, Coalition for Online Accountability
  • The Senate Armed Services Committee’s Cybersecurity Subcommittee will hold a closed briefing on Department of Defense Cyber Operations on 15 December with these witnesses:
    • Mr. Thomas C. Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy, Office of the Under Secretary of Defense for Policy
    • Mr. Jeffrey R. Jones, Vice Director, Command, Control, Communications and Computers/Cyber, Joint Staff, J-6
    • Ms. Katherine E. Arrington, Chief Information Security Officer for the Assistant Secretary of Defense for Acquisition, Office of the Under Secretary of Defense for Acquisition and Sustainment
    • Rear Admiral Jeffrey Czerewko, United States Navy, Deputy Director, Global Operations, J39, J3, Joint Staff
  • The Senate Banking, Housing, and Urban Affairs Committee’s Economic Policy Subcommittee will conduct a hearing titled “US-China: Winning the Economic Competition, Part II” on 16 December with these witnesses:
    • The Honorable Will Hurd, Member, United States House of Representatives;
    • Derek Scissors, Resident Scholar, American Enterprise Institute;
    • Melanie M. Hart, Ph.D., Senior Fellow and Director for China Policy, Center for American Progress; and
    • Roy Houseman, Legislative Director, United Steelworkers (USW).
  • On 17 December the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force will convene for a virtual event, “Partnership in Action: Driving Supply Chain Security.”

Further Reading, Other Developments, and Coming Events (10 December)

Further Reading

  • “Social media superspreaders: Why Instagram, not Facebook, will be the real battleground for COVID-19 vaccine misinformation” By Isobel Asher Hamilton — Business Insider. According to one group, COVID-19 anti-vaccination lies and misinformation are proliferating on Instagram despite efforts by its parent company, Facebook, to find and remove such content. There has been dramatic growth in such content on Instagram, and Facebook seems to be applying its COVID-19 standards more loosely there. In fact, some people kicked off Facebook for violating that platform’s standards on COVID-19 are still on Instagram spreading the same lies, misinformation, and disinformation. For example, British anti-vaccination figure David Icke was removed from Facebook for claiming that COVID-19 was caused by or related to 5G, but he retains a significant following on Instagram.
  • “‘Grey area’: China’s trolling drives home reality of social media war” By Chris Zappone — The Sydney Morning Herald. The same concept that is fueling aggressive cyber activity at a level below outright war has spread to diplomacy. The People’s Republic of China (PRC) has been waging “gray” social media campaigns against a number of Western nations, including Australia, mainly by propagating lies and misinformation. The most recent example is the spreading of a fake photo of an Australian soldier appearing to kill an Afghan child. This false material seems designed to distract from the real issues between the two nations arising from clashing policies on trade and human rights. The PRC’s activities do not appear to violate Australia’s foreign interference laws and seem to have left Canberra at a loss as to how to respond effectively.
  • “Facebook to start policing anti-Black hate speech more aggressively than anti-White comments, documents show” By Elizabeth Dwoskin, Nitasha Tiku and Heather Kelly — The Washington Post. Facebook will apparently seek to revamp its algorithms to focus on the types of hate speech that have traditionally targeted women and minority groups. Up until now, all attacks were treated equally, so that something like “white people suck” would be handled the same way as anti-Semitic content. Facebook has resisted changes for years even though experts and civil rights groups made the case that people of color, women, and LGBTI people endure far more abuse online. There is probably no connection between Facebook’s more aggressive content moderation policies and the advent of a new administration in Washington more receptive to claims that social media platforms allow the abuse of these people.
  • “How Joe Biden’s Digital Team Tamed the MAGA Internet” By Kevin Roose — The New York Times. Take this piece with a block of salt. The “why they won” articles are almost always rife with fallacies, including the rationale that if a candidate won, his or her strategy must have worked. It is not clear that the Biden Campaign’s online messaging strategy of being nice and emphasizing positive values actually beat the Trump Campaign’s “Death Star” so much as the President’s mishandling of the pandemic response and the cratering of the economy did him in.
  • “Coronavirus Apps Show Promise but Prove a Tough Sell” By Jennifer Valentino-DeVries — The New York Times. It appears the intersection of concerns about private and public sector surveillance from two very different groups has worked to keep down adoption rates of smartphone COVID tracking apps in the United States. Some people are wary of private sector practices of hoovering up as much data as possible, while others are concerned about the government’s surveillance activities. Consequently, many are shunning Google and Apple’s COVID contact tracing apps, to the surprise of government, industry, and academia. A pair of studies show resistance to downloading or using such apps even if there are very strong privacy safeguards. This result may well be a foreseeable outcome of U.S. policies that have allowed companies and the security services to collect and use vast quantities of personal information.
  • “UAE target of cyber attacks after Israel deal, official says” — Reuters. A top cybersecurity official in the United Arab Emirates claimed his nation’s financial services industry was targeted for cyber attack and implied Iran and affiliated hackers were responsible.

Other Developments

  • President-elect Joe Biden announced his intention to nominate California Attorney General Xavier Becerra to serve as the next Secretary of Health and Human Services (HHS). If Becerra is confirmed by the Senate, California Governor Gavin Newsom would name his successor, who would need to continue enforcement of the “California Consumer Privacy Act” (CCPA) (AB 375) while also working towards the transition to the “California Privacy Rights Act” (Proposition 24) approved by California voters last month. The new statute establishes the California Privacy Protection Agency, which will assume the Attorney General’s responsibilities regarding the enforcement of California’s privacy laws. However, Becerra’s successor may play a pivotal role in the transition between the two regulators and the creation of the new regulations needed to implement Proposition 24.
  • The Senate approved the nomination of Nathan Simington to be a Commissioner of the Federal Communications Commission (FCC) by a 49-46 vote. Once FCC Chair Ajit Pai steps down, the agency will be left with two Democratic and two Republican Commissioners, pending the Biden Administration’s nominee to fill Pai’s spot. If the Senate stays Republican, it is possible the calculation could be made that a deadlocked FCC is better than a Democratic agency that could revive net neutrality rules among other Democratic and progressive policies. Consequently, Simington’s confirmation may be the first step toward an FCC unable to develop substantive policy.
  • Another federal court has broadened the injunction against the Trump Administration’s ban on TikTok to encompass the entirety of the Department of Commerce’s September order meant to stop the usage of the application in the United States (U.S.). It is unclear whether the Trump Administration will appeal, and if it should, whether a court would decide the case before the Biden Administration begins in mid-January. The United States District Court for the District of Columbia found that TikTok “established that the government likely exceeded IEEPA’s express limitations as part of an agency action that was arbitrary and capricious” and would likely suffer irreparable harm, making an injunction an appropriate remedy.
  • The United States’ National Security Agency (NSA) “released a Cybersecurity Advisory on Russian state-sponsored actors exploiting CVE-2020-4006, a command-injection vulnerability in VMware Workspace One Access, Access Connector, Identity Manager, and Identity Manager Connector” and provided “mitigation and detection guidance.”
  • The United States (U.S.) Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issued a joint alert, warning that U.S. think tanks are being targeted by “persistent continued cyber intrusions by advanced persistent threat (APT) actors.” The agencies stated “[t]his malicious activity is often, but not exclusively, directed at individuals and organizations that focus on international affairs or national security policy.” CISA and the FBI stated their “guidance may assist U.S. think tanks in developing network defense procedures to prevent or rapidly detect these attacks.” The agencies added:
    • APT actors have relied on multiple avenues for initial access. These have included low-effort capabilities such as spearphishing emails and third-party message services directed at both corporate and personal accounts, as well as exploiting vulnerable web-facing devices and remote connection capabilities. Increased telework during the COVID-19 pandemic has expanded workforce reliance on remote connectivity, affording malicious actors more opportunities to exploit those connections and to blend in with increased traffic. Attackers may leverage virtual private networks (VPNs) and other remote work tools to gain initial access or persistence on a victim’s network. When successful, these low-effort, high-reward approaches allow threat actors to steal sensitive information, acquire user credentials, and gain persistent access to victim networks.
    • Given the importance that think tanks can have in shaping U.S. policy, CISA and FBI urge individuals and organizations in the international affairs and national security sectors to immediately adopt a heightened state of awareness and implement the critical steps listed in the Mitigations section of this Advisory.
  • A group of Democratic United States Senators have written the CEO of Alphabet and Google about its advertising policies and how its platforms may have been used to spread misinformation and contribute to voter suppression. Thus far, most of the scrutiny about the 2020 election and content moderation policy has fallen on Facebook and Twitter even though Google-owned YouTube has been flagged as containing the same amount of misinformation. Senators Amy Klobuchar (D-MN) and Mark Warner (D-VA) led the effort and expressed “serious concerns regarding recent reports that Google is profiting from the sale of ads spreading election-related disinformation” to Alphabet and Google CEO Sundar Pichai. Klobuchar, Warner, and their colleagues asserted:
    • Google is also helping organizations spreading election-related disinformation to raise revenue by placing ads on their websites. While Google has some policies in place to prevent the spread of election misinformation, they are not properly enforced and are inadequate. We urge you to immediately strengthen and improve enforcement of your policies on election-related disinformation and voter suppression, reject all ads spreading election-related disinformation, and stop providing advertising services on sites that spread election-related disinformation.
    • …a recent study by the Global Disinformation Index (GDI) found that Google serves ads on 145 out of 200 websites GDI examined that publish disinformation.
    • Similarly, a recent report from the Center for Countering Digital Hate (CCDH) found that Google has been placing ads on websites publishing disinformation designed to undermine elections. In examining just six websites publishing election-related disinformation, CCDH estimates that they receive 40 million visits a month, generating revenue for these sites of up to $3.4 million annually from displaying Google ads. In addition, Google receives $1.6 million from the advertisers’ payments annually.  These sites published stories ahead of the 2020 general election that contained disinformation alleging that voting by mail was not secure, that mail-in voting was being introduced to “steal the election,” and that election officials were “discarding mail ballots.” 
  • A bipartisan group of United States Senators on one committee are urging Congressional leadership to include funding to help telecommunications companies remove and replace Huawei and ZTE equipment and to aid the Federal Communications Commission (FCC) in drafting accurate maps of broadband service in the United States (U.S.). Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS) and a number of his colleagues wrote the leadership of both the Senate and House and argued:
    • we urge you to provide full funding for Public Law 116-124, the Secure and Trusted Communications Networks Act, and Public Law 116-130, the Broadband DATA Act.   
    • Closing the digital divide and winning the race to 5G are critical to America’s economic prosperity and global leadership in technology. However, our ability to connect all Americans and provide access to next-generation technology will depend in large part on the security of our communications infrastructure. The Secure and Trusted Communications Networks Act (“rip and replace”) created a program to help small, rural telecommunications operators remove equipment posing a security threat to domestic networks and replace it with equipment from trusted providers. This is a national security imperative. Fully funding this program is essential to protecting the integrity of our communications infrastructure and the future viability of our digital economy at large.
    • In addition to safeguarding the security of the nation’s communications systems, developing accurate broadband maps is also critically important. The United States faces a persistent digital divide, and closing this divide requires accurate maps that show where broadband is available and where it is not. Current maps overstate broadband availability, which prevents many underserved communities, particularly in rural areas, from receiving the funds needed to build or expand broadband networks to millions of unconnected Americans. Fully funding the Broadband DATA Act will ensure more accurate broadband maps and better stewardship over the millions of dollars the federal government awards each year to support broadband deployment. Without these maps, the government risks overbuilding existing networks, duplicating funding already provided, and leaving communities unserved.  
  • The Government Accountability Office (GAO) released an assessment of 5G policy options that “discusses (1) how the performance goals and expected uses are to be realized in U.S. 5G wireless networks; (2) the challenges that could affect the performance or usage of 5G wireless networks in the U.S.; and (3) policy options to address these challenges.” The report had been requested by the chairs and ranking members of the House Armed Services, Senate Armed Services, Senate Intelligence, and House Intelligence Committees along with other Members. The GAO stated “[w]hile 5G is expected to deliver significantly improved network performance and greater capabilities, challenges may hinder the performance or usage of 5G technologies in the U.S. We grouped the challenges into the following four categories:
    • availability and efficient use of spectrum
    • security of 5G networks
    • concerns over data privacy
    • concerns over possible health effects
    • The GAO presented the following policy options along with opportunities and considerations for each:
      • Spectrum-Sharing Technologies Opportunities:
        • Could allow for more efficient use of the limited spectrum available for 5G and future generations of wireless networks.
        • It may be possible to leverage existing 5G testbeds for testing the spectrum sharing technologies developed through applied research.
      • Spectrum-Sharing Technologies Considerations:
        • Research and development is costly, must be coordinated and administered, and its potential benefits are uncertain. Identifying a funding source, setting up the funding mechanism, or determining which existing funding streams to reallocate will require detailed analysis.
      • Coordinated Cybersecurity Monitoring Opportunities:
        • A coordinated monitoring program would help ensure the entire wireless ecosystem stays knowledgeable about evolving threats, in close to real time; identify cybersecurity risks; and allow stakeholders to act rapidly in response to emerging threats or actual network attacks.
      • Coordinated Cybersecurity Monitoring Considerations:
        • Carriers may not be comfortable reporting incidents or vulnerabilities, and determinations would need to be made about what information is disclosed and how the information will be used and reported.
      • Cybersecurity Requirements Opportunities
        • Taking these steps could produce a more secure network. Without a baseline set of security requirements the implementation of network security practices is likely to be piecemeal and inconsistent.
        • Using existing protocols or best practices may decrease the time and cost of developing and implementing requirements.
      • Cybersecurity Requirements Considerations
        • Adopting network security requirements would be challenging, in part because defining and implementing the requirements would have to be done on an application-specific basis rather than as a one-size-fits-all approach.
        • Designing a system to certify network components would be costly and would require a centralized entity, be it industry-led or government-led.
      • Privacy Practices Opportunities
        • Development and adoption of uniform privacy practices would benefit from existing privacy practices that have been implemented by states, other countries, or that have been developed by federal agencies or other organizations.
      • Privacy Practices Considerations
        • Privacy practices come with costs, and policymakers would need to balance the need for privacy with the direct and indirect costs of implementing privacy requirements. Imposing requirements can be burdensome, especially for smaller entities.
      • High-band Research Opportunities
        • Could result in improved statistical modeling of antenna characteristics and more accurately representing propagation characteristics.
        • Could result in improved understanding of any possible health effects from long-term radio frequency exposure to high-band emissions.
      • High-band Research Considerations
        • Research and development is costly and must be coordinated and administered, and its potential benefits are uncertain. Policymakers will need to identify a funding source or determine which existing funding streams to reallocate.

Coming Events

  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up on 10 December.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

Further Reading, Other Developments, and Coming Events (9 December)

Further Reading

  • “Secret Amazon Reports Expose the Company’s Surveillance of Labor and Environmental Groups” By Lauren Kaori Gurley — Vice’s Motherboard. Yet another article by Vice drawing back the curtain on Amazon’s labor practices, especially its apparently fervent desire to stop union organizing. This piece shines a light on the company’s Global Security Operations Center, which tracks labor organizing and union activities among Amazon’s workers and monitors environmental and human rights groups on social media. The company has even hired Pinkerton operatives to surveil its warehouse employees. Although the focus is on Europe because the leaked emails on which the story is based pertain to activities on that continent, there is no reason to expect the same tactics are not being used elsewhere. Moreover, the company may be violating the much stricter laws in Europe protecting workers and union activities.
  • “Cyber Command deployed personnel to Estonia to protect elections against Russian threat” By Shannon Vavra — cyberscoop. It was recently revealed that personnel from the United States (U.S.) Cyber Command were deployed to Estonia to work with that country’s Defense Forces Cyber Command to fend off potential Russian attacks during the U.S. election. This follows another recent “hunt forward” mission for Cyber Command in Montenegro, another nation on the “frontline” of Russian hacking activities. Whether this has any effect beyond building trust and capacity between nations opposed to state-sponsored hacking and disinformation is unclear.
  • “How China Is Buying Up the West’s High-Tech Sector” By Elizabeth Braw — Foreign Policy. This piece by a fellow at the right-wing American Enterprise Institute (AEI) makes the case that reviewing and potentially banning direct foreign investment by the People’s Republic of China (PRC) in the United States (U.S.), the European Union (EU), and European nations is probably not cutting off PRC access to cutting-edge technology. PRC entities are investing directly or indirectly as limited partners in venture capital firms and are probably still gaining access to new technology. For example, an entity associated with the University of Cambridge is working with Huawei on a private 5G wireless network even though London is advancing legislation and policy to ban the PRC giant from United Kingdom (UK) networks. The author advocates for expanding the regulation of foreign investment to include limited partnerships and other structures that are apparently allowing the PRC to continue investing in and reaping the benefits of Western venture capital. There is hope, however, as a number of Western nations are starting government-funded venture capital firms to fund promising technology.
  • “Twitter expands hate speech rules to include race, ethnicity” By Katie Paul — Reuters. The social media platform announced that it is “further expanding our hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.” A human rights group, Color of Change, which was part of a coalition pressuring Twitter and other platforms, called the change “essential concessions” but took issue with the timing, stating it would have had more impact had it been made before the election. A spokesperson added “[t]he jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users…[and] [v]oid of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings.”
  • “White House drafts executive order that could restrict global cloud computing companies” By Steven Overly and Eric Geller — Politico. The Trump Administration may make another foray into trying to keep risky foreign entities out of key United States (U.S.) infrastructure; the draft order would reportedly bar U.S. cloud companies like Microsoft and Amazon from partnering with foreign companies or entities that pose a risk of using these U.S. systems to conduct cyber-attacks. This seems like another attempt to strike at the People’s Republic of China’s (PRC) technology firms. If the order is issued, it remains to be seen how a Biden Administration would use or implement it, given that there is not enough time for the Trump Administration to see such a directive through to the end. In any event, one can be sure that tech giants have already begun pressing both the outgoing and incoming administrations, and most likely Congress as well, against any such order.

Other Developments

  • A bipartisan group of Senators and Representatives issued the framework for a $908 billion COVID-19 stimulus package that is reportedly the subject of serious negotiations in Congress. The framework includes $10 billion for broadband but provides no detail on how these funds would be distributed.
  • The Australian Competition & Consumer Commission (ACCC) announced the signing of the Australian Product Safety Pledge, “a voluntary initiative that commits its signatories to a range of safety related responsibilities that go beyond what is legally required of them” in e-commerce. The ACCC stated “AliExpress, Amazon Australia, Catch.com.au and eBay Australia, who together account for a significant share of online sales in Australia, are the first businesses to sign the pledge, signifying their commitment to consumers’ safety through a range of commitments such as removing unsafe product listings within two days of being notified by the ACCC.” The pledge consists of 12 commitments:
    • Regularly consult the Product Safety Australia website and other relevant sources for information on recalled/unsafe products. Take appropriate action[1] on these products once they are identified.
    • Provide a dedicated contact point(s) for Australian regulatory authorities to notify and request take-downs of recalled/unsafe products.
    • Remove identified unsafe product listings within two business days of the dedicated contact point(s) receiving a take-down request from Australian regulatory authorities. Inform authorities on the action that has been taken and any relevant outcomes.
    • Cooperate with Australian regulatory authorities in identifying, as far as possible, the supply chain of unsafe products by responding to data/information requests within ten business days should relevant information not be publicly available.
    • Have an internal mechanism for processing data/information requests and take-downs of unsafe products.
    • Provide a clear pathway for consumers to notify the pledge signatory directly of unsafe product listings. Such notifications are treated according to the signatory’s processes and where responses to consumers are appropriate, they are given within five business days.
    • Implement measures to facilitate sellers’ compliance with Australian product safety laws. Share information with sellers on compliance training/guidance, including a link to the ACCC’s Selling online page on the Product Safety Australia website.
    • Cooperate with Australian regulatory authorities and sellers to inform consumers[2] about relevant recalls or corrective actions on unsafe products.
    • Set up processes aimed at preventing or restricting the sale of banned, non-compliant and recalled products as appropriate.
    • Put in place reasonable measures to act against repeat offenders selling unsafe products, including in cooperation with Australian regulatory authorities.
    • Take measures aimed at preventing the reappearance of unsafe product listings already removed.
    • Explore the potential use of new technologies and innovation to improve the detection and removal of unsafe products.
  • Senator Ron Wyden (D-OR) and Representative Lauren Underwood (D-IL) introduced “The Federal Cybersecurity Oversight Act” (S.4912), which would amend the “Federal Cybersecurity Enhancement Act of 2015” (P.L. 114-113) to restrict the use of exceptions to longstanding requirements that federal agencies use measures such as multi-factor authentication and encryption. Currently, federal agencies exempt themselves on a number of grounds. Wyden and Underwood’s bill would tighten this process by making exceptions valid for only a year at a time and by requiring that the Office of Management and Budget (OMB) approve each exception. In a fact sheet, they claimed:
    • [T]he bill requires the Director of the Office of Management and Budget to approve all waivers, which can currently be self-issued by the head of the agency. To request a waiver, the agency head will have to certify that:
      • It would be excessively burdensome to implement the particular requirement;
      • The particular requirement is not necessary to secure the agency system and data; and
      • The agency has taken all necessary steps to secure the agency system and data.
  • The Government Accountability Office (GAO) looked at the United States’ (U.S.) longstanding efforts to buy common services and equipment in bulk, known as Category Management. The GAO found progress but saw room for considerable improvement. GAO noted:
    • Since 2016, the Office of Management and Budget (OMB) has led efforts to improve how agencies buy these products and services through the category management initiative, which directs agencies across the government to buy more like a single enterprise. OMB has reported the federal government has saved $27.3 billion in 3 years through category management.
  • The GAO concluded:
    • The category management initiative has saved the federal government billions of dollars, and in some instances, enhanced agencies’ mission capabilities. However, the initiative has opportunities to accomplish much more. To date, OMB has focused primarily on contracting aspects of the initiative, and still has several opportunities to help agencies improve how they define their requirements for common products and services. OMB can take concrete steps to improve how agencies define these requirements through more robust guidance and training, changes to leadership delegations and cost savings reporting, and the development of additional metrics to measure implementation of the initiative.
    • Additionally, OMB can lead the development of a coordinated strategy that addresses government-wide data challenges hindering agencies’ efforts to assess their spending and identify prices paid for common products and services.
    • Finally, OMB can tailor additional training courses to provide more relevant information to agency personnel responsible for small business matters, and improve public reporting about the impact of category management on small businesses. In doing so, OMB can enhance the quality of the information provided to the small business community and policymakers. Through these efforts to further advance the category management initiative, OMB can help federal agencies accomplish their missions more effectively while also being better stewards of taxpayer dollars.
    • The GAO made the following recommendations:
      • The Director of the Office of Management and Budget should emphasize in its overarching category management guidance the importance of effectively defining requirements for common products and services when implementing the category management initiative. (Recommendation 1)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the General Services Administration’s Category Management Program Management Office, and other appropriate offices, to develop additional tailored training for Senior Accountable Officials and agency personnel who manage requirements for common products and services. (Recommendation 2)
      • The Director of the Office of Management and Budget should account for agencies’ training needs, including training needs for personnel who define requirements for common products and services, when setting category management training goals. (Recommendation 3)
      • The Director of the Office of Management and Budget should ensure that designated Senior Accountable Officials have the authority necessary to hold personnel accountable for defining requirements for common products and services as well as contracting activities. (Recommendation 4)
      • The Director of the Office of Management and Budget should report cost savings from the category management initiative by agency. (Recommendation 5)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the Performance Improvement Council to establish additional performance metrics for the category management initiative that are related to agency requirements. (Recommendation 6)
      • The Director of the Office of Management and Budget should, in coordination with the Category Management Leadership Council and the Chief Data Officer Council, establish a strategic plan to coordinate agencies’ responses to government-wide data challenges hindering implementation of the category management initiative, including challenges involving prices-paid and spending data. (Recommendation 7)
      • The Director of the Office of Management and Budget should work with the General Services Administration’s Category Management Program Management Office and other organizations, as appropriate, to develop additional tailored training for Office of Small Disadvantaged Business Utilization personnel that emphasizes information about small business opportunities under the category management initiative. (Recommendation 8)
      • The Director of the Office of Management and Budget should update its methodology for calculating potentially duplicative contract reductions to strengthen the linkage between category management actions and the number of contracts eliminated. (Recommendation 9)
      • The Director of the Office of Management and Budget should identify the time frames covered by underlying data when reporting on how duplicative contract reductions have impacted small businesses. (Recommendation 10)
  • The chair and ranking member of the House Commerce Committee are calling on the Federal Communications Commission (FCC) to take preparatory steps before Congress provides funding to telecommunications providers to remove and replace Huawei and ZTE equipment. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and Ranking Member Greg Walden (R-OR) noted the “Secure and Trusted Communications Networks Act” (P.L. 116-124):
    • provides the Federal Communications Commission (FCC) with several new authorities to secure our communications supply chain, including the establishment and administration of the Secure and Trusted Communications Networks Reimbursement Program (Program). Through this Program, small communications providers may seek reimbursement for the cost of removing and replacing suspect network equipment. This funding is critical because some small and rural communications providers would not otherwise be able to afford these upgrades. Among the responsibilities entrusted to the FCC to carry out the Program is the development of a list of suggested replacements for suspect equipment, including physical and virtual communications equipment, application and management software, and services.
    • Pallone and Walden conceded that Congress has not yet provided funds but asked the FCC to take some steps:
      • First, the FCC should develop and release the list of eligible replacement equipment, software, and services as soon as possible. Second, the agency should reassure companies that they will not jeopardize their eligibility for reimbursement under the Program just because replacement equipment purchases were made before the Program is funded, assuming other eligibility criteria are met.
  • The Office of Special Counsel (OSC) wrote one of the whistleblowers at the United States Agency for Global Media (USAGM) and indicated it has ordered the head of USAGM to investigate the claims of malfeasance at the agency. The OSC stated:
    • On December 2, 2020, after reviewing the information you submitted, we directed the Chief Executive Officer (CEO) of USAGM to order an investigation into the following allegations and report back to OSC pursuant to 5 U.S.C. § 1213(c). Allegations to be investigated include that, since June 2020, USAGM:
      • Repeatedly violated the Voice of America (VOA) firewall—the law that protects VOA journalists’ “professional independence and integrity”;
      • Engaged in gross mismanagement and abuse of authority by:
        • Terminating the Presidents of each USAGM-funded network— Radio Free Asia (RFA), Radio Free Europe/Radio Liberty (RFE/RL), the Middle East Broadcasting Networks (MBN), and the Office of Cuba Broadcasting (OCB)—as well as the President and the CEO of the Open Technology Fund (OTF);
        • Dismissing the bipartisan board members that governed the USAGM- funded networks, replacing those board members with largely political appointees, and designating the USAGM CEO as Chairman;
        • Revoking all authority from various members of USAGM’s Senior Executive Service (SES) and reassigning those authorities to political appointees outside of the relevant offices;
        • Removing the VOA Editor for News Standards and Best Practices—a central figure in the VOA editorial standards process and a critical component of the VOA firewall—from his position and leaving that position vacant;
        • Similarly removing the Executive Editor of RFA;
        • Suspending the security clearances of six of USAGM’s ten SES members and placing them on administrative leave; and
        • Prohibiting several offices critical to USAGM’s mission—including the Offices of General Counsel, Chief Strategy, and Congressional and Public Affairs—from communicating with outside parties without the front office’s express knowledge and consent;
      • Improperly froze all agency hiring, contracting, and Information Technology migrations, and either refused to approve such decisions or delayed approval until the outside reputation and/or continuity of agency or network operations, and at times safety of staff, were threatened;
      • Illegally repurposed, and pressured career staff to illegally repurpose, congressionally appropriated funds and programs without notifying Congress; and
      • Refused to authorize the renewal of the visas of non-U.S. citizen journalists working for the agency, endangering both the continuity of agency operations and those individuals’ safety.

Coming Events

  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up on 10 December.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Makalu from Pixabay

Further Reading, Other Developments, and Coming Events (7 December)

Further Reading

  • Facebook steps up campaign to ban false information about coronavirus vaccines” By Elizabeth Dwoskin — The Washington Post. In its latest step to find and remove lies, misinformation, and disinformation, the social media giant is now committing to removing and blocking untrue material about COVID-19 vaccines, especially from the anti-vaccine community. Will the next step be to take on anti-vaccination proponents generally?
  • Comcast’s 1.2 TB data cap seems like a ton of data—until you factor in remote work” By Rob Pegoraro — Fast Company. Despite many adults and children working and learning from home, Comcast is reimposing a 1.2 terabyte monthly data limit on home internet plans. That sounds like quite a lot until you factor in video meetings, streaming, and the like. So far, other providers have not set a cap.
  • Google’s star AI ethics researcher, one of a few Black women in the field, says she was fired for a critical email” By Drew Harwell and Nitasha Tiku — The Washington Post. Timnit Gebru, a top-flight artificial intelligence (AI) computer scientist, says she was fired after questioning Google’s review of a paper, likely critical of the company’s AI projects, that she wanted to present at an AI conference. Google claims she resigned. She has long been an advocate for women and minorities in tech and AI, and her ouster will likely only increase scrutiny of, and questions about, Google’s commitment to diversity and to an ethical approach to the development and deployment of AI. It will also probably deepen employee disenchantment with the company, following earlier protests over Google’s involvement with the United States Department of Defense’s Project Maven and its hiring of former United States Department of Homeland Security chief of staff Miles Taylor, who was involved with the policies that resulted in caging children and separating families at the southern border of the United States.
  • Humans Can Help Clean Up Facebook and Twitter” By Greg Bensinger — The New York Times. In this opinion piece, the argument is made that if social media platforms redeployed their human moderators to the accounts that violate terms of service most frequently (e.g., President Donald Trump) and more aggressively labeled and removed untrue or inflammatory content, they would have a greater impact on lies, misinformation, and disinformation.
  • Showdown looms over digital services tax” By Ashley Gold — Axios. Because the Organization for Economic Cooperation and Development (OECD) has not reached a deal on digital services taxes, a number of the United States’ (U.S.) allies could move forward with taxes on U.S. multinationals like Amazon, Google, and Apple. The Trump Administration has taken an adversarial position, threatening to retaliate against countries like France that have enacted a tax whose collection has been suspended during the OECD negotiations. The U.S. also withdrew from the talks. It is probable the Biden Administration will be more willing to work in a multilateral fashion and may strike a deal on an issue that is not going away, as the United Kingdom, Italy, and Canada also have plans for a digital tax.
  • Trump’s threat to veto defense bill over social-media protections is heading to a showdown with Congress” By Karoun Demirjian and Tony Romm — The Washington Post. I suppose I should mention the President’s demand that the FY 2021 National Defense Authorization Act (NDAA) contain a repeal of 47 U.S.C. 230 (Section 230 of the Communications Act), which came at the eleventh hour and fifty-ninth minute of negotiations on a final version of the bill. Via Twitter, Donald Trump threatened to veto the bill, which has been passed annually for decades. Republicans were not having it, however, even those who agree with Trump’s desire to remove liability protection for technology companies. And yet, if Trump continues to insist on a repeal, Republicans may find themselves in a bind, and the bill could conceivably get pulled until President-elect Joe Biden is sworn in. On the other hand, Trump’s veto threats over renaming military bases currently bearing the names of Confederate figures have not been renewed even though the final version of the bill contains language instituting a process to do just that.

Other Developments

  • The Senate Judiciary Committee held over its most recent bill to narrow 47 U.S.C. 230 (Section 230 of the Communications Act), which provides liability protection for technology companies for third-party material posted on their platforms and for any decisions to edit, alter, or remove such content. The committee opted to hold the “Online Content Policy Modernization Act” (S.4632), which may mean the bill’s chances of making it to the Senate floor are low. What’s more, even if the Senate passes Section 230 legislation, it is not clear there will be sufficient agreement with Democrats in the House to get a final bill to the President before the end of this Congress. On 1 October, the committee also decided to hold over the bill to try to reconcile the fifteen amendments submitted for consideration. The Committee could soon meet again to formally mark up and report out this legislation.
    • At the earlier hearing, Chair Lindsey Graham (R-SC) submitted an amendment revising the bill’s reforms to Section 230 that incorporates some of the below amendments but includes new language. For example, the bill includes a definition of “good faith,” a term not currently defined in Section 230. This term would be construed as a platform taking down or restricting content only according to its publicly available terms of service, not as a pretext, and equally to all similarly situated content. Moreover, good faith would require alerting the user and giving him or her an opportunity to respond, subject to certain exceptions. The amendment also makes clear that certain existing means of suing are still available to users (e.g., suing for breach of contract).
    • Senator Mike Lee (R-UT) offered a host of amendments:
      • EHF20913 would remove “user[s]” from the reduced liability shield the bill would create, so that the narrower protection would apply only to online platforms. Consequently, users would still not be legally liable for the content posted by another user.
      • EHF20914 would revise the language regarding the type of content platforms could take down with legal protection to make clear it would not just be “unlawful” content but rather content “in violation of a duly enacted law of the United States,” possibly meaning federal laws and not state laws. Or, more likely, the intent is to foreclose the possibility that a platform would claim it is acting in concert with a foreign law and still assert immunity.
      • EHF20920 would add language making clear that taking down material that violates terms of service or use according to an objectively reasonable belief would be shielded from liability.
      • OLL20928 would expand legal protection to platforms for removing or restricting spam.
      • OLL20929 would bar the Federal Communications Commission (FCC) from a rulemaking on Section 230.
      • OLL20930 adds language making clear if part of the revised Section 230 is found unconstitutional, the rest of the law would still be applicable.
      • OLL20938 revises the definition of an “information content provider,” the Section 230 term of art for an entity responsible for creating or developing content, to expand the circumstances in which platforms may be deemed responsible for the creation or development of information and consequently exposed to lawsuits.
    • Senator Josh Hawley (R-MO) offered an amendment that would create a new right of action for people to sue large platforms for taking down their content if not done in “good faith.” The amendment limits this right to “edge providers,” platforms with more than 30 million users in the U.S., 300 million users worldwide, and revenues of more than $1.5 billion. This would likely exclude all platforms except Twitter, Facebook, Instagram, TikTok, Snapchat, and a few others.
    • Senator John Kennedy (R-LA) offered an amendment that removes all Section 230 legal immunity from platforms that collect personal data and then use an “automated function” to deliver targeted or tailored content to a user unless the user “knowingly and intentionally elect[s]” to receive such content.
  • The Massachusetts Institute of Technology’s (MIT) Work of the Future Task Force issued its final report and drew the following conclusions:
    • Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.
    • Momentous impacts of technological change are unfolding gradually.
    • Rising labor productivity has not translated into broad increases in incomes because labor market institutions and policies have fallen into disrepair.
    • Improving the quality of jobs requires innovation in labor market institutions.
    • Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.
    • Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.
    • The Task Force stated:
      • In the two-and-a-half years since the Task Force set to work, autonomous vehicles, robotics, and AI have advanced remarkably. But the world has not been turned on its head by automation, nor has the labor market. Despite massive private investment, technology deadlines have been pushed back, part of a normal evolution as breathless promises turn into pilot trials, business plans, and early deployments — the diligent, if prosaic, work of making real technologies work in real settings to meet the demands of hard-nosed customers and managers.
      • Yet, if our research did not confirm the dystopian vision of robots ushering workers off of factory floors or artificial intelligence rendering superfluous human expertise and judgment, it did uncover something equally pernicious: Amidst a technological ecosystem delivering rising productivity, and an economy generating plenty of jobs (at least until the COVID-19 crisis), we found a labor market in which the fruits are so unequally distributed, so skewed towards the top, that the majority of workers have tasted only a tiny morsel of a vast harvest.
      • As this report documents, the labor market impacts of technologies like AI and robotics are taking years to unfold. But we have no time to spare in preparing for them. If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market. This report, and the MIT Work of the Future Task Force, suggest a better alternative: building a future for work that harvests the dividends of rapidly advancing automation and ever-more powerful computers to deliver opportunity and economic security for workers. To channel the rising productivity stemming from technological innovations into broadly shared gains, we must foster institutional innovations that complement technological change.
  • The European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski published his “preliminary opinion on the European Commission’s (EC) Communication on “A European strategy for data” and the creation of a common space in the area of health, namely the European Health Data Space (EHDS).” The EDPS lauded the goal of the EHDS, “the prevention, detection and cure of diseases, as well as for evidence-based decisions in order to enhance effectiveness, accessibility and sustainability of the healthcare systems.” However, Wiewiórowski articulated his concern that the EC needs to think through the applicability of the General Data Protection Regulation (GDPR), among other European Union (EU) laws, before it can legally move forward. The EDPS stated:
    • The EDPS calls for the establishment of a thought-through legal basis for the processing operations under the EHDS in line with Article 6(1) GDPR and also recalls that such processing must comply with Article 9 GDPR for the processing of special categories of data.
    • Moreover, the EDPS highlights that due to the sensitivity of the data to be processed within the EHDS, the boundaries of what constitutes a lawful processing and a compatible further processing of the data must be crystal-clear for all the stakeholders involved. Therefore, the transparency and the public availability of the information relating to the processing on the EHDS will be key to enhance public trust in the EHDS.
    • The EDPS also calls on the Commission to clarify the roles and responsibilities of the parties involved and to clearly identify the precise categories of data to be made available to the EHDS. Additionally, he calls on the Member States to establish mechanisms to assess the validity and quality of the sources of the data.
    • The EDPS underlines the importance of vesting the EHDS with a comprehensive security infrastructure, including both organisational and state-of-the-art technical security measures to protect the data fed into the EHDS. In this context, he recalls that Data Protection Impact Assessments may be a very useful tool to determine the risks of the processing operations and the mitigation measures that should be adopted.
    • The EDPS recommends paying special attention to the ethical use of data within the EHDS framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation.
    • The EDPS is convinced that the success of the EHDS will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights. The governance mechanism should regulate, at least, the entities that will be allowed to make data available to the EHDS, the EHDS users, the Member States’ national contact points/ permit authorities, and the role of DPAs within this context.
    • The EDPS is interested in policy initiatives to achieve ‘digital sovereignty’ and has a preference for data being processed by entities sharing European values, including privacy and data protection. Moreover, the EDPS calls on the Commission to ensure that the stakeholders taking part in the EHDS, and in particular, the controllers, do not transfer personal data unless data subjects whose personal data are transferred to a third country are afforded a level of protection essentially equivalent to that guaranteed within the European Union.
    • The EDPS calls on Member States to guarantee the effective implementation of the right to data portability specifically in the EHDS, together with the development of the necessary technical requirements. In this regard, he considers that a gap analysis might be required regarding the need to integrate the GDPR safeguards with other regulatory safeguards, provided e.g. by competition law or ethical guidelines.
  • The Office of Management and Budget (OMB) extended a guidance memorandum directing agencies to consolidate data centers after Congress pushed back the sunset date for the program. OMB extended OMB Memorandum M-19-19, Update to Data Center Optimization Initiative (DCOI), through 30 September 2022; the memorandum applies “to the 24 Federal agencies covered by the Chief Financial Officers (CFO) Act of 1990, which includes the Department of Defense.” The DCOI was codified in the “Federal Information Technology Acquisition Reform Act” (FITARA) (P.L. 113-291) and extended in 2018 until October 1, 2020, and this sunset was pushed back another two years in the FY 2020 National Defense Authorization Act (NDAA) (P.L. 116-92).
    • In March 2020, the Government Accountability Office (GAO) issued another of its periodic assessments of the DCOI, started in 2012 by the Obama Administration to shrink the federal government’s footprint of data centers, increase efficiency and security, save money, and reduce energy usage.
    • The GAO found that 23 of the 24 agencies participating in the DCOI met or planned to meet their FY 2019 goals to close 286 of the 2,727 data centers considered part of the DCOI. This latter figure deserves some discussion, for the Trump Administration changed the definition of what is a data center to exclude smaller ones (so-called non-tiered data centers). GAO asserted that “recent OMB DCOI policy changes will reduce the number of data centers covered by the policy and both OMB and agencies may lose important visibility over the security risks posed by these facilities.” Nonetheless, these agencies are projecting savings of $241.5 million when all the 286 data centers planned for closure in FY 2019 actually close. It bears note that the GAO admitted in a footnote it “did not independently validate agencies’ reported cost savings figures,” so these numbers may not be reliable.
    • In terms of how to improve the DCOI, the GAO stated that “[i]n addition to reiterating our prior open recommendations to the agencies in our review regarding their need to meet DCOI’s closure and savings goals and optimization metrics, we are making a total of eight new recommendations—four to OMB and four to three of the 24 agencies. Specifically:
      • The Director of the Office of Management and Budget should (1) require that agencies explicitly document annual data center closure goals in their DCOI strategic plans and (2) track those goals on the IT Dashboard. (Recommendation 1)
      • The Director of the Office of Management and Budget should require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI. (Recommendation 2)
      • The Director of the Office of Management and Budget should document OMB’s decisions on whether to approve individual data centers when designated by agencies as either a mission critical facility or as a facility not subject to DCOI. (Recommendation 3)
      • The Director of the Office of Management and Budget should take action to address the key performance measurement characteristics missing from the DCOI optimization metrics, as identified in this report. (Recommendation 4)
  • Australia’s Inspector-General of Intelligence and Security (IGIS) released its first report on how well the nation’s security services did in observing the law with respect to COVID app data. The IGIS “is satisfied that the relevant agencies have policies and procedures in place and are taking reasonable steps to avoid intentional collection of COVID app data.” The IGIS revealed that “[i]ncidental collection in the course of the lawful collection of other data has occurred (and is permitted by the Privacy Act); however, there is no evidence that any agency within IGIS jurisdiction has decrypted, accessed or used any COVID app data.” The IGIS is also “satisfied that the intelligence agencies within IGIS jurisdiction which have the capability to incidentally collect at least some types of COVID app data:
    • Are aware of their responsibilities under Part VIIIA of the Privacy Act and are taking active steps to minimise the risk that they may collect COVID app data.
    • Have appropriate policies and procedures in place to respond to any incidental collection of COVID app data that they become aware of.
    • Are taking steps to ensure any COVID app data is not accessed, used or disclosed.
    • Are taking steps to ensure any COVID app data is deleted as soon as practicable.
    • Have not decrypted any COVID app data.
    • Are applying the usual security measures in place in intelligence agencies such that a ‘spill’ of any data, including COVID app data, is unlikely.
  • New Zealand’s Government Communications Security Bureau’s National Cyber Security Centre (NCSC) has released its annual Cyber Threat Report that found that “nationally significant organisations continue to be frequently targeted by malicious cyber actors of all types…[and] state-sponsored and non-state actors targeted public and private sector organisations to steal information, generate revenue, or disrupt networks and services.” The NCSC added:
    • Malicious cyber actors have shown their willingness to target New Zealand organisations in all sectors using a range of increasingly advanced tools and techniques. Newly disclosed vulnerabilities in products and services, alongside the adoption of new services and working arrangements, are rapidly exploited by state-sponsored actors and cyber criminals alike. A common theme this year, which emerged prior to the COVID-19 pandemic, was the exploitation of known vulnerabilities in internet-facing applications, including corporate security products, remote desktop services and virtual private network applications.
  • The former Director of the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) wrote an opinion piece disputing President Donald Trump’s claims that the 2020 Presidential Election was fraudulent. Christopher Krebs asserted:
    • While I no longer regularly speak to election officials, my understanding is that in the 2020 results no significant discrepancies attributed to manipulation have been discovered in the post-election canvassing, audit and recount processes.
    • This point cannot be emphasized enough: The secretaries of state in Georgia, Michigan, Arizona, Nevada and Pennsylvania, as well as officials in Wisconsin, all worked overtime to ensure there was a paper trail that could be audited or recounted by hand, independent of any allegedly hacked software or hardware.
    • That’s why Americans’ confidence in the security of the 2020 election is entirely justified. Paper ballots and post-election checks ensured the accuracy of the count. Consider Georgia: The state conducted a full hand recount of the presidential election, a first of its kind, and the outcome of the manual count was consistent with the computer-based count. Clearly, the Georgia count was not manipulated, resoundingly debunking claims by the president and his allies about the involvement of CIA supercomputers, malicious software programs or corporate rigging aided by long-gone foreign dictators.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Daniel Schludi on Unsplash

Further Reading, Other Developments, and Coming Events (4 December)

Further Reading

  • How Misinformation ‘Superspreaders’ Seed False Election Theories” By Sheera Frenkel — The New York Times. A significant percentage of the lies, misinformation, and disinformation about the legitimacy of the election has been disseminated by a small number of right-wing figures and is then repeated, reposted, and retweeted. The Times relies on research into how much engagement people like President Donald Trump and Dan Bongino get on Facebook after posting untrue claims about the election, and it turns out that such trends and rumors do not start spontaneously.
  • Facebook Said It Would Ban Holocaust Deniers. Instead, Its Algorithm Provided a Network for Them” By Aaron Sankin — The Markup. This news organization still found Holocaust denial material promoted by Facebook’s algorithm even though the platform recently said it was taking down such material. This result may point to the difficulty of policing objectionable material that uses coded language and/or the social media platforms’ lack of sufficient resources to weed out this sort of content.
  • What Facebook Fed the Baby Boomers” By Charlie Warzel — The New York Times. A dispiriting trip inside two people’s Facebook feeds. This article makes the very good point that comments are not moderated, and these tend to be significant sources of vitriol and disinformation.
  • How to ‘disappear’ on Happiness Avenue in Beijing” By Vincent Ni and Yitsing Wang — BBC. By next year, the People’s Republic of China (PRC) may have as many as 560 million security cameras, and one artist ran an experiment of sorts to see whether a group of people could walk down a major, heavily surveilled street in the capital without their faces being captured by a camera.
  • Patients of a Vermont Hospital Are Left ‘in the Dark’ After a Cyberattack” By Ellen Barry and Nicole Perlroth — The New York Times. A Russian hacking outfit may have struck back after the Department of Defense’s (DOD) Cyber Command and Microsoft struck them. A number of hospitals were hacked, and care was significantly disrupted. This dynamic may lend itself to arguments that the United States (U.S.) would be wise to curtail its offensive operations.
  • EU seeks anti-China alliance on tech with Biden” By Jakob Hanke Vela and David M. Herszenhorn — Politico. The European Union (EU) is hoping the United States (U.S.) will be more amenable to working together in the realm of future technology policy, especially against the People’s Republic of China (PRC) which has made a concerted effort to drive the adoption of standards that favor its companies (e.g., the PRC pushed for and obtained 5G standards that will favor Huawei). Diplomatically speaking, this is considered low-hanging fruit, and a Biden Administration will undoubtedly be more multilateral than the Trump Administration.
  • Can We Make Our Robots Less Biased Than We Are?” By David Berreby — The New York Times. The bias present in facial recognition technology and artificial intelligence is making its way into robotics, posing the question of how to change this. Many African American and other minority scientists are calling for the inclusion of people of color in designing such systems as a countermeasure to the usual bias toward white men.

Other Developments

  • The top Democrat on the Senate Homeland Security and Governmental Affairs Committee, Senator Gary Peters (D-MI), wrote President Donald Trump and “slammed the Trump Administration for their lack of action against foreign adversaries, including Russia, China, and North Korea, that have sponsored cyber-attacks against American hospitals and research institutions in an effort to steal information related to development of Coronavirus vaccines.” Peters used language that was unusually strong, as Members of Congress typically tone down the rhetoric and deploy coded language to signal their level of displeasure about administration action or inaction. Peters may well feel strongly about what he perceives to be Trump Administration indifference to the cyber threats facing institutions researching and developing COVID-19 vaccines, but this is also an issue on which he may be trying to split Republicans, placing them in the difficult position of lining up behind a president disinclined to prioritize some cyber issues or breaking ranks with him.
    • Peters stated:
      • I urge you, again, to send a strong message to any foreign government attempting to hack into our medical institutions that this behavior is unacceptable. The Administration should use the tools at its disposal, including the threat of sanctions, to deter future attacks against research institutions. In the event that any foreign government directly threatens the lives of Americans through attacks on medical facilities, other Department of Defense capabilities should be considered to make it clear that there will be consequences for these actions.
  • A United States federal court has ruled against a Trump Administration appointee Michael Pack and the United States Agency for Global Media (USAGM) and their attempts to interfere illegally with the independence of government-funded news organizations such as the Voice of America (VOA). The District Court for the District of Columbia enjoined Pack and the USAGM from a list of actions VOA and USAGM officials claim are contrary to the First Amendment and the organization’s mission.
  • The Federal Trade Commission (FTC) is asking a United States federal court to compel former Trump White House advisor Steve Bannon to appear for questioning per a Civil Investigative Demand (CID) as part of its ongoing probe of Cambridge Analytica’s role in misusing personal data of Facebook users in the 2016 Presidential Election. The FTC noted it “issued the CID to determine, among other things, whether Bannon may be held individually liable for the deceptive conduct of Cambridge Analytica, LLC—the subject of an administrative law enforcement action brought by the Commission.” There had been an interview scheduled in September but the day before it was to take place, Bannon’s lawyers informed the FTC he would not be attending.
    • In 2019, the FTC settled with former Cambridge Analytica CEO Alexander Nix and app developer Aleksandr Kogan in “administrative orders restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC did not, however, settle with the company itself. The agency alleged “that Cambridge Analytica, Nix, and Kogan deceived consumers by falsely claiming they did not collect any personally identifiable information from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.” Facebook settled with the FTC for a record $5 billion for its role in the Cambridge Analytica scandal and for how it violated its 2012 consent order with the agency.
  • Apple responded to a group of human rights and civil liberties organizations about its plans to deploy technology in its operating system that allows users greater control of their privacy. Apple confirmed that its App Tracking Transparency (ATT) feature would be made part of iOS early next year and would present users of Apple products with a prompt warning them about how their information may be used by the app developer (a minimal sketch of the underlying prompt request appears at the end of this item). ATT would stop app developers from tracking users when they use other apps on a device. Companies like Facebook have objected, claiming that the change is a direct shot at them and their revenue. Apple does not reap a significant revenue stream from collecting, combining, and processing user data, whereas Facebook does. Facebook also tracks users across devices and across apps on a device through a variety of means.
    • Apple stated:
      • We delayed the release of ATT to early next year to give developers the time they indicated they needed to properly update their systems and data practices, but we remain fully committed to ATT and to our expansive approach to privacy protections. We developed ATT for a single reason: because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.
      • ATT doesn’t ban the reasonable collection of user data for app functionality or even for advertising. Just as with the other data-access permissions we have added over many software releases, developers will be able to explain why they want to track users both before the ATT prompt is shown and in the prompt itself. At that point, users will have the freedom to make their own choice about whether to proceed. This privacy innovation empowers consumers — not Apple — by simply making it clear what their options are, and giving them the information and power to choose.
    • As mentioned, a number of groups wrote Apple in October “to express our disappointment that Apple is delaying the full implementation of iOS 14’s anti-tracking features until early 2021.” They argued:
      • These features will constitute a vital policy improvement with the potential to strengthen respect for privacy across the industry. Apple should implement these features as expeditiously as possible.
      • We were heartened by Apple’s announcement that starting with the iOS 14 update, all app developers will be required to provide information that will help users understand the privacy implications of an app before they install it, within the App Store interface.
      • We were also pleased that iOS 14 users would be required to affirmatively opt in to app tracking, on an app-by-app basis. Along with these changes, we urge Apple to verify the accuracy of app policies, and to publish transparency reports showing the number of apps that are rejected and/or removed from the App Store due to inadequate or inaccurate policies.
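    • For readers who want a concrete sense of how the prompt works on the developer side, the following is a minimal sketch in Swift, assuming an app with an iOS 14 or later deployment target. ATTrackingManager.requestTrackingAuthorization is the Apple-provided call that triggers the system dialog; the helper function name, the comments, and the printed messages below are illustrative and are not drawn from Apple or any party quoted above.

```swift
import AppTrackingTransparency
import AdSupport

// Illustrative helper; the wording the user sees in the system prompt comes
// from the app's Info.plist key NSUserTrackingUsageDescription, not from code.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in; the advertising identifier (IDFA) is available.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized; IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The user declined, or the prompt could not be shown; the IDFA
            // comes back zeroed and cross-app tracking is off the table.
            print("Tracking not authorized")
        @unknown default:
            print("Unhandled authorization status")
        }
    }
}
```

    • The design point in Apple’s letter is visible in this sketch: the developer can explain why it wants to track before and within the prompt, but only the user’s answer to the system dialog determines whether the identifier used for cross-app tracking is made available.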
  • The United States (U.S.) Government Accountability Office (GAO) sent its assessment of the privacy notices and practices of U.S. banks and credit unions to the chair of the Senate committee that oversees this issue. Senate Banking, Housing, and Urban Affairs Committee Chair Mike Crapo (R-ID) had asked the GAO “to examine the types of personal information that financial institutions collect, use, and share; how they make consumers aware of their information-sharing practices; and federal regulatory oversight of these activities.” The GAO found that a ten-year-old model privacy disclosure form used across these industries may comply with the prevailing federal requirements but no longer encompasses the breadth and scope of how the personal information of people is collected, processed, and used. The GAO called on the Consumer Financial Protection Bureau (CFPB) to update this form. The GAO explained:
    • Banks and credit unions collect, use, and share consumers’ personal information—such as income level and credit card transactions—to conduct everyday business and market products and services. They share this information with a variety of third parties, such as service providers and retailers.
    • The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to provide consumers with a privacy notice describing their information-sharing practices. Many banks and credit unions elect to use a model form—issued by regulators in 2009—which provides a safe harbor for complying with the law (see figure). GAO found the form gives a limited view of what information is collected and with whom it is shared. Consumer and privacy groups GAO interviewed cited similar limitations. The model form was issued over 10 years ago. The proliferation of data-sharing since then suggests a reassessment of the form is warranted. Federal guidance states that notices about information collection and usage are central to providing privacy protections and transparency.
    • Since Congress transferred authority to the CFPB for implementing GLBA privacy provisions, the agency has not reassessed if the form meets consumer expectations for disclosures of information-sharing. CFPB officials said they had not considered a reevaluation because they had not heard concerns from industry or consumer groups about privacy notices. Improvements to the model form could help ensure that consumers are better informed about all the ways banks and credit unions collect and share personal information.
    • The increasing amounts of and changing ways in which industry collects and shares consumer personal information—including from online activities—highlights the importance of clearly disclosing practices for collection, sharing, and use. However, our work shows that banks and credit unions generally used the model form, which was created more than 10 years ago, to make disclosures required under GLBA. As a result, the disclosures often provided a limited view of how banks and credit unions collect, use, and share personal information.
    • We recognize that the model form is required to be succinct, comprehensible to consumers, and allow for comparability across institutions. But, as information practices continue to change or expand, consumer insights into those practices may become even more limited. Improvements and updates to the model privacy form could help ensure that consumers are better informed about all the ways that banks and credit unions collect, use, and share personal information. For instance, in online versions of privacy notices, there may be opportunities for readers to access additional details—such as through hyperlinks—in a manner consistent with statutory requirements.
  • The Australian Competition & Consumer Commission (ACCC) is asking for feedback on Google’s proposed $2.1 billion acquisition of Fitbit. In a rather pointed statement, the chair of the ACCC, Rod Sims, made clear “[o]ur decision to begin consultation should not be interpreted as a signal that the ACCC will ultimately accept the undertaking and approve the transaction.” The buyout is also under scrutiny in the European Union (EU) and may be affected by the suit the United States Department of Justice (DOJ) and some states have brought against the company for anti-competitive behavior. The ACCC released a Statement of Issues in June about the proposed deal.
    • The ACCC explained “[t]he proposed undertaking would require Google to:
      • not use certain user data collected through Fitbit and Google wearables for Google’s advertising purposes for 10 years, with an option for the ACCC to extend this obligation by up to a further 10 years;
      • maintain access for third parties, such as health and fitness apps, to certain user data collected through Fitbit and Google wearable devices for 10 years; and
      • maintain levels of interoperability between third party wearables and Android smartphones for 10 years.
    • In August, the EU “opened an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The European Commission (EC) expressed its concerns “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC stated “[a]t this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The EC explained that it “will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed…[and] will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • Amnesty International (AI) sent EC Executive Vice-President Margrethe Vestager a letter, arguing “[t]he merger risks further extending the dominance of Google and its surveillance-based business model, the nature and scale of which already represent a systemic threat to human rights.” AI asserted “[t]he deal is particularly troubling given the sensitive nature of the health data that Fitbit holds that would be acquired by Google.” AI argued “[t]he Commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”
  • Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro have cooperated on a report that looks “into current and predicted criminal uses of artificial intelligence (AI).”
    • The organizations argued “AI could be used to support:
      • convincing social engineering attacks at scale;
      • document-scraping malware to make attacks more efficient;
      • evasion of image recognition and voice biometrics;
      • ransomware attacks, through intelligent targeting and evasion;
      • data pollution, by identifying blind spots in detection rules.
    • The organizations concluded:
      • Based on available insights, research, and a structured open-source analysis, this report covered the present state of malicious uses and abuses of AI, including AI malware, AI-supported password guessing, and AI-aided encryption and social engineering attacks. It also described concrete future scenarios ranging from automated content generation and parsing, AI-aided reconnaissance, smart and connected technologies such as drones and autonomous cars, to AI-enabled stock market manipulation, as well as methods for AI-based detection and defense systems.
      • Using one of the most visible malicious uses of AI — the phenomenon of so-called deepfakes — the report further detailed a case study on the use of AI techniques to manipulate or generate visual and audio content that would be difficult for humans or even technological solutions to immediately distinguish from authentic ones.
      • As speculated on in this paper, criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new, innovative criminal business models — all the while reducing their chances of being caught. Consequently, as “AI-as-a-Service” becomes more widespread, it will also lower the barrier to entry by reducing the skills and technical expertise required to facilitate attacks. In short, this further exacerbates the potential for AI to be abused by criminals and for it to become a driver of future crimes.
      • Although the attacks detailed here are mostly theoretical, crafted as proofs of concept at this stage, and although the use of AI to improve the effectiveness of malware is still in its infancy, it is plausible that malware developers are already using AI in more obfuscated ways without being detected by researchers and analysts. For instance, malware developers could already be relying on AI-based methods to bypass spam filters, escape the detection features of antivirus software, and frustrate the analysis of malware. In fact, DeepLocker, a tool recently introduced by IBM and discussed in this paper, already demonstrates these attack abilities that would be difficult for a defender to stop.
      • To add, AI could also enhance traditional hacking techniques by introducing new ways of performing attacks that would be difficult for humans to predict. These could include fully automated penetration testing, improved password-guessing methods, tools to break CAPTCHA security systems, or improved social engineering attacks. With respect to open-source tools providing such functionalities, the paper discussed some that have already been introduced, such as DeepHack, DeepExploit, and XEvil.
      • The widespread use of AI assistants, meanwhile, also creates opportunities for criminals who could exploit the presence of these assistants in households. For instance, criminals could break into a smart home by hijacking an automation system through exposed audio devices.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

“Censorship, Suppression, and the 2020 Election” Hearing

A second committee gets its shot at social media platform CEOs, and the hearing runs much like the one at the end of last month.

It was with some reluctance that I watched the Senate Judiciary Committee’s hearing with Facebook’s and Twitter’s CEOs given the other Senate hearing at which they appeared a few weeks ago. This hearing was prompted by the two platforms’ “censorship” of a dubious New York Post article on Hunter Biden’s business practices that seems to have been planted by Trump campaign associates. At first, both Facebook and Twitter restricted posting or sharing the article in different ways but ultimately relented. Whatever their motivation and whether this was appropriate strike me as legitimate policy questions to ask. However, to criticize social media platforms for doing what is entirely within their rights under the liability shield provided by 47 U.S.C. 230 (Section 230) seems a bit much. Nonetheless, both Mark Zuckerberg and Jack Dorsey faced pointed questions from both Republicans and Democrats who profess to want to see change in social media. And yet, it remains unlikely the two parties in Congress can coalesce around broad policy changes. Perhaps targeted legislation has a chance, but it seems far too late in this Congress for that to happen.

Chair Lindsey Graham (R-SC) largely eschewed the typical Republican tack of railing against the anti-conservative bias social media platforms allegedly have, despite little in the way of evidence to support these claims. Instead, Graham cited a handful of studies showing that social media engagement might be linked to harm to children and teenagers. This was an interesting approach given the hearing was ostensibly about censorship, content moderation, and Section 230. Perhaps Graham is using a modified rationale similar to the one undergirding his bill, the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398) (i.e., children are at risk and are being harmed, hence Section 230 must be changed). Graham did, of course, reference the New York Post article but was equivocal as to its veracity and instead framed Twitter and Facebook’s decisions as essentially overriding the editorial choices of the newspaper. He also discussed a tweet from former United Nations Ambassador Nikki Haley casting doubt on the legality of mail-in voting and raising the potential for fraud, to which Twitter appended a label. Graham contrasted Haley’s tweet with one from Iran’s Ayatollah that questioned why many European nations outlaw Holocaust denial but allow Mohammed to be insulted; that tweet was never fact-checked or labeled. Graham suggested the Ayatollah was calling for the destruction of Israel.

Graham argued Section 230 must be changed, and he expressed hope that Republicans and Democrats could work together to do so. He wondered whether social media platforms are akin to media organizations given their immense influence and, if so, whether they should be regulated accordingly and exposed to the same liability for publishing defamatory material. Graham called for changes to Section 230 that would give social media platforms incentives to adopt a more open and transparent system of content moderation, including disclosure of the biases of fact checkers. He conceded social media platforms have the almost impossible task of telling people what is reliable and what is not. Finally, he framed social media as a public health issue, comparing its addictive effects and harms to those of cigarettes.

Senator Richard Blumenthal (D-CT) made an opening statement in place of Ranking Member Dianne Feinstein (D-CA), suggesting the possibility that the latter did not want to be associated with a hearing the former called unserious and a political sideshow. In any event, Blumenthal repeated many of his previously articulated positions on social media companies and how they are currently harming the United States (U.S.) in a number of ways. Blumenthal claimed President Donald Trump is using the megaphone of social media in ways that are harming the U.S. and detrimental to democracy. He called social media terrifying tools of persuasion with power far exceeding that of the Robber Barons of the last Gilded Age. Blumenthal further claimed social media companies are strip-mining people’s personal data to their great profit while also promoting hate speech and voter suppression. Blumenthal acknowledged the baby steps Twitter and Facebook have made in trying to address these problems but remarked parenthetically that Google was not required to appear at the hearing, an apparent reward for doing less than the other two companies to combat lies and misinformation.

Blumenthal reiterated that the hearing was not serious and was a political sideshow. He remarked that “his colleagues” (by which he almost certainly meant Republicans) did not seem interested in foreign interference in U.S. elections or in the calls for the murder of Federal Bureau of Investigation Director Christopher Wray and National Institute of Allergy and Infectious Diseases (NIAID) Director Anthony Fauci. Blumenthal said the purpose of the hearing was to bully Facebook, Twitter, and other platforms. He called for serious hearings into “Big Tech,” specifically on antitrust issues, as the companies have become dominant and are abusing their power. He suggested that Instagram and WhatsApp be spun off from Facebook and that other companies be broken up, too. Blumenthal called for strong privacy legislation to be enacted. He said “meaningful” Section 230 reform is needed, including a possible repeal of most of the liability protection, for the immunity shield is far too broad and the victims of harm deserve their day in court. Blumenthal vowed to keep working with Graham in the next Congress on the EARN IT Act, a sign perhaps that the bill is not going to be enacted before the end of the year. Graham noted, however, that next year, should the Republicans hold the Senate, Senator Chuck Grassley (R-IA), the Senate’s President Pro Tempore, would become chair. Graham expressed his hope that Grassley would work on Section 230.

Facebook CEO Mark Zuckerberg again portrayed Facebook as the platform that gives everyone a voice and then pivoted to the reforms implemented to ensure the company was not a vessel for election misinformation and mischief. Zuckerberg touted Facebook’s voter registration efforts (more than 4.5 million people), its role in helping people volunteer at polls, and its efforts to disseminate factual information about when, where, and how Americans could vote. He turned to Facebook’s efforts to combat misinformation and voter suppression and the steps it took on election day and thereafter. Zuckerberg also highlighted the lessons Facebook learned from the 2016 election in the form of changed policies and greater awareness of efforts by other nations to spread disinformation, lies, and chaos. Incidentally (or perhaps not so incidentally), Zuckerberg did not discuss the platform’s efforts to take on domestic attempts to undermine U.S. democracy. He did, however, reveal that Facebook is funding a “partnership with a team of independent external academics to conduct objective and empirically grounded research on social media’s impact on democracy.” Beyond remarking that Facebook hopes to learn about its role in this dynamic, he did not pledge any particular action on the basis of this study.

Zuckerberg reiterated Facebook’s positions on Section 230 reform:

I’ve also called for Congress to update Section 230 of the Communications Decency Act to make sure it’s working as intended. Section 230 allows us to provide our products and services to users by doing two things:

  • First, it encourages free expression. Without Section 230, platforms could potentially be held liable for everything people say. Platforms would likely censor more content to avoid legal risk and would be less likely to invest in technologies that enable people to express themselves in new ways.
  • Second, it allows platforms to moderate content. Without Section 230, platforms could face liability for doing even basic moderation, such as removing hate speech and harassment that impacts the safety and security of their communities.

Thanks to Section 230, people have the freedom to use the internet to express themselves, and platforms are able to more effectively address risks. Updating Section 230 is a significant decision, but we support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals, and I look forward to a meaningful dialogue about how we might update the law to deal with the problems we face today.

It’s important that any changes to the law don’t prevent new companies or businesses from being built, because innovation in the internet sector brings real benefits to billions of people around the world. We stand ready to work with Congress on what regulation could look like, whether that means Section 230 reform or providing guidance to platforms on other issues such as harmful content, privacy, elections, and data portability. By updating the rules for the internet, we can preserve what’s best about it—the freedom for people to express themselves and for entrepreneurs to build new things—while also protecting society from broader harms.

Twitter CEO Jack Dorsey explained Twitter’s content moderation policies, especially those related to the election. He stressed that Congress should build upon the foundation laid in Section 230 either through additional legislation or by helping to create private codes of conduct that social media companies would help craft and then abide by. He asserted that removing Section 230 protection or radically reducing the liability shield would do little to address problematic speech on social media and would indeed cause most platforms to retrench and more severely restrict speech, an outcome at odds with what Members desire. Dorsey then trotted out the idea that carving out Section 230, as many of the bills introduced in this Congress propose to do, would create a complicated competitive landscape that would favor large incumbents with the resources to comply while all but shutting out smaller competitors. Regardless of whether this is likely to happen, it is shrewd testimony given the antitrust sentiment on Capitol Hill and in the executive branch toward large technology firms.

In terms of any concrete recommendations for Congress, Dorsey noted:

Three weeks ago, I told the Senate Committee on Commerce, Science and Transportation that I believe the best way to address our mutually-held concerns is to require the publication of moderation processes and practices, a straightforward process to appeal decisions, and best efforts around algorithmic choice, while protecting the privacy of the people who use our service. These are achievable in short order.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Prateek Katyal from Pexels