Further Reading, Other Developments, and Coming Events (20 and 21 January 2021)

Further Reading

  • “Amazon’s Ring Neighbors app exposed users’ precise locations and home addresses” By Zack Whittaker — TechCrunch. Once again, Amazon’s home security platform has suffered problems in the form of users’ data being exposed or inadequately protected.
  • “Harassment of Chinese dissidents was warning signal on disinformation” By Shawna Chen and Bethany Allen-Ebrahimian — Axios. In an example of how malicious online activity can spill into the real world, a number of Chinese dissidents were set upon by protestors.
  • “How Social Media’s Obsession with Scale Supercharged Disinformation” By Joan Donovan — Harvard Business Review. Companies like Facebook and Twitter emphasized scale over safety in trying to grow as quickly as possible. This led to a proliferation of fake accounts and proved welcome ground for the seeds of misinformation.
  • “The Moderation War Is Coming to Spotify, Substack, and Clubhouse” By Alex Kantrowitz — OneZero. The same issues with objectionable and abusive content plaguing Twitter, Facebook, YouTube, and others will almost certainly arise for the newer platforms, and in fact already have.
  • “Mexican president mounts campaign against social media bans” By Mark Stevenson — The Associated Press. Mexico’s leftist president, Andrés Manuel López Obrador, is vowing to lead international efforts to stop social media companies from censoring what he considers free speech. Whether this materializes into something substantial is not clear.
  • “As Trump Clashes With Big Tech, China’s Censored Internet Takes His Side” By Li Yuan — The New York Times. The government in Beijing is framing social media platforms’ ban of former President Donald Trump after the attempted insurrection as proof that there is no untrammeled freedom of speech. This position helps bolster the oppressive policing of online content the People’s Republic of China (PRC) wages against its citizens. Quite separately, many Chinese people (or what appear to be actual people) are questioning what is often deemed the censoring of Trump in the United States (U.S.), a nation ostensibly committed to free speech. There is also widespread misunderstanding about the First Amendment right of social media platforms not to host content with which they disagree and their power to make such determinations without fear that the U.S. government will punish them, as is often the case in the PRC.
  • “Trump admin slams China’s Huawei, halting shipments from Intel, others – sources” By Karen Freifeld and Alexandra Alper — Reuters. On its way out of the proverbial door, the Trump Administration delivered parting shots to Huawei and the People’s Republic of China (PRC) by revoking one license and denying others to sell the PRC tech giant semiconductors. The companies, including Intel, could appeal, and an estimated $400 million worth of applications for similar licenses remain pending at the Department of Commerce, now the domain of the new regime in Washington. It is too early to discern whether the Biden Administration will maintain, reverse, or modify these actions.
  • “Behind a Secret Deal Between Google and Facebook” By Daisuke Wakabayashi and Tiffany Hsu — The New York Times. The newspaper got its hands on an unredacted copy of the antitrust suit Texas Attorney General Ken Paxton and other attorneys general filed against Google, and it has details on the deal Facebook and Google allegedly struck to divide the online advertising world. Not only did Facebook ditch an effort launched by publishers to defeat Google’s overwhelming advantages in online advertising bidding, it joined Google’s rival effort with a guarantee that it would win a specified number of bids and more time to bid on ads. Google and Facebook naturally deny any wrongdoing.
  • “Biden and Trump Voters Were Exposed to Radically Different Coverage of the Capitol Riot on Facebook” By Colin Lecher and Jon Keegan — The Markup. Using a browser tool the organization pays Facebook users to install, The Markup can track the type of material they see in their feeds. Facebook’s algorithm fed people material about the 6 January 2021 attempted insurrection based on their political views. Many have pointed out that this very dynamic creates filter bubbles that poison democracy and public discourse.
  • “Banning Trump won’t fix social media: 10 ideas to rebuild our broken internet – by experts” By Julia Carrie Wong — The Guardian. There are some fascinating proposals in this piece that could help address the problems of social media.
  • “Misinformation dropped dramatically the week after Twitter banned Trump and some allies” By Elizabeth Dwoskin and Craig Timberg — The Washington Post. Research showed that lies, misinformation, and disinformation about election fraud dropped by three-quarters after former President Donald Trump was banned from Twitter and other platforms. Other research showed that a small group of conservatives were responsible for up to 20% of misinformation on this and other conspiracies.
  • “This Was WhatsApp’s Plan All Along” By Shoshana Wodinsky — Gizmodo. This piece does a great job of breaking down into plain English the proposed changes to terms of service on WhatsApp that so enraged users that competitors Signal and Telegram have seen record-breaking downloads. Basically, it is all about reaping advertising dollars for Facebook through businesses and third-party partners using user data from business-related communications. Incidentally, WhatsApp has delayed changes until March because of the pushback.
  • “Brussels eclipsed as EU countries roll out their own tech rules” By Laura Kayali and Mark Scott — Politico EU. The European Union (EU) had a hard enough task in trying to reach final language on a Digital Services Act and Digital Markets Act without nations like France, Germany, Poland, and others picking and choosing text from draft bills and enacting them into law. Brussels is not happy with this trend.

Other Developments

  • Federal Trade Commission (FTC) Chair Joseph J. Simons announced his resignation from the FTC effective on 29 January 2021 in keeping with tradition and past practice. This resignation clears the way for President Joe Biden to name the chair of the FTC, and along with FTC Commissioner Rohit Chopra’s nomination to head the Consumer Financial Protection Bureau (CFPB), the incoming President will get to nominate two Democratic FTC Commissioners, tipping the political balance of the FTC and likely ushering in a period of more regulation of the technology sector.
    • Simons also announced the resignation of senior staff: General Counsel Alden F. Abbott; Bureau of Competition Director Ian Conner; Bureau of Competition Deputy Directors Gail Levine and Daniel Francis; Bureau of Consumer Protection Director Andrew Smith; Bureau of Economics Director Andrew Sweeting; Office of Public Affairs Director Cathy MacFarlane; and Office of Policy Planning Director Bilal Sayyed.
  • In a speech last week before he was sworn in, President Joe Biden announced his $1.9 trillion American Rescue Plan, and according to a summary, Biden will ask Congress to provide $10 billion for a handful of programs to improve government technology. Notably, Biden “is calling on Congress to launch the most ambitious effort ever to modernize and secure federal IT and networks.” Biden is proposing to dramatically increase funding for a fund from which agencies can borrow, and later repay, money to update their technology. Moreover, Biden is looking to push more money to a program that aids officials at agencies who oversee technology development and procurement.
    • Biden stated “[t]o remediate the SolarWinds breach and boost U.S. defenses, including of the COVID-19 vaccine process, President-elect Biden is calling on Congress to:
      • Expand and improve the Technology Modernization Fund. A $9 billion investment will help the U.S. launch major new IT and cybersecurity shared services at the Cybersecurity and Infrastructure Security Agency (CISA) and the General Services Administration and complete modernization projects at federal agencies. In addition, the president-elect is calling on Congress to change the fund’s reimbursement structure in order to fund more innovative and impactful projects.
      • Surge cybersecurity technology and engineering expert hiring. Providing the Information Technology Oversight and Reform fund with $200 million will allow for the rapid hiring of hundreds of experts to support the federal Chief Information Security Officer and U.S. Digital Service.
      • Build shared, secure services to drive transformational projects. Investing $300 million in no-year funding for Technology Transformation Services in the General Services Administration will drive secure IT projects forward without the need of reimbursement from agencies.
      • Improving security monitoring and incident response activities. An additional $690M for CISA will bolster cybersecurity across federal civilian networks, and support the piloting of new shared security and cloud computing services.”
  • The United States (U.S.) Department of Commerce issued an interim final rule pursuant to an executive order (EO) issued by former President Donald Trump to secure the U.S. information and communications supply chain. This rule will undoubtedly be reviewed by the Biden Administration and may be withdrawn or modified depending on the fate of the EO on which the rule relies.
    • In the interim final rule, Commerce explained:
      • These regulations create the processes and procedures that the Secretary of Commerce will use to identify, assess, and address certain transactions, including classes of transactions, between U.S. persons and foreign persons that involve information and communications technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary; and pose an undue or unacceptable risk. While this interim final rule will become effective on March 22, 2021, the Department of Commerce continues to welcome public input and is thus seeking additional public comment. Once any additional comments have been evaluated, the Department is committed to issuing a final rule.
      • On November 27, 2019, the Department of Commerce (Department) published a proposed rule to implement the terms of the Executive Order. (84 FR 65316). The proposed rule set forth processes for (1) how the Secretary would evaluate and assess transactions involving ICTS to determine whether they pose an undue risk of sabotage to or subversion of the ICTS supply chain, or an unacceptable risk to the national security of the United States or the security and safety of U.S. persons; (2) how the Secretary would notify parties to transactions under review of the Secretary’s decision regarding the ICTS Transaction, including whether the Secretary would prohibit or mitigate the transaction; and (3) how parties to transactions reviewed by the Secretary could comment on the Secretary’s preliminary decisions. The proposed rule also provided that the Secretary could act without complying with the proposed procedures where required by national security. Finally, the Secretary would establish penalties for violations of mitigation agreements, the regulations, or the Executive Order.
      • In addition to seeking general public comment, the Department requested comments from the public on five specific questions: (1) Whether the Secretary should consider categorical exclusions or whether there are classes of persons whose use of ICTS cannot violate the Executive Order; (2) whether there are categories of uses or of risks that are always capable of being reliably and adequately mitigated; (3) how the Secretary should monitor and enforce any mitigation agreements applied to a transaction; (4) how the terms, “transaction,” “dealing in,” and “use of” should be clarified in the rule; and (5) whether the Department should add record-keeping requirements for information related to transactions.
      • The list of “foreign adversaries” consists of the following foreign governments and non-government persons: The People’s Republic of China, including the Hong Kong Special Administrative Region (China); the Republic of Cuba (Cuba); the Islamic Republic of Iran (Iran); the Democratic People’s Republic of Korea (North Korea); the Russian Federation (Russia); and Venezuelan politician Nicolás Maduro (Maduro Regime).
  • The Federal Trade Commission (FTC) adjusted its penalty amounts for inflation, including a boost to the per-violation penalty that virtually all of the privacy bills introduced in the last Congress would allow the agency to wield against first-time violators. The penalty for certain unfair and deceptive acts or practices increased from $43,280 to $43,792 (the arithmetic is sketched below).
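    • For readers curious how the new figure is derived, here is a minimal sketch of the annual adjustment under the Federal Civil Penalties Inflation Adjustment Act: the prior maximum is multiplied by a cost-of-living multiplier and rounded to the nearest dollar. The 1.01182 multiplier used here is an assumption based on the cost-of-living adjustment published for 2021, not a figure taken from the FTC’s notice.

```python
# Hedged sketch of the 2021 inflation adjustment to the FTC's Section 5 civil penalty.
# Assumption: the 2021 cost-of-living multiplier is 1.01182; the statute requires
# rounding the adjusted maximum to the nearest dollar.
PRIOR_MAXIMUM = 43_280          # per-violation maximum before the adjustment
MULTIPLIER_2021 = 1.01182       # assumed cost-of-living multiplier

adjusted_maximum = round(PRIOR_MAXIMUM * MULTIPLIER_2021)
print(adjusted_maximum)         # 43792, matching the new $43,792 figure
```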
  • The United States (U.S.) Department of State stood up its new Bureau of Cyberspace Security and Emerging Technologies (CSET) as it has long planned. At the beginning of the Trump Administration, the Department of State dismantled the Cyber Coordinator Office and gave its cybersecurity portfolio to the Bureau of Economic and Business Affairs, which displeased Congressional stakeholders. In 2019, the department notified Congress of its plan to establish CSET. The department asserted:
    • The need to reorganize and resource America’s cyberspace and emerging technology security diplomacy through the creation of CSET is critical, as the challenges to U.S. national security presented by China, Russia, Iran, North Korea, and other cyber and emerging technology competitors and adversaries have only increased since the Department notified Congress in June 2019 of its intent to create CSET.
    • The CSET bureau will lead U.S. government diplomatic efforts on a wide range of international cyberspace security and emerging technology policy issues that affect U.S. foreign policy and national security, including securing cyberspace and critical technologies, reducing the likelihood of cyber conflict, and prevailing in strategic cyber competition.  The Secretary’s decision to establish CSET will permit the Department to posture itself appropriately and engage as effectively as possible with partners and allies on these pressing national security concerns.
    • The Congressional Members of the Cyberspace Solarium Commission made clear their disapproval of the decision. Senators Angus King (I-ME) and Ben Sasse (R-NE) and Representatives Mike Gallagher (R-WI) and Jim Langevin (D-RI) said:
      • In our report, we emphasize the need for a greater emphasis on international cyber policy at State. However, unlike the bipartisan Cyber Diplomacy Act, the State Department’s proposed Bureau will reinforce existing silos and […] hinder the development of a holistic strategy to promote cyberspace stability on the international stage. We urge President-elect Biden to pause this reorganization when he takes office in two weeks and work with Congress to enact meaningful reform to protect our country in cyberspace.
  • The Australian Cyber Security Centre (ACSC) released the Risk Identification Guidance, “developed to assist organisations in identifying risks associated with their use of suppliers, manufacturers, distributors and retailers (i.e. businesses that constitute their cyber supply chain),” and the Risk Management Guidance, because “[c]yber supply chain risk management can be achieved by identifying the cyber supply chain, understanding cyber supply chain risk, setting cyber security expectations, auditing for compliance, and monitoring and improving cyber supply chain security practices.”
  • The United Kingdom’s Surveillance Camera Commissioner (SCC) issued “best practice guidance, ‘Facing the Camera’, to all police forces in England and Wales.” The SCC explained that “[t]he provisions of this document only apply to the use of facial recognition technology and the inherent processing of images by the police where such use is integral to a surveillance camera system being operated in ‘live time’ or ‘near real time’ operational scenarios.” Last summer, a British appeals court overturned a decision that had found a police force’s use of facial recognition technology in a pilot program utilizing live footage to be legal. The appeals court found the use of this technology by the South Wales Police Force violated “the right to respect for private life under Article 8 of the European Convention on Human Rights, data protection legislation, and the Public Sector Equality Duty (“PSED”) under section 149 of the Equality Act 2010.” The SCC stated:
    • The SCC considers surveillance to be an intrusive investigatory power where it is conducted by the police which impacts upon those fundamental rights and freedoms of people, as set out by the European Convention of Human Rights (ECHR) and the Human Rights Act 1998. In the context of surveillance camera systems which make use of facial recognition technology, the extent of state intrusion in such matters is significantly increased by the capabilities of algorithms which are in essence, integral to the surveillance conduct seeking to harvest information, private information, metadata, data, personal data, intelligence and evidence. Each of the aforementioned are bound by laws and rules which ought to be separately and jointly considered and applied in a manner which is demonstrably lawful and ethical and engenders public trust and confidence.
    • Whenever the police seek to use technology in pursuit of a legitimate aim, the key question arises as to whether the degree of intrusion which is caused to the fundamental freedoms of citizens by the police surveillance conduct using surveillance algorithms (biometric or otherwise) is necessary in a democratic society when considered alongside the legality and proportionality of their endeavours and intent. The type of equipment/technology/modality which they choose to use to that end (e.g. LFR, ANPR, thermal imaging, gait analysis, movement sensors etc), the manner in which such technological means are deployed, (such as using static cameras at various locations, used with body worn cameras or other mobile means), and whether such technology is used overtly alongside or networked with other surveillance technologies, are all factors which may significantly influence the depth of intrusion caused by police conduct upon citizen’s rights.
  • The Senate confirmed the nomination of Avril Haines to be the new Director of National Intelligence by an 84-10 vote after Senator Tom Cotton (R-AR) removed his hold on her nomination. However, Senator Josh Hawley (R-MO) placed a hold on the nomination of Alejandro Mayorkas to be the next Secretary of Homeland Security and explained his action this way:
    • On Day 1 of his administration, President-elect Biden has said he plans to unveil an amnesty plan for 11 million immigrants in this nation illegally. This comes at a time when millions of American citizens remain out of work and a new migrant caravan has been attempting to reach the United States. Mr. Mayorkas has not adequately explained how he will enforce federal law and secure the southern border given President-elect Biden’s promise to roll back major enforcement and security measures. Just today, he declined to say he would enforce the laws Congress has already passed to secure the border wall system. Given this, I cannot consent to skip the standard vetting process and fast-track this nomination when so many questions remain unanswered.
  • Former Trump White House Cyber Coordinator Rob Joyce will replace the National Security Agency’s (NSA) Director of Cybersecurity Anne Neuberger, who has been named the Biden White House’s Deputy National Security Advisor for Cyber and Emerging Technology. Neuberger’s portfolio at the NSA included “lead[ing] NSA’s cybersecurity mission, including emerging technology areas like quantum-resistant cryptography,” and presumably Joyce will have the same responsibilities. Joyce was purged when former National Security Advisor John Bolton restructured the NSC in 2018, forcing out Joyce and his boss, former Homeland Security Advisor Tom Bossert. At the National Security Council, Neuberger will work to coordinate cybersecurity and emerging technology policy across agencies and funnel policy options up to the full NSC and ultimately the President, work that will include coordinating with Joyce.
  • The Supreme Court of the United States (SCOTUS) heard oral arguments on whether the Federal Trade Commission (FTC) Act gives the agency the power to seek monetary damages and restitution alongside permanent injunctions under Section 13(b). In AMG Capital Management, LLC v. FTC, the parties opposing the FTC argue the plain language of the statute does not allow for the seeking of restitution and monetary damages under this specific section of the FTC Act, while the agency argues long-accepted past practice and Congressional intent do, in fact, allow this relief to be sought when the FTC is seeking to punish violators of Section 5. The FTC is working on a separate track to get a fix from Congress, which could rewrite the FTC Act to make clear this sort of relief is legal. However, some stakeholders in the debate over privacy legislation may be using the case as leverage.
    • In October 2020, the FTC wrote the House and Senate committees with jurisdiction over the agency, asking for language to resolve the litigation over the power to seek and obtain restitution for victims of those who have violated Section 5 of the FTC Act and disgorgement of ill-gotten gains. The FTC is also asking that Congress clarify that the agency may act against violators even if their conduct has stopped, as the agency has done for more than four decades. Two federal appeals courts have ruled in ways that have limited the FTC’s long-used powers, and now the Supreme Court of the United States is set to rule on these issues in 2021. The FTC is claiming, however, that defendants are playing for time in the hopes that the FTC’s authority to seek and receive monetary penalties will ultimately be limited by the United States’ highest court. Judging by language tucked into a privacy bill introduced by the former chair of one of the committees, Congress may be willing to act soon.
    • The FTC asked the House Energy and Commerce and Senate Commerce, Science, and Transportation Committees “to take quick action to amend Section 13(b) [of the FTC Act i.e. 15 U.S.C. § 53(b)] to make clear that the Commission can bring actions in federal court under Section 13(b) even if conduct is no longer ongoing or impending when the suit is filed and can obtain monetary relief, including restitution and disgorgement, if successful.” The agency asserted “[w]ithout congressional action, the Commission’s ability to use Section 13(b) to provide refunds to consumer victims and to enjoin illegal activity is severely threatened.” All five FTC Commissioners signed the letter.
    • The FTC explained that adverse rulings by two federal appeals courts are preventing the agency from seeking relief for victims and punishment for violators of the FTC Act in the courts of those two circuits, while elsewhere defendants are either asking courts for similar rulings or using delaying tactics in the hope the Supreme Court upholds the two federal appeals courts:
      • …[C]ourts of appeals in the Third and Seventh Circuits have recently ruled that the agency cannot obtain any monetary relief under Section 13(b). Although review in the Supreme Court is pending, these lower court decisions are already inhibiting our ability to obtain monetary relief under 13(b). Not only do these decisions already prevent us from obtaining redress for consumers in the circuits where they issued, prospective defendants are routinely invoking them in refusing to settle cases with agreed-upon redress payments.
      • Moreover, defendants in our law enforcement actions pending in other circuits are seeking to expand the rulings to those circuits and taking steps to delay litigation in anticipation of a potential Supreme Court ruling that would allow them to escape liability for any monetary relief caused by their unlawful conduct. This is a significant impediment to the agency’s effectiveness, its ability to provide redress to consumer victims, and its ability to prevent entities who violate the law from profiting from their wrongdoing.
  • The United Kingdom’s Information Commissioner’s Office (ICO) issued guidance for British entities that may be affected by the massive SolarWinds hack that has compromised many key systems in the United States. The ICO advised the following (a brief version-checking sketch follows the list):
    • Organisations should immediately check whether they are using a version of the software that has been compromised. These are versions 2019.4 HF 5, 2020.2 with no hotfix installed, and 2020.2 HF 1.
    • Organisations must also determine if the personal data they hold has been affected by the cyber-attack. If a reportable personal data breach is found, UK data controllers are required to inform the ICO within 72 hours of discovering the breach. Reports can be submitted online or organisations can call the ICO’s personal data breach helpline for advice on 0303 123 1113, option 2.
    • Organisations subject to the NIS Regulation will also need to determine if this incident has led to a “substantial impact on the provision” of its digital services and report to the ICO.
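    • As a practical illustration of the first step above, the following is a minimal sketch (not part of the ICO guidance itself) of how an organisation might compare a locally reported SolarWinds Orion version string against the compromised builds the ICO lists. The version-gathering step and helper names are hypothetical.

```python
# Hedged sketch: flag SolarWinds Orion builds the ICO identifies as compromised.
# The version strings come from the ICO guidance quoted above; how the installed
# version is obtained (asset inventory, Orion console, etc.) is left to the reader.
COMPROMISED_VERSIONS = {
    "2019.4 HF 5",
    "2020.2",        # with no hotfix installed
    "2020.2 HF 1",
}

def is_compromised(installed_version: str) -> bool:
    """Return True if the installed Orion build appears on the ICO's list."""
    return installed_version.strip() in COMPROMISED_VERSIONS

# Illustrative value only; substitute the version reported by your deployment.
if is_compromised("2020.2 HF 1"):
    print("Compromised build detected: assess personal data impact and "
          "consider the 72-hour ICO breach-reporting obligation.")
```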
  • Europol announced the takedown of “the world’s largest illegal marketplace on the dark web” in an operation coordinated by the following nations: “Germany, Australia, Denmark, Moldova, Ukraine, the United Kingdom (the National Crime Agency), and the USA (DEA, FBI, and IRS).” Europol added:
    • The Central Criminal Investigation Department in the German city of Oldenburg arrested an Australian citizen who is the alleged operator of DarkMarket near the German-Danish border over the weekend. The investigation, which was led by the cybercrime unit of the Koblenz Public Prosecutor’s Office, allowed officers to locate and close the marketplace, switch off the servers and seize the criminal infrastructure – more than 20 servers in Moldova and Ukraine supported by the German Federal Criminal Police office (BKA). The stored data will give investigators new leads to further investigate moderators, sellers, and buyers. 
  • The Enforcement Bureau (Bureau) of the Federal Communications Commission (FCC) issued an enforcement advisory intended to remind people that using amateur and personal radios to commit crimes is itself a criminal offense that could warrant prosecution. The notice was issued because the FCC claims to be aware of discussions suggesting that these means of communication may be superior to social media platforms, which have been cracking down on extremist material since the attempted insurrection at the United States Capitol on 6 January. The Bureau stated:
    • The Bureau has become aware of discussions on social media platforms suggesting that certain radio services regulated by the Commission may be an alternative to social media platforms for groups to communicate and coordinate future activities.  The Bureau recognizes that these services can be used for a wide range of permitted purposes, including speech that is protected under the First Amendment of the U.S. Constitution.  Amateur and Personal Radio Services, however, may not be used to commit or facilitate crimes. 
    • Specifically, the Bureau reminds amateur licensees that they are prohibited from transmitting “communications intended to facilitate a criminal act” or “messages encoded for the purpose of obscuring their meaning.” Likewise, individuals operating radios in the Personal Radio Services, a category that includes Citizens Band radios, Family Radio Service walkie-talkies, and General Mobile Radio Service, are prohibited from using those radios “in connection with any activity which is against Federal, State or local law.” Individuals using radios in the Amateur or Personal Radio Services in this manner may be subject to severe penalties, including significant fines, seizure of the offending equipment, and, in some cases, criminal prosecution.
  • The European Data Protection Board (EDPB) issued its “Strategy for 2021-2023” in order “[t]o be effective in confronting the main challenges ahead.” The EDPB cautioned:
    • This Strategy does not provide an exhaustive overview of the work of the EDPB in the years to come. Rather it sets out the four main pillars of our strategic objectives, as well as set of key actions to help achieve those objectives. The EDPB will implement this Strategy within its Work Program, and will report on the progress achieved in relation to each Pillar as part of its annual reports.
    • The EDPB listed and explained the four pillars of its strategy:
      • PILLAR 1: ADVANCING HARMONISATION AND FACILITATING COMPLIANCE. The EDPB will continue to strive for a maximum degree of consistency in the application of data protection rules and limit fragmentation among Member States. In addition to providing practical, easily understandable and accessible guidance, the EDPB will develop and promote tools that help to implement data protection into practice, taking into account practical experiences of different stakeholders on the ground.
      • PILLAR 2: SUPPORTING EFFECTIVE ENFORCEMENT AND EFFICIENT COOPERATION BETWEEN NATIONAL SUPERVISORY AUTHORITIES. The EDPB is fully committed to support cooperation between all national supervisory authorities that work together to enforce European data protection law. We will streamline internal processes, combine expertise and promote enhanced coordination. We intend not only to ensure a more efficient functioning of the cooperation and consistency mechanisms, but also to strive for the development of a genuine EU-wide enforcement culture among supervisory authorities.
      • PILLAR 3: A FUNDAMENTAL RIGHTS APPROACH TO NEW TECHNOLOGIES. The protection of personal data helps to ensure that technology, new business models and society develop in accordance with our values, such as human dignity, autonomy and liberty. The EDPB will continuously monitor new and emerging technologies and their potential impact on the fundamental rights and daily lives of individuals. Data protection should work for all people, particularly in the face of processing activities presenting the greatest risks to individuals’ rights and freedoms (e.g. to prevent discrimination). We will help to shape Europe’s digital future in line with our common values and rules. We will continue to work with other regulators and policymakers to promote regulatory coherence and enhanced protection for individuals.
      • PILLAR 4: THE GLOBAL DIMENSION. The EDPB is determined to set and promote high EU and global standards for international data transfers to third countries in the private and the public sector, including in the law enforcement sector. We will reinforce our engagement with the international community to promote EU data protection as a global model and to ensure effective protection of personal data beyond EU borders.
  • The United Kingdom’s (UK) Information Commissioner’s Office (ICO) revealed that all but one of the videoconferencing platforms have responded to the July 2020 open letter it and other data protection authorities (DPAs) sent urging them to “adopt principles to guide them in addressing some key privacy risks.” The ICO explained:
    • Microsoft, Cisco, Zoom and Google replied to the open letter. The joint signatories thank these companies for engaging on this important matter and for acknowledging and responding to the concerns raised. In their responses the companies highlighted various privacy and security best practices, measures, and tools that they advise are implemented or built-in to their video teleconferencing services.
    • The information provided by these companies is encouraging. It is a constructive foundation for further discussion on elements of the responses that the joint signatories feel would benefit from more clarity and additional supporting information.
    • The ICO stated:
      • The joint signatories have not received a response to the open letter from Houseparty. They strongly encourage Houseparty to engage with them and respond to the open letter to address the concerns raised.
  • The European Union Agency for Cybersecurity (ENISA) “launched a public consultation, which runs until 7 February 2021, on its draft of the candidate European Union Cybersecurity Certification Scheme on Cloud Services (EUCS)…[that] aims to further improve the Union’s internal market conditions for cloud services by enhancing and streamlining the services’ cybersecurity guarantees.” ENISA stated:
    • There are challenges to the certification of cloud services, such as a diverse set of market players, complex systems and a constantly evolving landscape of cloud services, as well as the existence of different schemes in Member States. The draft EUCS candidate scheme tackles these challenges by calling for cybersecurity best practices across three levels of assurance and by allowing for a transition from current national schemes in the EU. The draft EUCS candidate scheme is a horizontal and technological scheme that intends to provide cybersecurity assurance throughout the cloud supply chain, and form a sound basis for sectoral schemes.
    • More specifically, the draft EUCS candidate scheme:
      • Is a voluntary scheme;
      • The scheme’s certificates will be applicable across the EU Member States;
      • Is applicable for all kinds of cloud services – from infrastructure to applications;
      • Boosts trust in cloud services by defining a reference set of security requirements;
      • Covers three assurance levels: ‘Basic’, ‘Substantial’ and ‘High’;
      • Proposes a new approach inspired by existing national schemes and international standards;
      • Defines a transition path from national schemes in the EU;
      • Grants a three-year certification that can be renewed;
      • Includes transparency requirements such as the location of data processing and storage.

Coming Events

  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on the nomination of Gina Raimondo to be the Secretary of Commerce on 26 January.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Peggy und Marco Lachmann-Anke from Pixabay

Further Reading, Other Developments, and Coming Events (5 January 2021)

Further Reading

  • “China Used Stolen Data To Expose CIA Operatives In Africa And Europe;” “Beijing Ransacked Data as U.S. Sources Went Dark in China;” “Tech Giants Are Giving China A Vital Edge In Espionage” By Zach Dorfman — Foreign Policy. This terrifying trio of articles lays bare the 180-degree reversal in espionage advantage, which the People’s Republic of China (PRC) now seems to hold over the United States (U.S.). Hacking, big data, processing, algorithms, and other technological issues play prominent roles in the PRC’s seeming advantage. It remains to be seen how the U.S. responds to the new status quo.
  • “Singapore police can access COVID-19 contact tracing data for criminal investigations” By Eileen Yu — ZDNet. During questioning in Singapore’s Parliament, it was revealed the police can use existing authority to access the data on a person’s smartphone collected by the nation’s TraceTogether app. Technically, this would entail a person being asked by the police to upload their data, which is stored on devices and encrypted. Nonetheless, this is the very scenario privacy advocates have been saying is all but inevitable with COVID-19 tracing apps on phones.
  • “As Understanding of Russian Hacking Grows, So Does Alarm” By David Sanger, Nicole Perlroth, and Julian Barnes — The New York Times. Like a detonated bomb, the Russian hack of United States (U.S.) public and private systems keeps getting worse in terms of damage and fallout. The scope continues to widen as it may come to pass that thousands of U.S. entities have been compromised in ways that leave them vulnerable to future attacks. Incidentally, the massive hack has tarnished somewhat the triumph of the U.S. intelligence agencies in fending off interference with the 2020 election.
  • “Google workers launch unconventional union with help of Communications Workers of America” By Nitasha Tiku — The Washington Post. A new union formed at Google stopped short of seeking certification by the National Labor Relations Board (NLRB), which will block it from collective bargaining. Nonetheless, the new union will collect dues and have a board of directors. This may lead to additional unionizing efforts in union-averse Silicon Valley and throughout the tech world.
  • “‘Break up the groupthink’: Democrats press Biden to diversify his tech picks” By Cristiano Lima — Politico. Key Democratic groups in the House are pushing the Biden team to appoint people of color to key technology positions at agencies such as the Federal Trade Commission (FTC), the Federal Communications Commission (FCC), and the Office of Science and Technology Policy (OSTP).

Other Developments

  • The Congress overrode President Donald Trump’s veto of the FY 2021 National Defense Authorization Act (NDAA), thus enacting the annual defense and national security policy bill, which includes a number of technology provisions that will have effects in the public and private sectors. (See here and here for analysis of these provisions in the “William M. “Mac” Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R. 6395).)
  • A federal court dismissed a lawsuit brought by a civil liberties and privacy advocacy group to stop implementation of President Donald Trump’s executive order aimed at social media companies and their liability protection under 47 USC 230 (aka Section 230). In June, the Center for Democracy and Technology (CDT) filed suit in federal court to block enforcement of the “Executive Order (EO) on Preventing Online Censorship.” However, the United States District Court for the District of Columbia ruled that CDT is not injured by the executive order (EO) and any such lawsuit is premature. The court dismissed the lawsuit for lack of jurisdiction.
    • In its complaint, CDT argued the EO “violates the First Amendment in two fundamental respects:
      • First, the Order is plainly retaliatory: it attacks a private company, Twitter, for exercising its First Amendment right to comment on the President’s statements.
      • Second, and more fundamentally, the Order seeks to curtail and chill the constitutionally protected speech of all online platforms and individuals— by demonstrating the willingness to use government authority to retaliate against those who criticize the government.”
  • The Federal Trade Commission (FTC) reached a settlement with a company that sells emergency travel and medical services for failing “to take reasonable steps to secure sensitive consumer information such as health records,” including having an unsecured cloud database, which a security researcher stumbled upon, containing the sensitive data of more than 130,000 people. Moreover, the company claimed a certification of compliance with the Health Insurance Portability and Accountability Act (HIPAA), which turned out to be untrue. In the complaint, the FTC alleged that these and other practices “constitute unfair and/or deceptive acts or practices, in or affecting commerce in violation of Section 5(a) of the Federal Trade Commission Act.” The FTC and the company reached agreement on a consent order that will require the company’s compliance for at least 20 years.
    • In the complaint, the FTC stated that SkyMed “advertises, offers for sale, and sells nationwide a wide array of emergency travel membership plans that cover up to eighteen different emergency travel and medical evacuation services for members who sustain serious illnesses or injuries during travel in certain geographic areas.”
    • The FTC asserted a security researcher discovered SkyMed’s “database, which could be located and accessed by anyone on the internet, contained approximately 130,000 membership records with consumers’ personal information stored in plain text, including information populated in certain fields for names, dates of birth, gender, home addresses, email addresses, phone numbers, membership information and account numbers, and health information.”
    • The FTC noted the company told affected customers that it had investigated and “[t]here was no medical or payment-related information visible and no indication that the information has been misused.” This turns out to be completely false, and the company’s “investigation did not determine that consumers’ health information was neither stored on the cloud database, nor improperly accessed by an unauthorized third party.”
    • The FTC summarized the terms of the consent order and SkyMed’s obligations:
      • Under the proposed settlement, SkyMed is prohibited from misrepresenting how it secures personal data, the circumstances of and response to a data breach, and whether the company has been endorsed by or participates in any government-sponsored privacy or security program. The company also will be required to send a notice to affected consumers detailing the data that was exposed by the data breach.
      • As part of the mandated information security program, the company must identify and document potential internal and external risks and design, implement, and maintain safeguards to protect personal information it collects from those risks. In addition, SkyMed must obtain biennial assessments of its information security program by a third party, which the FTC has authority to approve, to examine the effectiveness of SkyMed’s information security program, identify any gaps or weaknesses, and monitor efforts to address these problems. The settlement also requires a senior SkyMed executive to certify annually that the company is complying with the requirements of the settlement.
  • The European Commission (EC) has communicated its vision for a new cybersecurity strategy to the European Parliament and European Council “to ensure a global and open Internet with strong guardrails to address the risks to the security and fundamental rights and freedoms of people in Europe.” The EC spelled out its dramatic plan to remake how the bloc regulates, invests in, and structures policies around cybersecurity. The EC claimed “[a]s a key component of Shaping Europe’s Digital Future, the Recovery Plan for Europe  and the EU Security Union Strategy, the Strategy will bolster Europe’s collective resilience against cyber threats and help to ensure that all citizens and businesses can fully benefit from trustworthy and reliable services and digital tools.” If the European Union (EU) follows through, this strategy may have significant effects in the EU and around the world. The EC further explained:
    • Following the progress achieved under the previous strategies, it contains concrete proposals for deploying three principal instruments –regulatory, investment and policy instruments – to address three areas of EU action – (1) resilience, technological sovereignty and leadership, (2) building operational capacity to prevent, deter and respond, and (3) advancing a global and open cyberspace. The EU is committed to supporting this strategy through an unprecedented level of investment in the EU’s digital transition over the next seven years – potentially quadrupling previous levels – as part of new technological and industrial policies and the recovery agenda
    • Cybersecurity must be integrated into all these digital investments, particularly key technologies like Artificial Intelligence (AI), encryption and quantum computing, using incentives, obligations and benchmarks. This can stimulate the growth of the European cybersecurity industry and provide the certainty needed to ease the phasing out of legacy systems. The European Defence Fund (EDF) will support European cyber defence solutions, as part of the European defence technological and industrial base. Cybersecurity is included in external financial instruments to support our partners, notably the Neighbourhood, Development and International Cooperation Instrument. Preventing the misuse of technologies, protecting critical infrastructure and ensuring the integrity of supply chains also enables the EU’s adherence to the UN norms, rules and principles of responsible state behavior.
    • With respect to actions that might be taken, the EC stated that “[t]he EU should ensure:
      • Adoption of revised NIS Directive;
      • Regulatory measures for an Internet of Secure Things
      • Through the CCCN investment in cybersecurity (notably through the Digital Europe Programme, Horizon Europe and recovery facility) to reach up to €4.5 billion in public and private investments over 2021-2027;
      • An EU network of AI-enabled Security Operation Centres and an ultra-secure communication infrastructure harnessing quantum technologies;
      • Widespread adoption of cybersecurity technologies through dedicated support to SMEs under the Digital Innovation Hubs;
      • Development of an EU DNS resolver service as a safe and open alternative for EU citizens, businesses and public administration to access the Internet; and
      • Completion of the implementation of the 5G Toolbox by the second quarter of 2021
      • Complete the European cybersecurity crisis management framework and determine the process, milestones and timeline for establishing the Joint Cyber Unit;
      •  Continue implementation of cybercrime agenda under the Security Union Strategy;
      • Encourage and facilitate the establishment of a Member States’ cyber intelligence working group residing within the EU INTCEN;
      • Advance the EU’s cyber deterrence posture to prevent, discourage, deter and respond to malicious cyber activities;
      • Review the Cyber Defence Policy Framework;
      • Facilitate the development of an EU “Military Vision and Strategy on Cyberspace as a Domain of Operations” for CSDP military missions and operations;
      • Support synergies between civil, defence and space industries; and
      • Reinforce cybersecurity of critical space infrastructures under the Space Programme.
      • Define a set of objectives in international standardisation processes, and promote these at international level;
      • Advance international security and stability in cyberspace, notably through the proposal by the EU and its Member States for a Programme of Action to Advance Responsible State Behaviour in Cyberspace (PoA) in the United Nations;
      • Offer practical guidance on the application of human rights and fundamental freedoms in cyberspace;
      • Better protect children against child sexual abuse and exploitation, as well as a Strategy on the Rights of the Child;
      • Strengthen and promote the Budapest Convention on Cybercrime, including through the work on the Second Additional Protocol to the Budapest Convention;
      • Expand EU cyber dialogue with third countries, regional and international organisations, including through an informal EU Cyber Diplomacy Network;
      • Reinforce the exchanges with the multi-stakeholder community, notably by regular and structured exchanges with the private sector, academia and civil society; and
      • Propose an EU External Cyber Capacity Building Agenda and an EU Cyber Capacity Building Board.
  • The U.S.-China Economic and Security Review Commission released its annual report on the People’s Republic of China (PRC) per its mandate “to monitor, investigate, and report to Congress on the national security implications of the bilateral trade and economic relationship between the United States and the People’s Republic of China.” The Commission argued:
    • Left unchecked, the PRC will continue building a new global order anathema to the interests and values that have underpinned unprecedented economic growth and stability among nations in the post-Cold War era. The past 20 years are littered with the Chinese Communist Party’s (CCP) broken promises. In China’s intended new order, there is little reason to believe CCP promises of “win-win” solutions, mutual respect, and peaceful coexistence. A clear understanding of the CCP’s adversarial national security and economic ambitions is essential as U.S. and allied leaders develop the policies and programs that will define the conditions of global freedom and shape our future.
    • The Commission made ten “Key Recommendations:”
      • Congress adopt the principle of reciprocity as foundational in all legislation bearing on U.S.-China relations.
      • Congress expand the authority of the Federal Trade Commission (FTC) to monitor and take foreign government subsidies into account in premerger notification processes.
      • Congress direct the U.S. Department of State to produce an annual report detailing China’s actions in the United Nations and its subordinate agencies that subvert the principles and purposes of the United Nations.
      • Congress hold hearings to consider the creation of an interagency executive Committee on Technical Standards that would be responsible for coordinating U.S. government policy and priorities on international standards.
      • Congress consider establishing a “Manhattan Project”-like effort to ensure that the American public has access to safe and secure supplies of critical lifesaving and life-sustaining drugs and medical equipment, and to ensure that these supplies are available from domestic sources or, where necessary, trusted allies.
      • Congress enact legislation establishing a China Economic Data Coordination Center (CEDCC) at the Bureau of Economic Analysis at the U.S. Department of Commerce.
      • Congress direct the Administration, when sanctioning an entity in the People’s Republic of China for actions contrary to the economic and national security interests of the United States or for violations of human rights, to also sanction the parent entity.
      • Congress consider enacting legislation to make the Director of the American Institute in Taiwan a presidential nomination subject to the advice and consent of the United States Senate.
      • Congress amend the Immigration and Nationality Act to clarify that association with a foreign government’s technology transfer programs may be considered grounds to deny a nonimmigrant visa if the foreign government in question is deemed a strategic competitor of the United States, or if the applicant has engaged in violations of U.S. laws relating to espionage, sabotage, or export controls.
      • Congress direct the Administration to identify and remove barriers to receiving United States visas for Hong Kong residents attempting to exit Hong Kong for fear of political persecution.
  • The Electronic Privacy Information Center, the Center for Digital Democracy, the Campaign for a Commercial-Free Childhood, the Parent Coalition for Student Privacy, and the Consumer Federation of America submitted comments to the Federal Trade Commission (FTC) on its settlement with Zoom “to recommend specific changes to the proposed Consent Order to safeguard the privacy interests of Zoom users.” In November, the FTC split along party lines to approve a settlement with Zoom to resolve allegations that the video messaging platform violated the FTC Act’s ban on unfair and deceptive practices in commerce. Zoom agreed to a consent order mandating a new information security program, third-party assessments, prompt reporting of covered incidents, and other requirements over a period of 20 years. The two Democratic Commissioners, Rohit Chopra and Rebecca Kelly Slaughter, voted against the settlement and dissented for reasons that may be summed up: the FTC let Zoom off with a slap on the wrist, did not punish the abundant wrongdoing, and will not dissuade future offenders. Kelly Slaughter focused on the majority’s choice to ignore the privacy implications of Zoom’s misdeeds, especially by not including any requirements that Zoom improve its faulty privacy practices.
    • The groups “recommend that the FTC modify the proposed Consent Order and require Zoom to (1) implement a comprehensive privacy program; (2) obtain regular independent privacy assessments and make those assessments available to the public; (3) provide meaningful redress for victims of Zoom’s unfair and deceptive trade practices; and (4) ensure the adequate protection and limits on the collection of children’s data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting. The agency has placed the following items on its tentative agenda, noting that “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Free-Photos from Pixabay

FTC Settlement with Zoom

The FTC again splits on a data security and privacy action. The popular online video call platform needs to revamp its data security practices or face considerable future liability.

The Federal Trade Commission (FTC) split along party lines to approve a settlement with Zoom to resolve allegations that the video messaging platform violated the FTC Act’s ban on unfair and deceptive practices in commerce. Zoom agreed to a consent order mandating a new information security program, third party assessment, prompt reporting of covered incidents and other requirements over a period of 20 years. The two Democratic Commissioners voted against the settlement and dissented because they argued it did not punish the abundant wrongdoing and will not dissuade future offenders.

In the complaint, the FTC asserted there is evidence proving that Zoom lied about its claims that it used end-to-end encryption (it didn’t), used AES 256-bit encryption (it used 128-bit encryption, a weaker level of protection than promised), and stored recordings encrypted on its servers (it didn’t until 60 days after recording, when they were moved to the cloud and encrypted). The FTC labeled each of these a deceptive practice that violated Section 5 of the FTC Act and provided extensive evidence that Zoom committed all these offenses. But the worst violation may have been Zoom’s decision to circumvent Apple’s security feature on its browser Safari in the interest of allowing people to join a call with one click. Apple had installed a feature on Safari and other of its applications that notifies users when a clicked link (like one to a Zoom call) is going to take the person to a website or launch a non-Apple app. This feature was designed to address attacks via malware that launches automatically upon clicking a link, or attackers seeking to penetrate a computer the same way. Apparently, Zoom did not like this, so the company essentially designed malware that defeated this feature of Safari and placed it on the computers of Mac users without notice or consent. The FTC called this a fraudulent act in violation of Section 5.
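To make the encryption claims concrete, the following is a minimal sketch (not Zoom’s actual code or protocol) showing that the difference between AES-128 and AES-256 comes down to key length: 16 bytes versus 32 bytes. It uses the Python cryptography package’s AES-GCM primitive purely for illustration; the function name and sample payload are hypothetical.

```python
# Hedged illustration of AES key sizes; this is not Zoom's implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_frame(plaintext: bytes, key_bits: int):
    """Encrypt a payload with AES-GCM using a key of the given bit length."""
    key = AESGCM.generate_key(bit_length=key_bits)  # 128 or 256
    nonce = os.urandom(12)                          # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext

key128, _, _ = encrypt_frame(b"example meeting data", 128)  # 16-byte key
key256, _, _ = encrypt_frame(b"example meeting data", 256)  # 32-byte key
print(len(key128) * 8, len(key256) * 8)  # prints: 128 256
```

Advertising the larger key size while shipping the smaller one is the gap the FTC characterized as deceptive.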

The FTC also found systemic data security vulnerabilities in the company’s internal network that would have allowed malicious actors untrammeled access to sensitive user information. Moreover, contractors and service providers with access to Zoom’s networks were not subject to oversight. Also, software patches were not applied in a timely fashion, making it all the more likely that malicious parties could penetrate the company’s networks.

The FTC drafted a consent order, which Zoom signed without admitting any guilt, that requires the company to honestly represent its security practices, implement an effective information security program, undergo periodic third-party assessments, submit an annual certification of compliance, send compliance reports to the agency, and alert the FTC of any breach of Zoom’s security that affects more than 500 users and triggers reporting requirements to a federal or state agency. As mentioned, this consent order is to last for 20 years, and in the event of any violations, the FTC can go to court and seek monetary damages for Zoom being in contempt of the order. This is the usual means by which the FTC obtains civil fines, and the method by which the agency reached its $5 billion settlement with Facebook. All in all, this consent order is par for the course for the FTC.

Commissioners Rohit Chopra and Rebecca Kelly Slaughter dissented for a variety of reasons that may be summed up: the FTC let Zoom off with a slap on the wrist.

In his dissent, Chopra accused the majority of not using the full extent of its powers to help the people and businesses that had been harmed by Zoom’s actions and not setting an example to deter both future bad acts by others and by Zoom itself. Chopra characterized the Zoom settlement as being the latest in a long string of ineffectual consent orders that will fail to change the behavior of companies in the digital markets. Chopra called on the agency to use rarely utilized powers, notably through a rulemaking spelling out the practices the FTC will find deceptive and unfair, which would allow the agency to pursue civil fines in the first instance and also put companies on notice about what is allowed and what is not. Chopra also called for structural changes at the agency to increase its effectiveness. Kelly Slaughter focused on the majority’s choice to ignore the privacy implications of Zoom’s misdeeds, especially by not including any requirements that Zoom improve its faulty privacy practices.

To no great surprise, the majority disagreed with Chopra and Kelly Slaughter, trumpeting the settlement as “ensur[ing] that Zoom will prioritize consumers’ privacy and security.” The majority also asserted:

Our dissenting colleagues suggest additional areas for relief that likely would require protracted litigation to obtain. Given the effective relief this settlement provides, we see no need for that….We feel it is important to put in place measures to protect those users’ privacy and security now, rather than expend scarce staff resources on speculative, potential relief that a Court would not likely grant, given the facts here.

Incidentally, the majority’s primary rationale for not seeking more comprehensive punishment of Zoom and relief and redress for businesses and consumers lays bare the reason why any federal privacy regime may prove to be a toothless tiger. The majority reasoned that the FTC did the best it could because going to court would entail the risk that Zoom, given its resources, would prevail, and even if the agency won, it would still burn through precious agency resources. As I have argued before, if people are not given the right to vindicate their rights in court, then absent a major infusion of money and authority into the FTC, a federal privacy law will fail to achieve the goal of increasing privacy in the digital world. And this failure will occur because of the incentives. If a multi-billion-dollar corporation like Zoom gives the FTC night sweats about pursuing what appears to be an open-and-shut case given the egregious violations of the FTC Act, then the biggest players in the market will continue doing what they are currently doing, with some changes in order to at least nod to a new law. The FTC, however, will lack the means and the will to punish enough violators to change their behavior, the ultimate goal of any statutory scheme.

As it happened, the FTC also announced its consent order against Sunday Riley and its namesake CEO for posting fake reviews of the company’s cosmetic products on the website of retailer Sephora. Sunday Riley executives and employees created fake accounts to post glowing reviews and, after Sephora removed reviews it suspected were fake, used a VPN to hide their identities and keep posting. CEO Sunday Riley also directed employees to create three different fake accounts for this purpose. The consent order bars Sunday Riley and the named parties from making misrepresentations about the company’s products and requires them to disclose material connections in advertising and related practices. This case does not pertain to data security and privacy, but Chopra and Kelly Slaughter voted against the consent order and asserted in dissent, much as they did in the Zoom case:

  • The FTC is doubling down on its no-money, no-fault settlement with Sunday Riley, who was charged with egregious fake review fraud. This weak settlement is a serious setback for the Commission’s credibility as a watchdog over digital markets.
  • To defend this settlement, the Commissioners supporting this outcome claim they had no basis to seek more than $0. Their analytical approach favors the fraudster, and it will undermine our mission in future cases.
  • The Commission can end its no-consequences settlement policy by publishing a Policy Statement on Equitable Monetary Remedies, restating legal precedent into formal rules, and designating specific misconduct as penalty offenses through an unused FTC Act authority.

FTC Chair Joseph Simons and Commissioners Noah Joshua Phillips and Christine S. Wilson made the case in their statement:

  • Every case presents unique circumstances, and there are many factors that must be considered in determining what constitutes an appropriate settlement. The primary factor is the law. For example, to obtain monetary relief, the Commission must have a viable legal basis to demonstrate consumer injury or ill-gotten gains from the alleged violations. In some cases, such as frauds where the consumer receives no value, this calculation may be obvious. In others, including Sunday Riley, a legally defensible calculation of ill-gotten gains may be difficult. In such cases, the expenditure of resources needed to develop an adequate evidentiary basis reasonably to approximate ill-gotten gains may substantially outweigh any benefits to consumers and the market. We believe the Commission’s order strikes the right balance.
  • The relief obtained in this case is consequential and will provide both specific and general deterrence. The administrative order binds Sunday Riley and its CEO.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Anna Shvets from Pexels

Further Reading, Other Developments, and Coming Events (21 August)

Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” Comments are due by 21 August; the FTC “is seeking comment on a range of issues including:
    • How are companies currently implementing data portability? What are the different contexts in which data portability has been implemented?
    • What have been the benefits and costs of data portability? What are the benefits and costs of achieving data portability through regulation?
    • To what extent has data portability increased or decreased competition?
    • Are there research studies, surveys, or other information on the impact of data portability on consumer autonomy and trust?
    • Does data portability work better in some contexts than others (e.g., banking, health, social media)? Does it work better for particular types of information over others (e.g., information the consumer provides to the business vs. all information the business has about the consumer, information about the consumer alone vs. information that implicates others such as photos of multiple people, comment threads)?
    • Who should be responsible for the security of personal data in transit between businesses? Should there be data security standards for transmitting personal data between businesses? Who should develop these standards?
    • How do companies verify the identity of the requesting consumer before transmitting their information to another company?
    • How can interoperability among services best be achieved? What are the costs of interoperability? Who should be responsible for achieving interoperability?
    • What lessons and best practices can be learned from the implementation of the data portability requirements in the GDPR and CCPA? Has the implementation of these requirements affected competition and, if so, in what ways?”
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The National Institute of Standards and Technology (NIST) published for public comment Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), in which the authors stated:
    • We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science,  engineering, and psychology. Because one size fits all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights to the challenges of designing explainable AI systems.
    • NIST said “our four principles of explainable AI are” (a brief, illustrative sketch of these principles appears at the end of this Other Developments section):
      • Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
      • Meaningful: Systems provide explanations that are understandable to individual users.
      • Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
      • Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
    • A year ago, NIST published “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” as required by Executive Order (EO) 13859, Maintaining American Leadership in Artificial Intelligence, which set an August 10, 2019 deadline for the plan.
      • NIST explained that “[t]here are a number of cross-sector (horizontal) and sector-specific (vertical) AI standards available now and many others are being developed by numerous standards developing organizations (SDOs)…[and] [s]ome areas, such as communications, have well-established and regularly maintained standards in widespread use, often originally developed for other technologies. Other aspects, such as trustworthiness, are only now being considered.” NIST explained that its AI plan “identifies the following nine areas of focus for AI standards: 
        • Concepts and terminology
        • Data and knowledge 
        • Human interactions 
        • Metrics
        • Networking
        • Performance testing and reporting methodology
        • Safety
        • Risk management
        • Trustworthiness
      • NIST asserted that “[i]n deciding which standards efforts merit strong Federal government involvement, U.S. government agencies should prioritize AI standards efforts that are:
        • Consensus-based, where decision-making is based upon clearly established terms or agreements that are understood by all involved parties, and decisions are reached on general agreement.
        • Inclusive and accessible, to encourage input reflecting diverse and balanced communities of users, developers, vendors, and experts. Stakeholders should include representatives from diverse technical disciplines as well as experts and practitioners from non-traditional disciplines of special importance to AI such as ethicists, economists, legal professionals, and policy makers: essentially, accommodating all desiring a “seat at the table.”
        • Multi-path, developed through traditional and novel standards-setting approaches and organizations that best meet the needs of developers and users in the marketplace as well as society at large.
        • Open and transparent, operating in a manner that: provides opportunity for participation by all directly- and materially- affected; has well-established and readily accessible operating rules, procedures, and policies that provide certainty about decision making processes; allows timely feedback for further consideration of the standard; and ensures prompt availability of the standard upon adoption.
        • Result in globally relevant and non-discriminatory standards, where standards avoid becoming non-tariff trade barriers or locking in particular technologies or products.
  • Consumer Watchdog has sued Zoom Video Communications “for making false and deceptive representations to consumers about its data security practices in violation of the District of Columbia Consumer Protection Procedures Act (CPPA).” The advocacy organization asserted
    • To distinguish itself from competitors and attract new customers, Zoom began advertising and touting its use of a strong security feature called “end-to-end encryption” to protect communications on its platform, meaning that the only people who can access the communicated data are the sender and the intended recipient. Using end-to-end encryption prevents unwanted third parties—including the company that owns the platform (in this case, Zoom)—from accessing communications, messages, and data transmitted by users.
    • Unfortunately, Zoom’s claims that communications on its platform were end-to-end encrypted were false. Zoom only used the phrase “end-to-end encryption” as a marketing device to lull consumers and businesses into a false sense of security.
    • The reality is that Zoom is, and has always been, capable of intercepting and accessing any and all of the data that users transmit on its platform—the very opposite of end-to-end encryption. Nonetheless, Zoom relied on its end-to-end encryption claim to attract customers and to build itself into a publicly traded company with a valuation of more than $70 billion.
    • Consumer Watchdog is seeking the greater of treble damages or $1,500 per violation along with other relief
    • Zoom is being sued in a number of other cases, including two class action suits in United States courts in Northern California (#1 and #2).
  • The United States (U.S.) Government Accountability Office (GAO) decided the Trump Administration violated the order of succession at the U.S. Department of Homeland Security by naming Customs and Border Protection (CBP) Commissioner Kevin McAleenan the acting Secretary after former Secretary Kirstjen Nielsen resigned early in 2019. The agency’s existing order of succession made clear that Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs was next in line to lead DHS. The GAO added “[a]s such, the subsequent appointments of Under Secretary for Strategy, Policy, and Plans, Chad Wolf and Principal Deputy Director of U.S. Citizenship and Immigration Services (USCIS) Ken Cuccinelli were also improper because they relied on an amended designation made by Mr. McAleenan.”
    • However, GAO is punting the question of what the implications of its findings are:
      • In this decision we do not review the consequences of Mr. McAleenan’s service as Acting Secretary, other than the consequences of the November delegation, nor do we review the consequences of Messrs. Wolf’s and Cuccinelli’s service as Acting Secretary and Senior Official Performing the Duties of Deputy Secretary respectively.
      • We are referring the question as to who should be serving as the Acting Secretary and the Senior Official Performing the Duties of Deputy Secretary to the DHS Office of Inspector General for its review.
      • We also refer to the Inspector General the question of consequences of actions taken by these officials, including consideration of whether actions taken by these officials may be ratified by the Acting Secretary and Senior Official Performing the Duties of Deputy Secretary as designated in the April Delegation.
    • The GAO also denied DHS’s request to rescind this opinion because “DHS has not shown that our decision contains either material errors of fact or law, nor has DHS provided information not previously considered that warrants reversal or modification of the decision.”
    • The chairs of the House Homeland Security and Oversight and Reform Committees had requested the GAO legal opinion and claimed in their press release the opinion “conclud[es] that President Donald Trump’s appointments to senior leadership positions at the Department of Homeland Security were illegal and circumvented both the Federal Vacancy Reform Act and the Homeland Security Act.”
  • Top Democrats on the House Energy and Commerce Committee wrote the members of the Facebook Oversight Board expressing their concern the body “does not have the power it needs to change Facebook’s harmful policies.” Chair Frank Pallone, Jr. (D-NJ), Communications and Technology Subcommittee Chair Mike Doyle (D-PA) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) “encouraged the newly appointed members to exert pressure on Facebook to listen to and act upon their policy recommendations, something that is not currently included in the Board Members’ overall responsibilities.” They asserted:
    • The Committee leaders believe Facebook is intentionally amplifying divisive and conspiratorial content because such content attracts more customer usage and, with it, advertising revenue. Pallone, Doyle and Schakowsky were also troubled by recent reports that Facebook had an opportunity to retune its systems responsible for the amplification of this content, but chose not to. 
    • The three Committee leaders wrote that the public interest should be the Oversight Board’s priority and that it should not be influenced by the profit motives of Facebook executives. Pallone, Doyle and Schakowsky also requested the board members answer a series of questions in the coming weeks.
  • The United States (U.S.) Government Accountability Office (GAO) examined how well the United States Department of Homeland Security and selected federal agencies are implementing a cybersecurity program designed to give the government better oversight and control of their networks. In auditing the Continuous Diagnostics and Mitigation (CDM) program, the GAO found limited success and ongoing, systemic roadblocks preventing increased levels of security. DHS has estimated the program will cost $10.9 billion over ten years.
    • The GAO concluded
      • Selected agencies reported that the CDM program had helped improve their awareness of hardware on their networks. However, although the program has been in existence for several years, these agencies had only implemented the foundational capability for managing hardware to a limited extent, including not associating hardware devices with FISMA systems. In addition, while most agencies implemented requirements for managing software, all of them inconsistently implemented requirements for managing configuration settings. Moreover, poor data quality resulting from these implementation shortcomings diminished the usefulness of agency dashboards to support security-related decision making. Until agencies fully and effectively implement CDM program capabilities, including the foundational capability of managing hardware on their networks, agency and federal dashboards will not accurately reflect agencies’ security posture. Part of the reason that agencies have not fully implemented key CDM requirements is that DHS had not ensured integrators had addressed shortcomings with integrators’ CDM solutions for managing hardware and vulnerabilities. Although DHS has taken various actions to address challenges identified by agencies, without further assistance from DHS in helping agencies overcome implementation shortcomings, the program—costing billions of dollars— will likely not fully achieve expected benefits.
    • The chairs and ranking members of the Senate Homeland Security & Governmental Affairs and House Homeland Security Committees, the chair of the House Oversight and Reform Committee, and other Members requested that the GAO study and report on this issue.
  • Google and the Australian Competition and Consumer Commission (ACCC) have exchanged public letters, fighting over the latter’s proposal to ensure that media companies are compensated for articles and content the former uses.
    • In an Open Letter to Australians, Google claimed:
      • A proposed law, the News Media Bargaining Code, would force us to provide you with a dramatically worse Google Search and YouTube, could lead to your data being handed over to big news businesses, and would put the free services you use at risk in Australia.
      • You’ve always relied on Google Search and YouTube to show you what’s most relevant and helpful to you. We could no longer guarantee that under this law. The law would force us to give an unfair advantage to one group of businesses – news media businesses – over everyone else who has a website, YouTube channel or small business. News media businesses alone would be given information that would help them artificially inflate their ranking over everyone else, even when someone else provides a better result. We’ve always treated all website owners fairly when it comes to information we share about ranking. The proposed changes are not fair and they mean that Google Search results and YouTube will be worse for you.
      • You trust us with your data and our job is to keep it safe. Under this law, Google has to tell news media businesses “how they can gain access” to data about your use of our products. There’s no way of knowing if any data handed over would be protected, or how it might be used by news media businesses.
      • We deeply believe in the importance of news to society. We partner closely with Australian news media businesses — we already pay them millions of dollars and send them billions of free clicks every year. We’ve offered to pay more to license content. But rather than encouraging these types of partnerships, the law is set up to give big media companies special treatment and to encourage them to make enormous and unreasonable demands that would put our free services at risk.
    • In its response, the ACCC asserted:
      • The open letter published by Google today contains misinformation about the draft news media bargaining code which the ACCC would like to address. 
      • Google will not be required to charge Australians for the use of its free services such as Google Search and YouTube, unless it chooses to do so.
      • Google will not be required to share any additional user data with Australian news businesses unless it chooses to do so.
      • The draft code will allow Australian news businesses to negotiate for fair payment for their journalists’ work that is included on Google services.
      • This will address a significant bargaining power imbalance between Australian news media businesses and Google and Facebook.
    • Late last month, the ACCC released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks broke down between the Australian Treasury and the companies.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses,” according to the agency’s press release. In its initial filing, the ACCC claimed that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services were unaware their location information was still being collected and used by Google, as it was not readily apparent that the Web & App Activity setting also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States Coast Guard is asking for information on “the introduction and development of automated and autonomous commercial vessels and vessel technologies subject to U.S. jurisdiction, on U.S. flagged commercial vessels, and in U.S. port facilities.” The Coast Guard is particularly interested in the “barriers to the development of autonomous vessels.” The agency stated
    • On February 11, 2019, the President issued Executive Order (E.O.) 13859, “Maintaining American Leadership in Artificial Intelligence.” The executive order announced the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in artificial intelligence (AI) research and development and deployment through a coordinated Federal Government strategy. Automation is a broad category that may or may not incorporate many forms of technology, one of which is AI. This request for information (RFI) will support the Coast Guard’s efforts to accomplish its mission consistent with the policies and strategies articulated in E.O. 13859. Input received from this RFI will allow the Coast Guard to better understand, among other things, the intersection between AI and automated or autonomous technologies aboard commercial vessels, and to better fulfill its mission of ensuring our Nation’s maritime safety, security, and stewardship.
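Referring back to the NIST explainable AI item above: to make the four principles less abstract, here is a deliberately simple, hypothetical sketch in Python. The scenario, names, cutoff, and confidence score are invented for illustration and are not drawn from the NIST draft; the sketch only shows one way a system could pair every output with an explanation (Explanation) phrased for its user (Meaningful), keep the stated reason faithful to the rule actually applied (Explanation Accuracy), and abstain outside its design conditions (Knowledge Limits). Real systems would derive explanations from far more complex models; the point is only the shape of the output.

```python
# A toy, non-authoritative illustration of NIST's four explainable AI principles
# using a hypothetical credit-decision rule. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Decision:
    output: str        # the system's output ("approve", "deny", or "abstain")
    explanation: str   # Principle 1 (Explanation): reasons accompany every output
    confidence: float

CUTOFF = 0.35          # invented debt-to-income approval cutoff

def decide(income: float, debt: float) -> Decision:
    if income <= 0:
        # Principle 4 (Knowledge Limits): refuse to answer outside designed conditions.
        return Decision("abstain", "Income must be positive; input is outside the model's design range.", 0.0)
    ratio = debt / income
    confidence = min(1.0, abs(ratio - CUTOFF) / 0.25)  # crude distance-from-cutoff score
    if confidence < 0.5:
        # Also a Knowledge Limits case: too close to the cutoff to answer reliably.
        return Decision("abstain", f"Debt-to-income ratio {ratio:.2f} is too close to the cutoff {CUTOFF} to decide confidently.", confidence)
    output = "approve" if ratio < CUTOFF else "deny"
    # Principle 2 (Meaningful): wording aimed at the end user.
    # Principle 3 (Explanation Accuracy): the stated reason is the rule actually applied.
    return Decision(output, f"Debt-to-income ratio is {ratio:.2f}; the approval cutoff is {CUTOFF}.", confidence)

print(decide(income=60_000, debt=6_000))
```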

Further Reading

  • ‘Boring and awkward’: students voice concern as colleges plan to reopen – through Minecraft” By Kari Paul – The Guardian. A handful of universities in the United States (U.S.) are offering students access to customized versions of Minecraft, an online game that allows players to build worlds. The aim seems to be to allow students to socialize online in replicas of their campuses. The students interviewed for this story seemed underwhelmed by the effort, however.
  • When regulators fail to rein in Big Tech, some turn to antitrust litigation” – By Reed Albergotti and Jay Greene – The Washington Post. This article places Epic Games’ suits against Apple and Google into the larger context of companies availing themselves of the right to sue under United States antitrust laws. For a number of reasons, these suits have not often succeeded, and one legal commentator opined that judges tend to see these actions as sour grapes. However, revelations turned up during discovery can lead antitrust regulators to jump into proceedings, giving a suit additional heft.
  • What Can America Learn from Europe About Regulating Big Tech?” By Nick Romeo – The New Yorker. Marietje Schaake, a former Member of the European Parliament from the Netherlands who is now at Stanford, is trying to offer a new path on regulating big tech that would rein in the excesses and externalities while allowing new technologies and competition to flourish. The question is whether there is a wide enough appetite for her vision in the European Union, let alone the United States.
  • Facebook employees internally question policy after India content controversy – sources, memos” By Aditya Kalra and Munsif Vengattil – Reuters. The tech giant is also facing an employee revolt in the world’s largest democracy. Much like in the United States and elsewhere, employees are pressing leadership to explain why they are seemingly not applying the platform’s rules on false and harmful material to hateful speech by leaders. In this case, it was posts by a member of the ruling Bharatiya Janata Party (BJP) calling Indian Muslims traitors. And, in much the same way accusations have been leveled at a top Facebook lobbyist in Washington who has allegedly interceded on behalf of Republicans and far-right interests on questionable material, a lobbyist in New Delhi has allegedly done the same for the BJP.
  • List of 2020 election meddlers includes Cuba, Saudi Arabia and North Korea, US intelligence official says” By Shannon Vavra – CyberScoop. At a virtual event this week, National Counterintelligence and Security Center (NCSC) Director William Evanina claimed that even more nations are trying to disrupt the United States election this fall, including Cuba, Saudi Arabia, and North Korea. Evanina cautioned, however, that the capabilities of these nations do not rise to the level of the Russian Federation, the People’s Republic of China (PRC), and Iran. Earlier this month, Evanina issued an update to his late July statement “100 Days Until Election 2020” through “sharing additional information with the public on the intentions and activities of our adversaries with respect to the 2020 election…[that] is being released for the purpose of better informing Americans so they can play a critical role in safeguarding our election.” Evanina offered more in the way of detail on the three nations identified as those being most active in and capable of interfering in the November election: the Russian Federation, the PRC, and Iran. This additional detail may well have been provided given the pressure from Democrats in Congress to do just this. Members like Speaker of the House Nancy Pelosi (D-CA) argued that Evanina was not giving an accurate picture of the actions by foreign nations to influence the outcome and perception of the 2020 election. Republicans in Congress pushed back, claiming Democrats were seeking to politicize the classified briefings given by the Intelligence Community (IC).

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Silentpilot from Pixabay

Further Reading and Other Developments (11 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The United States District Court for the District of Maine denied a motion by a number of telecommunications trade associations to enjoin enforcement of a new Maine law instituting privacy requirements for internet service providers (ISPs) in the state that limit information collection and processing. The plaintiffs claimed the 2017 repeal of the Federal Communications Commission’s (FCC) 2016 ISP Privacy Order preempted states from implementing their own privacy rules for ISPs. In its decision, the court denied the plaintiffs’ motion and will proceed to decide the merits of the case.
  • The European Data Protection Board (EDPB) has debuted a “One-Stop-Shop” register “containing decisions taken by national supervisory authorities following the One-Stop-Shop cooperation procedure (Art. 60 GDPR).” The EDPB explained “[u]nder the GDPR, Supervisory Authorities have a duty to cooperate on cases with a cross-border component to ensure a consistent application of the regulation – the so-called one-stop-shop (OSS) mechanism…[and] [u]nder the OSS, the Lead Supervisory Authority (LSA) is in charge of preparing the draft decisions and works together with the concerned SAs to reach consensus.” Hence this new repository will contain the decisions on which EU data protection authorities have cooperated in addressing alleged GDPR violations that reach across the borders of EU nations.
  • The chair of the House Energy and Commerce Committee and three subcommittee chairs wrote Facebook, Google, and Twitter asking the companies “provide the Committee with monthly reports similar in scope to what you are providing the European Commission regarding your COVID-19 disinformation efforts as they relate to United States users of your platform.” They are also asking that the companies brief them and staff on 22 July on these efforts. Given the Committee’s focus on disinformation, it is quite possible these monthly reports and the briefing could be the basis of more hearings and/or legislation. Chair Frank Pallone, Jr. (D-NJ), Oversight and Investigations Subcommittee Chair Diana DeGette (D-CO), Communications and Technology Subcommittee Chair Mike Doyle (D-PA) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) signed the letters.
  • Reports indicate the Federal Trade Commission (FTC) and Department of Justice (DOJ) are reviewing the February 2019 $5.7 million settlement between the FTC and TikTok for violating the Children’s Online Privacy Protection Act (COPPA). In May 2020, a number of public advocacy groups filed a complaint with the FTC, asking whether the agency has “complied with the consent decree.” If TikTok has violated the order, it could face huge fines as the FTC and DOJ could seek a range of financial penalties. This seems to be another front in the escalating conflict between the United States and the People’s Republic of China.
  • Tech Inquiry, an organization that “seek[s] to combat abuses in the tech industry through coupling concerned tech workers with relevant members of civil society” revealed “an in-depth analysis of all public US federal (sub)contracting data over the last four and a half years to estimate the rankings of tech companies, both in and out of Silicon Valley, as contractors with the military, law enforcement, and diplomatic arms of the United States.” Tech Inquiry claimed “[o]ur analysis shows a diversity of contracting postures (see Tables 2 and 3), not a systemic divide from Washington. Within a substantial list of namebrand tech companies, only Facebook, Apple, and Twitter look to be staying out of major military and law enforcement contracts.”
  • The United States Secret Service announced the formation of a new Cyber Fraud Task Force (CFTF) which merges “its Electronic Crimes Task Forces (ECTFs) and Financial Crimes Task Forces (FCTFs) into a single unified network.” The rationale given for the merger is “the line between cyber and financial crimes has steadily blurred, to the point today where the two – cyber and financial crimes – cannot be effectively disentangled.”
  • The United States Election Assistance Commission (EAC) held a virtual public hearing, “Lessons Learned from the 2020 Primary Elections” “to discuss the administration of primary elections during the coronavirus pandemic.”
  • The National Council of Statewide Interoperability Coordinators (NCSWIC), a Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) administered program, released its “NCSWIC Strategic Plan and Implementation Guide,” “a stakeholder-driven, multi-jurisdictional, and multi-disciplinary plan to enhance interoperable and emergency communications.” NCSWIC contended “[t]he plan is a critical mid-range (three-year) tool to help NCSWIC and its partners prioritize resources, strengthen governance, identify future investments, and address interoperability gaps.”
  • Access Now is pressing “video conferencing platforms” other than Zoom to issue “regular transparency reports… clarifying exactly how they protect personal user data and enforce policies related to freedom of expression.”

Further Reading

  • India bans 59 Chinese apps, including TikTok and WeChat, after deadly border clash” – South China Morning Post. As a seeming extension of the military skirmish between India and the People’s Republic of China (PRC), the Indian government has banned a number of PRC apps, raising the question of whether there will be further escalation between the world’s two most populous nations. India is TikTok’s biggest market, with more than 120 million users in the South Asian country, and a range of other apps and platforms also have millions of users there. Most of the smartphones used in India are made by PRC entities. Moreover, if New Delhi joins Washington’s war on Huawei, ZTE, and other PRC companies, the cumulative effect could significantly affect the PRC’s global technological ambitions.
  • Huawei data flows under fire in German court case” – POLITICO. A former Huawei employee in Germany has sued the company alleging violations of the General Data Protection Regulation (GDPR) through the company’s use of standard contractual clauses. This person requested the data the company had collected from him and the reasons for doing so. Huawei claimed it had deleted the data. A German court’s decision that Huawei had violated the GDPR is being appealed. However, some bigger issues are raised by the case, including growing unease within the European Union that PRC firms may be illegally transferring and processing EU citizens’ data, and a case before Europe’s highest court in which the legality of standard contractual clauses may be determined as early as this month.
  • Deutsche Telekom under pressure after reports on Huawei reliance” – Politico. A German newspaper reported on confidential documents showing that Deutsche Telekom deepened its relationship with Huawei as the United States’ government was pressuring its allies and other nations to stop using the equipment and services of the company. The German telecommunications company denied the claims, and a number of German officials expressed surprise and dismay, opining that the government of Chancellor Angela Merkel should act more swiftly to implement legislation to secure Germany’s networks.
  • Inside the Plot to Kill the Open Technology Fund” – Vice. According to critics, the Trump Administration’s remaking of the United States (US) Agency for Global Media (USAGM) is threatening the mission and effectiveness of the Open Technology Fund (OTF), a US government-funded non-profit designed to help dissidents and endangered populations throughout the world. The OTF has funded a number of open technology projects, including the Signal messaging app, but the new USAGM head, Michael Pack, is pressing for closed-source technology.
  • How Police Secretly Took Over a Global Phone Network for Organized Crime” – Vice. European law enforcement agencies penetrated and compromised an encrypted messaging service in Europe, leading to a number of arrests and seizures of drugs. Encrochat had billed itself as completely secure, but hackers with the French government broke into the system and laid bare the details of numerous crimes. And, this is only the latest encrypted app that is marketed to criminals, meaning others will soon step into the void created when Encrochat shut down.
  • Virus-Tracing Apps Are Rife With Problems. Governments Are Rushing to Fix Them.” – The New York Times. In numerous nations around the world, the rush to design and distribute contact tracing apps to fight COVID-19 has resulted in a host of problems predicted by information technology professionals and privacy, civil liberties and human rights advocates. Some apps collect too much information, many are not secure, and some do not seem to perform their intended tasks. Moreover, without mass adoption, the utility of an app is questionable at best. Some countries have sought to improve and perfect their apps in response to criticism, but others are continuing to use and even mandate their citizens and residents use them.
  • Hong Kong Security Law Sets Stage for Global Internet Fight” – The New York Times. After the People’s Republic of China (PRC) passed a new law that strips many of the protections Hong Kong enjoyed, technology companies are caught in a bind, for now Hong Kong may well start demanding they hand over data on people living in Hong Kong or employees could face jail time. Moreover, the data demands made of companies like Google or Facebook could pertain to people anywhere in the world. Companies that comply with Beijing’s wishes would likely face turbulence in Washington and vice versa. TikTok said it would withdraw from Hong Kong altogether.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gino Crescoli from Pixabay

Further Reading and Other Developments (13 June)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The University of Toronto’s Citizen Lab alleged that an Indian information technology (IT) firm has been running a hacking for hire operation possibly utilized by multinationals to target non-profits, journalists, and advocacy groups:
    • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.
    • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.
    • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.
  • The Massachusetts Institute of Technology (MIT) and the University of Michigan (UM) “released a report on the security of OmniBallot, an Internet voting and ballot delivery system produced by Democracy Live…[that] has been deployed in Delaware, West Virginia, and other jurisdictions.” MIT and UM advised: “The full technical report contains detailed recommendations for jurisdictions, but here’s what individual voters can do to help reduce risks to their security and privacy:
    • Your safest option is to avoid using OmniBallot. Either vote in person or request a mail-in absentee ballot, if you can. Mail-in ballots are a reasonably safe option, provided you check them for accuracy and adhere to all relevant deadlines.
    • If you can’t do that, your next-safest option is to use OmniBallot to download a blank ballot and print it, mark it by hand, and mail it back or drop it off. Always double-check that you’ve marked your ballot correctly, and confirm the mailing address with your local jurisdiction. 
    • If you are unable to mark your ballot by hand, OmniBallot can let you mark it on-screen. However, this option (as used in Delaware and West Virginia) will send your identity and secret ballot selections over the Internet to Democracy Live’s servers even if you return your ballot through the mail. This increases the risk that your choices may be exposed or manipulated, so we recommend that voters only use online marking as a last resort. If you do mark your ballot online, be sure to print it, carefully check that the printout is marked the way you intended, and physically return it.
    • If at all possible, do not return your ballot through OmniBallot’s website or by email or fax. These return modes cause your vote to be transmitted over the Internet, or via networks attached to the Internet, exposing the election to a critical risk that votes will be changed, at wide scale, without detection. Recent recommendations from DHS, the bipartisan findings of the Senate Intelligence Committee, and the consensus of the National Academies of Sciences, Engineering, and Medicine accord with our assessment that returning ballots online constitutes a severe security risk.
  • The “Justice in Policing Act of 2020” (H.R.7120/S.3912) was introduced this week, largely in response to the protests over disparate policing practices towards African Americans, and would bar the use of facial recognition technology in body cameras, patrol car cameras, or other cameras authorized and regulated under the bill. The House Oversight and Reform Committee has held a series of hearings this Congress on facial recognition technology, with Members on both sides of the aisle saying they want legislation regulating the government’s use of it. As of yet, no such legislation has been introduced. Facial recognition technology language was also a major factor in privacy legislation dying last year in Washington state and was outright removed this year to avoid the same fate.
  • The Government Accountability Office (GAO) released “ELECTRONIC HEALTH RECORDS: Ongoing Stakeholder Involvement Needed in the Department of Veterans Affairs’ Modernization Effort” a week after Secretary of Veterans Affairs Robert Wilkie informed the House Appropriations Committee that the electronic health record rollout has been paused due to COVID-19. Nevertheless, the GAO concluded:
    • VA met its schedule for making the needed system configuration decisions that would enable the department to implement its new EHR system at the first VA medical facility, which was planned for July 2020. In addition, VA has formulated a schedule for making the remaining EHR system configuration decisions before implementing the system at additional facilities planned for fall 2020. VA’s EHRM program was generally effective in establishing decisionmaking procedures that were consistent with applicable federal standards for internal control.
    • However, VA did not always ensure the involvement of relevant stakeholders, including medical facility clinicians and staff, in the system configuration decisions. Specifically, VA did not always clarify terminology and include adequate detail in descriptions of local workshop sessions to medical facility clinicians and staff to ensure relevant representation at local workshop meetings. Participation of such stakeholders is critical to ensuring that the EHR system is configured to meet the needs of clinicians and support the delivery of clinical care.
  • The GAO recommended
    • For implementation of the EHR system at future VA medical facilities, we recommend that the Secretary of VA direct the EHRM Executive Director to clarify terminology and include adequate detail in descriptions of local workshop sessions to facilitate the participation of all relevant stakeholders including medical facility clinicians and staff. (Recommendation 1)
  • Europol and the European Union Intellectual Property Office released a report, “in the shape of a case book,” to advise law enforcement agencies and policymakers; it “presents case examples showing how intellectual property (IP) crime is linked to other forms of criminality, including money laundering, document fraud, cybercrime, fraud, drug production and trafficking and terrorism.”
  • The New York University Stern Center for Business and Human Rights released its latest report on social media, titled “Who Moderates the Social Media Giants? A Call to End Outsourcing,” which calls for major reforms in how these companies moderate content so as to improve the online ecosystem and the conditions, pay, and efficacy of those actually doing the work. The report claimed “[d]espite the centrality of content moderation, however, major social media companies have marginalized the people who do this work, outsourcing the vast majority of it to third-party vendors…[and] [a] close look at this situation reveals three main problems:
    • In some parts of the world distant from Silicon Valley, the marginalization of content moderation has led to social media companies paying inadequate attention to how their platforms have been misused to stoke ethnic and religious violence. This has occurred in places ranging from Myanmar to Ethiopia. Facebook, for example, has expanded into far-flung markets, seeking to boost its user-growth numbers, without having sufficient moderators in place who understand local languages and cultures.
    • The peripheral status of moderators undercuts their receiving adequate counseling and medical care for the psychological side effects of repeated exposure to toxic online content. Watching the worst social media has to offer leaves many moderators emotionally debilitated. Too often, they don’t get the support or benefits they need and deserve.
    • The frequently chaotic outsourced environments in which moderators work impinge on their decisionmaking. Disputes with quality-control reviewers consume time and attention and contribute to a rancorous atmosphere.
  • The National Institute of Standards and Technology (NIST) “requests review and comments on the four-volume set of documents: Special Publication (SP) 800-63-3 Digital Identity Guidelines, SP 800-63A Enrollment and Identity Proofing, SP 800-63B Authentication and Lifecycle Management, and SP 800-63C Federation and Assertions…[that] presents the controls and technical requirements to meet the digital identity management assurance levels specified in each volume.” NIST “is requesting comments on the document in response to agency and industry implementations, industry and market innovation and the current threat environment.” Comments are due by 10 August.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) updated its Cyber Risks to Next Generation 911 White Paper and released Cyber Risks to 911: Telephony Denial of Service and PSAP Ransomware Poster. CISA explained:
    • Potential cyber risks to Next Generation 9-1-1 (NG9-1-1) systems do not undermine the benefits of NG9-1-1. Nevertheless, cyber risks present a new level of exposure that PSAP administrators must understand and actively manage as a part of a comprehensive risk management program. Systems are already under attack. As cyber threats grow in complexity and sophistication, attacks could be more severe against NG9-1-1 systems as attackers can launch multiple distributed attacks with greater automation from a broader geography and against more targets.  This document provides an overview of the cyber risk landscape, offers an approach for assessing and managing risk, and provides additional cybersecurity resources. 
  • The Government Accountability Office (GAO) released a number of technology reports:
    • The GAO recommended that the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA) “should incorporate additional management controls to better oversee and coordinate NNSA’s microelectronics activities. Such management controls could include investing the microelectronics coordinator with increased responsibility and authority, developing an overarching management plan, and developing a mission need statement and a microelectronics requirements document.”
  • The GAO found that
    • The Department of Homeland Security (DHS) has taken steps to implement selected leading practices in its transition from waterfall, an approach that historically delivered useable software years after program initiation, to Agile software development, which is focused on incremental and rapid delivery of working software in small segments. As shown below, this quick, iterative approach is to deliver results faster and collect user feedback continuously.
    • DHS has fully addressed one of three leading practice areas for organization change management and partially addressed the other two. Collectively, these practices advise an organization to plan for, implement, and measure the impact when undertaking a significant change. The department has fully defined plans for transitioning to Agile development. DHS has partially addressed implementation—the department completed 134 activities but deferred roughly 34 percent of planned activities to a later date. These deferred activities are in progress or have not been started. With respect to the third practice, DHS clarified expected outcomes for the transition, such as reduced risk of large, expensive IT failures. However, these outcomes are not tied to target measures. Without these, DHS will not know if the transition is achieving its desired results.
    • DHS has also addressed four of the nine leading practices for adopting Agile software development. For example, the department has modified its acquisition policies to support Agile development methods. However, it needs to take additional steps to, among other things, ensure all staff are appropriately trained and establish expectations for tracking software code quality. By fully addressing leading practices, DHS can reduce the risk of continued problems in developing and acquiring current, as well as, future IT systems.
  • The GAO reported that “[t]he Department of Defense’s (DOD) current initiative to transition to Internet Protocol version 6 (IPv6), which began in April 2017, follows at least two prior attempts to implement IPv6 that were halted by DOD.”
    • In February 2019, DOD released its own IPv6 planning and implementation guidance that listed 35 required transition activities, 18 of which were due to be completed before March 2020. DOD completed six of the 18 activities as of March 2020. DOD officials acknowledged that the department’s transition time frames were optimistic; they added that they had thought that the activities’ deadlines were reasonable until they started performing the work. Without an inventory, a cost estimate, or a risk analysis, DOD significantly reduced the probability that it could have developed a realistic transition schedule. Addressing these basic planning requirements would supply DOD with needed information that would enable the department to develop realistic, detailed, and informed transition plans and time frames.

Further Reading

  • Amid Pandemic and Upheaval, New Cyberthreats to the Presidential Election” – The New York Times. Beyond disinformation and misinformation campaigns, United States’ federal and state officials are grappling with a range of cyber-related threats including some states’ insistence on using online voting, which the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) deemed “high risk” in an unreleased assessment the agency softened before distribution to state election officials. There are also worries that Russian or other nation-state hackers could access voting databases in ways that would call election day results into question, or that other hackers could break in, lock, and then ransom such databases. CISA and other stakeholders have articulated concerns about the security of voting machines, apps, and systems currently used by states.
  • Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM” – The Washington Post. The three tech giants responded to pressure from protestors to stop selling facial recognition technology to police departments, with Microsoft being the latest to make this pledge. The companies have said they will not sell this technology until there is a federal law regulating it. The American Civil Liberties Union said in its press release “Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community — not against it — to make that happen…[and] [t]his includes Microsoft halting its current efforts to advance legislation that would legitimize and expand the police use of facial recognition in multiple states nationwide.” The above-mentioned “Justice in Policing Act of 2020” (H.R.7120/S.3912) would not regulate the technology per se but would ban its use in body and car cameras. However, the companies said nothing about selling this technology to federal agencies such as US Immigration and Customs Enforcement. IBM, unlike Amazon and Microsoft, announced it was leaving the facial recognition field altogether, while Clearview AI, the controversial facial recognition firm, has not joined this pledge.
  • ICE Outlines How Investigators Rely on Third-Party Facial Recognition Services” – Nextgov. In a recently released privacy impact assessment, US Immigration and Customs Enforcement’s Homeland Security Investigations (HSI) explained its use of US and state government and commercial facial recognition databases and technologies. The agency claimed this tool is to be used only after agents have exhausted more traditional means of identifying suspects and others and only if relevant to the investigation. The agency claimed “ICE HSI primarily uses this law enforcement tool to identify victims of child exploitation and human trafficking, subjects engaged in the online and sexual exploitation of children, subjects engaged in financial fraud schemes, identity and benefit fraud, and those identified as members of transnational criminal organizations.” Given what some call abuses and others call mistakes in US surveillance programs, it is probable ICE will exceed the limits it is setting on the use of this technology absent meaningful, independent oversight.
  • Zoom confirms Beijing asked it to suspend activists over Tiananmen Square meetings” – Axios. In a statement, Zoom admitted it responded to pressure from the People’s Republic of China (PRC) to shut down 4 June meetings commemorating the Tiananmen Square massacre inside and outside the PRC, including in the United States if enough PRC nationals were participating. It is not hard to imagine the company being called to task in Washington and in western Europe for conforming to Beijing’s wishes. The company seems to be vowing to develop technology to block participants by country, rather than shutting down entire meetings, and a process for considering nations’ requests to block content that is illegal within their borders.
  • Coronavirus conspiracy theorists threaten 5G cell towers, DHS memo warns” – CyberScoop. The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has warned telecommunications companies they should establish, or better still already have in place, security protocols to protect equipment, especially 5G gear, from sabotage arising from the conspiracy theory that 5G transmission either compromises immune systems, making one more susceptible to COVID-19, or actually spreads the virus. There has been a spate of attacks in the United Kingdom, and a number of Americans are promoting this theory, including actor Woody Harrelson.
  • Police Officers’ Personal Info Leaked Online” – Associated Press. At the same time police are facing protestors in the streets of many American cities and towns, the sensitive personal information of some officers has been posted online, possibly putting them and their families at risk.
  • Facebook Helped the FBI Hack a Child Predator” – Vice’s Motherboard. In a story apparently leaked by Facebook, it is revealed that the company hired a third-party hacker to develop a zero-day exploit that helped unmask a nefarious, technologically adept person who was terrorizing and extorting female minors. This is supposedly the first time Facebook has engaged in such conduct to help law enforcement authorities. The company revealed it routinely tracks problematic users, including those exploiting children. This article would seem tailor-made to push back on the narrative being propagated by the Department of Justice and other nations’ law enforcement agencies that tech companies’ opposition to backdoors in encrypted systems helps sexual predators. There are also the usual concerns that any exploit of a platform or technology people use to remain private will ultimately be used broadly by law enforcement agencies, often to the detriment of human rights activists, dissidents, and journalists.
  • Amazon, Facebook and Google turn to deep network of political allies to battle back antitrust probes” – The Washington Post. These tech companies are utilizing means beyond traditional lobbying and public relations to wage the battle against US and state governments investigating them for possible antitrust and anticompetitive practices.
  • One America News, the Network That Spreads Conspiracies to the West Wing” – The New York Times. The upstart media outlet has received a boost in recent days by being promoted by President Donald Trump, who quoted its as-yet-unproven allegation that a Buffalo man knocked down by police was an antifa agitator. The outlet has received preferential treatment from the White House and is likely another means by which the administration will seek to get its message out.
  • EU says China behind ‘huge wave’ of Covid-19 disinformation” – The Guardian. European Commission Vice President Vĕra Jourová called out the People’s Republic of China (PRC) along with the Russian Federation for spreading prodigious amounts of disinformation, in what is likely a shift for Brussels towards a more adversarial stance toward the PRC. As recently as March, a European Union body toned down a report on PRC activities, but this development seems to be a change of course.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading and Other Developments (6 June)

Other Developments

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

  • A number of tech trade groups are asking the House Appropriations Committee’s Commerce-Justice-Science Subcommittee “to direct the National Institute of Standards and Technology (NIST) to create guidelines that help companies navigate the technical and ethical hurdles of developing artificial intelligence.” They argued:
    • A NIST voluntary framework-based consensus set of best practices would be pro-innovation, support U.S. leadership, be consistent with NIST’s ongoing engagement on AI industry consensus standards development, and align with U.S. support for the OECD AI principles as well as the draft Memorandum to Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications.”
  • The Department of Defense (DOD) “named seven U.S. military installations as the latest sites where it will conduct fifth-generation (5G) communications technology experimentation and testing. They are Naval Base Norfolk, Virginia; Joint Base Pearl Harbor-Hickam, Hawaii; Joint Base San Antonio, Texas; the National Training Center (NTC) at Fort Irwin, California; Fort Hood, Texas; Camp Pendleton, California; and Tinker Air Force Base, Oklahoma.”  The DOD explained “[t]his second round, referred to as Tranche 2, brings the total number of installations selected to host 5G testing to 12…[and] builds on DOD’s previously-announced 5G communications technology prototyping and experimentation and is part of a 5G development roadmap guided by the Department of Defense 5G Strategy.”
  • The Federal Trade Commission announced a $150,000 settlement with “HyperBeard, Inc. [which] violated the Children’s Online Privacy Protection Act Rule (COPPA Rule) by allowing third-party ad networks to collect personal information in the form of persistent identifiers to track users of the company’s child-directed apps, without notifying parents or obtaining verifiable parental consent.”
  • The National Institute of Standards and Technology (NIST) released Special Publication 800-133 Rev. 2, Recommendation for Cryptographic Key Generation, which “discusses the generation of the keys to be used with the approved cryptographic algorithms…[which] are either 1) generated using mathematical processing on the output of approved Random Bit Generators (RBGs) and possibly other parameters or 2) generated based on keys that are generated in this fashion.” A brief illustrative sketch of the first approach appears after this list.
  • United States Trade Representative (USTR) announced “investigations into digital services taxes that have been adopted or are being considered by a number of our trading partners.” These investigations are “with respect to Digital Services Taxes (DSTs) adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, India, Indonesia, Italy, Spain, Turkey, and the United Kingdom.” The USTR is accepting comments until 15 July.
  • NATO’s North Atlantic Council released a statement “concerning malicious cyber activities” that have targeted medical facilities stating “Allies are committed to protecting their critical infrastructure, building resilience and bolstering cyber defences, including through full implementation of NATO’s Cyber Defence Pledge.” NATO further pledged “to employ the full range of capabilities, including cyber, to deter, defend against and counter the full spectrum of cyber threats.”
  • The Public Interest Declassification Board (PIDB) released “A Vision for the Digital Age: Modernization of the U.S. National Security Classification and Declassification System” that “provides recommendations that can serve as a blueprint for modernizing the classification and declassification system…[for] there is a critical need to modernize this system to move from the analog to the digital age by deploying advanced technology and by upgrading outdated paper-based policies and practices.”
  • In a Department of State press release, a Declaration on COVID-19, the G7 Science and Technology Ministers stated their intentions “to work collaboratively, with other relevant Ministers to:
    • Enhance cooperation on shared COVID-19 research priority areas, such as basic and applied research, public health, and clinical studies. Build on existing mechanisms to further priorities, including identifying COVID-19 cases and understanding virus spread while protecting privacy and personal data; developing rapid and accurate diagnostics to speed new testing technologies; discovering, manufacturing, and deploying safe and effective therapies and vaccines; and implementing innovative modeling, adequate and inclusive health system management, and predictive analytics to assist with preventing future pandemics.
    • Make government-sponsored COVID-19 epidemiological and related research results, data, and information accessible to the public in machine-readable formats, to the greatest extent possible, in accordance with relevant laws and regulations, including privacy and intellectual property laws.
    • Strengthen the use of high-performance computing for COVID-19 response. Make national high-performance computing resources available, as appropriate, to domestic research communities for COVID-19 and pandemic research, while safeguarding intellectual property.
    • Launch the Global Partnership on AI, envisioned under the 2018 and 2019 G7 Presidencies of Canada and France, to enhance multi-stakeholder cooperation in the advancement of AI that reflects our shared democratic values and addresses shared global challenges, with an initial focus that includes responding to and recovering from COVID-19. Commit to the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.
    • Exchange best practices to advance broadband connectivity; minimize workforce disruptions, support distance learning and working; enable access to smart health systems, virtual care, and telehealth services; promote job upskilling and reskilling programs to prepare the workforce of the future; and support global social and economic recovery, in an inclusive manner while promoting data protection, privacy, and security.
  • The Digital, Culture, Media and Sport Committee’s Online Harms and Disinformation Subcommittee held a virtual meeting, which “is the second time that representatives of the social media companies have been called in by the DCMS Sub-committee in its ongoing inquiry into online harms and disinformation following criticism by Chair Julian Knight about a lack of clarity of evidence and further failures to provide adequate answers to follow-up correspondence.” Before the meeting, the Subcommittee sent a letter to Twitter, Facebook, and Google and received responses. The Subcommittee heard testimony from:
    • Facebook Head of Product Policy and Counterterrorism Monika Bickert
    • YouTube Vice-President of Government Affairs and Public Policy Leslie Miller
    • Google Global Director of Information Policy Derek Slater
    • Twitter Director of Public Policy Strategy Nick Pickles
  • Senators Ed Markey (D-MA), Ron Wyden (D-OR) and Richard Blumenthal (D-CT) sent a letter to AT&T CEO Randall Stephenson “regarding your company’s policy of not counting use of HBO Max, a streaming service that you own, against your customers’ data caps.” They noted “[a]lthough your company has repeatedly stated publicly that it supports legally binding net neutrality rules, this policy appears to run contrary to the essential principle that in a free and open internet, service providers may not favor content in which they have a financial interest over competitors’ content.”
  • The Brookings Institution released what it considers a path forward on privacy legislation and held a webinar on the report with Federal Trade Commissioner (FTC) Christine Wilson and former FTC Commissioner and now Microsoft Vice President and Deputy General Counsel Julie Brill.
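
The NIST recommendation noted above describes two ways keys come into being, the first being direct generation from the output of an approved Random Bit Generator (RBG). The minimal Python sketch below is illustrative only and is not drawn from SP 800-133 itself: Python’s secrets module (backed by the operating system’s CSPRNG) stands in for an approved RBG, and the 256-bit key length is an assumption chosen for the example.

```python
# Illustrative sketch only, not an excerpt from NIST SP 800-133 Rev. 2.
# It demonstrates the first approach the publication describes: producing a
# symmetric key directly from the output of a cryptographically secure
# random bit generator. The secrets module stands in here for an approved RBG,
# and the 256-bit length is an assumption chosen for illustration.
import secrets

KEY_LENGTH_BYTES = 32  # 256 bits, e.g., suitable for AES-256


def generate_symmetric_key(length: int = KEY_LENGTH_BYTES) -> bytes:
    """Return `length` random bytes drawn from the OS CSPRNG."""
    return secrets.token_bytes(length)


if __name__ == "__main__":
    key = generate_symmetric_key()
    print(f"{len(key) * 8}-bit key: {key.hex()}")
```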

Further Reading

  • Google: Overseas hackers targeting Trump, Biden campaigns” – Politico. In what is the latest in a series of attempted attacks, Google’s Threat Analysis Group announced this week that People’s Republic of China affiliated hackers tried to gain access to the campaign of former Vice President Joe Biden and Iranian hackers tried the same with President Donald Trump’s reelection campaign. The group referred the matter to the federal government but said the attacks were not successful. An official from the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) remarked “[i]t’s not surprising that a number of state actors are targeting our elections…[and] [w]e’ve been warning about this for years.” It is likely the usual suspects will continue to try to hack into both presidential campaigns.
  • Huawei builds up 2-year reserve of ‘most important’ US chips” – Nikkei Asian Review. The Chinese tech giant has been spending billions of dollars stockpiling United States’ (U.S.) chips, particularly server chips from Intel and programmable chips from Xilinx, the type that is hard to find elsewhere. The latter chip maker is seen as particularly crucial to both the U.S. and the People’s Republic of China (PRC) because it partners with the Taiwan Semiconductor Manufacturing Company, the entity persuaded by the Trump Administration to announce plans for a plant in Arizona. Shortly after the arrest of Huawei CFO Meng Wanzhou in 2018, the company began these efforts and spent almost $24 billion USD last year stockpiling crucial U.S. chips and other components.
  • GBI investigation shows Kemp misrepresented election security” – Atlanta-Journal Constitution. Through freedom of information requests, the newspaper obtained records from the Georgia Bureau of Investigation (GBI) on its investigation at the behest of then Secretary of State Brian Kemp, requested days before the gubernatorial election he narrowly won. At the time, Kemp claimed hackers connected to the Democratic Party were trying to get into the state’s voter database, when it was Department of Homeland Security personnel running a routine scan for vulnerabilities Kemp’s office had agreed to months earlier. The GBI ultimately determined Kemp’s claims did not merit a prosecution. Moreover, even though Kemp’s staff at the time continues to deny these findings, the site did have vulnerabilities, including one turned up by a software company employee.
  • Trump, Biden both want to repeal tech legal protections — for opposite reasons” – Politico. Former Vice President Joe Biden (D) wants to revisit Section 230 because, in his view, online platforms are not doing enough to combat misinformation. Biden laid out his views on this and other technology matters for the editorial board of The New York Times in January, at which point he said Facebook should have to face civil liability for publishing misinformation. Given Republican and Democratic discontent with Section 230 and the social media platforms, it is possible legislation will be enacted to limit this liability shield.
  • Wearables like Fitbit and Oura can detect coronavirus symptoms, new research shows” – The Washington Post. Perhaps wearable health technology is a better approach to determining when a person has contracted COVID-19 than contact tracing apps. A handful of studies are producing positive results, but these studies have not yet undergone the peer review process. Still, these devices may be able to detect disequilibrium in one’s system as compared to a baseline, suggesting an infection and a need for a test. This article, however, did not explore possible privacy implications of sharing one’s personal health data with private companies.
  • Singapore plans wearable virus-tracing device for all” – Reuters. For less than an estimated $10 USD per unit, Singapore will soon introduce wearable devices to better track contacts to fight COVID-19, in what may be a sign that the city-state has given up on its contact tracing app, TraceTogether. It is not clear whether everyone will be required to wear one or what privacy and data protections will be in place.
  • Exclusive: Zoom plans to roll out strong encryption for paying customers” – Reuters. In the same vein as Zoom allowing paying customers to choose the region their calls are routed through (e.g., paying customers in the United States could choose a region with lesser surveillance capabilities), Zoom will soon offer stronger security for paying customers. Of course, should Zoom’s popularity during the pandemic solidify into a dominant competitive position, this new policy of offering end-to-end encryption that the company cannot crack would likely rouse the ire of the governments of the Five Eyes nations. These plans breathe further life into the views of those who see a future in which privacy and security are commodities to be bought, and those unable or unwilling to pay for them will enjoy neither. Nonetheless, the company may still face a Federal Trade Commission (FTC) investigation into its apparently inaccurate claims that calls were encrypted, which may have violated Section 5 of the FTC Act, along with similar investigations by other nations.
  • Russia and China target U.S. protests on social media” – Politico. Largely eschewing doctored material, the Russian Federation and the People’s Republic of China (PRC) are using social media platforms to further drive dissension and division in the United States (U.S.) during the protests by amplifying the messages and points of view of Americans, according to an analysis by one think tank. For example, some PRC officials have been tweeting out “Black Lives Matter” and claims that videos purporting to show police violence are, in fact, police violence. The goal is to fan the flames and further weaken Washington. Thus far, the American government and the platforms themselves have not had much of a public response. Additionally, this represents a continued trend of the PRC seeking to sow discord in the U.S., whereas before this year its use of social media and disinformation tended to be confined to issues of immediate concern to Beijing.
  • The DEA Has Been Given Permission To Investigate People Protesting George Floyd’s Death” – BuzzFeed News. The Department of Justice (DOJ) used a little-known section of the powers delegated to the agency to task the Drug Enforcement Administration (DEA) with conducting “covert surveillance” to help police maintain order during the protests following the killing of George Floyd, among other duties. BuzzFeed News was given the two-page memorandum effectuating this expansion of the DEA’s responsibilities beyond drug crimes, most likely by agency insiders who oppose the memorandum. These efforts could include use of authority granted to the agency to engage in “bulk collection” of some information, a practice with which the DOJ Office of the Inspector General (OIG) found significant issues, including the lack of legal analysis on the scope of the sprawling collection practices.
  • Cops Don’t Need GPS Data to Track Your Phone at Protests” – Gizmodo. Underlying this extensive rundown of the types of data one’s phone leaks, which are vacuumed up by a constellation of entities, is the fact that more law enforcement agencies are buying or accessing these data because the Fourth Amendment’s protections do not apply to private parties giving the government information.
  • Zuckerberg Defends Approach to Trump’s Facebook Posts” – The New York Times. Unlike Twitter, Facebook opted not to flag President Donald Trump’s posts about the protests arising from George Floyd’s killing last week, the same messages Twitter found to be glorifying violence. CEO Mark Zuckerberg reportedly deliberated at length with senior leadership before deciding the posts did not violate the platform’s terms of service, a decision roundly criticized by Facebook employees, some of whom staged a virtual walkout on 1 June. In a conference call, Zuckerberg faced numerous questions about why the company does not respond more forcefully to posts that are inflammatory or untrue. His answer that Facebook does not act as an arbiter of truth was not well received among many employees.
  • Google’s European Search Menu Draws Interest of U.S. Antitrust Investigators” – The New York Times. Department of Justice (DOJ) antitrust investigators are reportedly keenly interested in the system Google operates under in the European Union (EU), where Android users are now prompted to select a default search engine rather than having Google set as the default. This system was put in place in response to the EU’s €4.34 billion fine in 2018 for imposing “illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search.” It may be seen as a way to address competition issues without breaking up Google, as some have called for. However, Google conducts monthly auctions among other search engines to be one of the three choices given to EU consumers, which allows Google to reap additional revenue.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.