Tech Election Results

A number of tech ballot initiatives were considered.

Several significant technology measures were put before voters in yesterday’s election. The most significant were in California, where voters agreed to replace the “California Consumer Privacy Act” (CCPA) (AB 375) with a new privacy law, approved a second technology-related ballot initiative, and rejected a third. In approving Proposition 24, California voters chose to replace the recently effective CCPA with the “California Privacy Rights Act” (CPRA) (see here for my analysis), which will largely become operative on 1 January 2023, meaning the CCPA will remain the law of California until then unless a federal privacy law is enacted that preempts all state laws.

California voters also approved Proposition 22, which would allow Uber, Lyft and other companies to “Classif[y] app-based drivers as “independent contractors,” instead of “employees,” and provide[] independent-contractor drivers other compensation, unless certain criteria are met.” This ballot initiative would essentially negate AB 5, legislation that codified a court ruling creating the presumption that a person hired by an employer is an employee and not a contractor. Uber and Lyft have been fighting enforcement of AB 5 in court.

Voters also rejected Proposition 25, which would have permitted a 2018 statute to take effect that would have replaced cash bail in California with a system that uses algorithms to determine who is released before trial. Elsewhere, Michigan voters overwhelmingly supported Proposal 20-2, “Require Warrant for Electronic Data,” which would change state law so that electronic communications data is protected to the extent that police would need to obtain a search warrant before accessing it. In Massachusetts, voters supported expanding the state’s automotive right-to-repair law to require auto manufacturers to make telematics data available to third-party repair shops. This measure is seen as a precursor of similar right-to-repair initiatives for other hardware that could soon be placed on ballots throughout the United States.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (10 September)

Coming Events

  • The Federal Communications Commission (FCC) will hold a forum on 5G Open Radio Access Networks on 14 September. The FCC asserted
    • Chairman [Ajit] Pai will host experts at the forefront of the development and deployment of open, interoperable, standards-based, virtualized radio access networks to discuss this innovative new approach to 5G network architecture. Open Radio Access Networks offer an alternative to traditional cellular network architecture and could enable a diversity in suppliers, better network security, and lower costs.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.”
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • Top Senate Democrats asked the Secretary of the Treasury to impose sanctions on officials and others in the Russian Federation for interfering in the 2020 United States election. In their letter, they urged Secretary Steven Mnuchin “to draw upon the conclusions of the Intelligence Community to identify and target for sanctions all those determined to be responsible for ongoing election interference, including any actors within the government of the Russian Federation, any Russian actors determined to be directly responsible, and those acting on their behalf or providing material or financial support for their efforts.” Given that Mnuchin is unlikely to displease President Donald Trump by agreeing that Russians are again interfering in a presidential election, it is probable that Senate Democrats are seeking to further their line of attack that Republicans are unwilling to defend the U.S. and its elections from Russia. They called on Mnuchin to use the authorities granted by Congress in the “Countering America’s Adversaries Through Sanctions Act” (P.L. 115-44) and Executive Order 13848, “Imposing Certain Sanctions in the Event of Foreign Interference in a United States Election.”
  • Epic Games has returned to court in an attempt to force Apple to put its popular multiplayer game, Fortnite, back into the App Store. At present, those on iOS devices cannot download and play the newest version of the game, released a few weeks ago. Even though Epic Games lost its request for a temporary restraining order that would have ordered Apple to restore the game, it has filed for a preliminary injunction:
    • (1) restraining Defendant Apple Inc. (“Apple”) from removing, de-listing, refusing to list or otherwise making unavailable the app Fortnite or any other app on Epic’s Team ID ’84 account in Apple’s Developer Program, including any update of such an app, from the App Store on the basis that Fortnite offers in-app payment processing through means other than Apple’s In-App Purchase (“IAP”) or on any pretextual basis;
    • (2) restraining Apple from taking any adverse action against Epic, including but not limited to restricting, suspending, or terminating any other Apple Developer Program account of Epic or its affiliates, on the basis that Epic enabled in-app payment processing in Fortnite through means other than IAP or on the basis of the steps Epic took to do so;
    • (3) restraining Apple from removing, disabling, or modifying Fortnite or any code, script, feature, setting, certification, version or update thereof on any iOS user’s device; and
    • (4) requiring Apple to restore Epic’s Team ID ’84 account in Apple’s Developer Program.
    •  Epic Games asserts:
      • This motion is made on the grounds that: (1) Epic is likely to succeed on the merits of its claims that Apple’s conduct violates the Sherman Act; (2) absent a preliminary injunction, Epic is likely to suffer irreparable harm; (3) the balance of harms tips sharply in Epic’s favor; and (4) the public interest supports an injunction.
    • Considering that the judge denied Epic Games’ motion for a temporary restraining order on the grounds that its claimed irreparable harm was self-inflicted (i.e., Epic Games escalated by adding its own payment option to Fortnite to foil Apple’s 30% take on in-game sales) and that no public interest was present, one wonders whether the company will prevail on this motion.
  • Apple filed a countersuit against Epic Games, arguing the latter breached its contract with the former and now must pay damages. In contrast, Epic Games is not suing for any monetary damages, surely a tactical decision to help its case in court and among interested observers.
    • Apple sought to portray Epic Games’ lawsuit this way:
      • Epic’s lawsuit is nothing more than a basic disagreement over money. Although Epic portrays itself as a modern corporate Robin Hood, in reality it is a multi-billion dollar enterprise that simply wants to pay nothing for the tremendous value it derives from the App Store. Epic’s demands for special treatment and cries of “retaliation” cannot be reconciled with its flagrant breach of contract and its own business practices, as it rakes in billions by taking commissions on game developers’ sales and charging consumers up to $99.99 for bundles of “V-Bucks.”
      • Epic decided that it would like to reap the benefits of the App Store without paying anything for them. Armed with the apparent view that Epic is too successful to play by the same rules as everyone else—and notwithstanding a public proclamation that Epic “w[ould] not accept special revenue sharing or payment terms just for ourselves”1—Epic CEO Tim Sweeney emailed Apple executives on June 30, 2020, requesting a “side letter” that would exempt Epic from its existing contractual obligations, including the App Store Review Guidelines (the “Guidelines”) that apply equally to all Apple developers. Among other things, Mr. Sweeney demanded a complete end-run around “Apple’s fees”—specifically, Epic wished to continue taking full advantage of the App Store while allowing consumers to pay Epic instead, leaving Apple to receive no payment whatsoever for the many services it provides developers and consumers.
    • Apple contended “[t]his Court should hold Epic to its contractual promises, award Apple compensatory and punitive damages, and enjoin Epic from engaging in further unfair business practices.”
  • The General Services Administration (GSA) released a draft Data Ethics Framework as part of implementing the Trump Administration’s Federal Data Strategy.
    • GSA noted
      • The Federal Data Strategy, delivered in December 2019, recognized the importance of ethics in its founding Principles. When the Federal Data Strategy team created the 2020 Action Plan, they specifically tasked the General Services Administration (GSA) with developing a Data Ethics Framework (Framework) in Action 14 to help agency employees, managers, and leaders make ethical decisions as they acquire, manage, and use data.
      • The resulting Framework is intended to be a “living” resource and to be regularly updated by the CDO Council and ICSP. The Framework incorporates the input and terminology from stakeholders representing many domains, and who use different types of data in different ways. The developers of the Framework recognize that some terms may be used differently, depending on the context, type of data being used, and stage in the data lifecycle.
      • The Framework applies to all data types and data uses. The Framework consists of four parts:
        • About the Data Ethics Framework outlines the intended purpose and audience of this document
        • Data Ethics Defined explores the meaning of the term “data ethics,” as background to the Tenets provided in the following section
        • Data Ethics Tenets provides seven Tenets, or high-level principles, for using data ethically within the Federal Government
        • Data Ethics Tenets in Action describes the benefits of data ethics and contains use cases demonstrating how the Tenets can guide data activities within federal agencies and federally sponsored programs
      • The Administration claimed the 2020 Action Plan “establishes a solid foundation that will support implementation of the strategy over the next decade…[and] identifies initial actions for agencies that are essential for establishing processes, building capacity, and aligning existing efforts to better leverage data as a strategic asset.” The use of federal data holds a key place in the President’s Management Agenda (PMA) and, according to the Administration, will be a key driver in transforming how the federal government operates, particularly in relation to technology. The 2020 Action Plan lays out the steps agencies will be expected to take to realize the Administration’s 10-year Federal Data Strategy. As always, results will be informed by follow through and prioritization by the Office of Management and Budget (OMB) and buy-in from agency leadership.
      • Notably, the Administration tied the 2020 Action Plan to a number of other ongoing initiatives that rely heavily on data. The Administration said the plan “incorporates requirements of the Foundations for Evidence-Based Policymaking Act of 2018, the Geospatial Data Act of 2018, and Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence.”
  • The Office of the Australian Information Commissioner (OAIC) published “its Corporate Plan for 2020-21, which sets out its strategic priorities and key activities for the next four years” according to its press release. The OAIC stated “[t]he plan identifies four strategic priorities that will help the OAIC achieve its vision to increase public trust and confidence in the protection of personal information and access to government-held information:
    • Advance online privacy protections for Australians
    • Influence and uphold privacy and information access rights frameworks
    • Encourage and support proactive release of government-held information, and
    • Contemporary approach to regulation.
    • The agency stated:
      • Over the coming year, the OAIC will continue to promote strong privacy protections for the use of personal information to prevent and manage the spread of COVID-19, including oversight of data handling within the COVIDSafe app system. 
      • Strengthening privacy protections in the online environment remains a key focus for the organisation, while privacy law reform will be a priority in 2020-21, with the Australian Government’s review of the Privacy Act an opportunity to ensure the regulatory framework can respond to new challenges in the digital environment.
      • Commissioner [Angelene] Falk said the OAIC will also enforce privacy safeguards under the Consumer Data Right and will continue its work to improve transparency and prevent harm to consumers through its oversight of the Notifiable Data Breaches scheme.
  • Ontario’s Ministry of Government and Consumer Services “launched consultations to improve the province’s privacy protection laws” and stakeholders “will have the opportunity to contribute to strengthening transparency and accountability concerning the collection, use and safeguarding of personal information online.” Ontario “is seeking advice on ways to:
    • Increase transparency for individuals, providing Ontarians with more detail about how their information is being used by businesses and organizations.
    • Enhance consent provisions allowing individuals to revoke consent at any time, and adopting an “opt-in” model for secondary uses of their information.
    • Introduce a right for individuals to request information related to them be deleted, subject to limitations (this is otherwise known as “right to erasure” or “the right to be forgotten”).
    • Introduce a right for individuals to obtain their data in a standard and portable digital format, giving them greater freedom to change service providers without losing their data (this is known as “data portability”).
    • Increase enforcement powers for the Information and Privacy Commissioner to ensure businesses comply with the law, including giving the commissioner the ability to impose penalties.
    • Introduce requirements for data that has been de-identified and derived from personal information to provide clarity of applicability of privacy protections.
    • Expand the scope and application of the law to include non-commercial organizations, including not-for-profits, charities, trade unions and political parties.
    • Create a legislative framework to enable the establishment of data trusts for privacy protective data sharing.
  • The United States (U.S.) Department of Homeland Security (DHS) Office of the Inspector General (OIG) issued “Progress and Challenges in Modernizing DHS’ Information Technology (IT) Systems and Infrastructure” and found fault with these three systems:
    • DHS-wide Human Resources IT (HRIT)
    • DHS Legacy Major IT Financial System that “[s]erves as Coast Guard and Transportation Security Agency’s (TSA) financial system of record.”
    • Federal Emergency Management Agency (FEMA) Grants Management Mission Domain and Operational Environment
    • The OIG stated
      • The DHS 2019–2023 IT strategic plan included two distinct department-wide IT modernization initiatives: to adopt cloud-based computing and to consolidate data centers. However, not all components have complied with or fully embraced these efforts due to a lack of standard guidance and funding. Without consistent implementation of these efforts, DHS components remain hindered in their ability to provide personnel with more enhanced, up-to-date technology.
      • In the meantime, DHS continues to rely on deficient and outdated IT systems to perform mission-critical operations. We identified three legacy IT systems with significant operational challenges that negatively affected critical DHS functions, such as human resources and financial management, as well as disaster recovery mission operations. DHS has not made sufficient progress in replacing or augmenting these IT systems due to ineffective planning and inexperience in executing complex IT modernization efforts. Additionally, the DHS CIO has not performed mandated oversight of legacy IT to mitigate and reduce risks associated with outdated systems. Until DHS addresses these issues, it will continue to face significant challenges to accomplish mission operations efficiently and effectively
    • The OIG recommended:
      • We recommend the DHS OCIO develop department-wide guidance for implementing cloud technology and migrating legacy IT systems to the cloud.
      • We recommend the DHS OCIO coordinate with components to develop and finalize a data center migration approach to accomplish strategic goals for reducing the footprint of DHS IT infrastructure.
      • We recommend the DHS OCIO establish a process to assign risk ratings for major legacy IT investments, as required by the Federal Information Technology Acquisition Reform Act.
  • The University of Toronto’s Citizen Lab and the International Human Rights Program at the University of Toronto’s Faculty of Law published a report “To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada” that “focuses on the human rights and constitutional law implications of the use of algorithmic policing technologies by law enforcement authorities.” The authors found:
    • The research conducted for this report found that multiple law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods. These programs include both developing and using predictive policing technologies and using algorithmic surveillance tools. Additionally, some law enforcement agencies have acquired tools with the capability of algorithmic policing technology, but they are not currently using that capability because, to date, they have not decided to do so.
    • The authors “analyze the potential impacts of algorithmic policing technologies on the following rights: the right to privacy; the right to freedoms of expression, peaceful assembly, and association; the right to equality and freedom from discrimination; the right to liberty and to be free from arbitrary detention; the right to due process; and the right to a remedy.”
  • The United States (U.S.) Department of Homeland Security (DHS) issued “the Electromagnetic Pulse (EMP) Program Status Report as part of an update on efforts underway in support of Executive Order (E.O.) 13865 on Coordinating National Resilience to Electromagnetic Pulses…[that] establishes resilience and security standards for U.S. critical infrastructure as a national priority.”
    • DHS stated
      • E.O.13865 states, “An electromagnetic pulse (EMP) has the potential to disrupt, degrade, and damage technology and critical infrastructure systems. Human-made or naturally occurring EMPs can affect large geographic areas, disrupting elements critical to the Nation’s security and economic prosperity, and could adversely affect global commerce and stability. The federal government must foster sustainable, efficient, and cost-effective approaches to improving the Nation’s resilience to the effects of EMPs.”
      • In accordance with E.O.13865, the Department has identified initial critical infrastructure and associated functions that are at greatest risk from an EMP and is focusing efforts on the development and implementation of evidence-based and independently-tested EMP protection and mitigation technologies and resilience best practices. Initial efforts within the Department, working across the federal interagency, have focused on risk management to both the Energy and Communications Sectors.
  • Two United States Magistrate Judges denied three requests for a geofence warrant to serve on Google to obtain cell phone data from an area of Chicago for three forty-five-minute periods on three different days. The courts took the unusual step of unsealing the opinions even though the proceedings are not adversarial; the person or people suspected of being involved with the alleged crime are presumably unaware of the application and therefore cannot contest it, and there is no indication in the decisions that Google took an adversarial position. However, Google did state in a filing that “[b]etween 2017 and 2018, Google saw a 1,500% increase in geofence requests…[and] [b]etween 2018 and 2019, that figure shot up another 500%.”
    • Moreover, one wonders whether prosecutors also sought similar warrants from other companies such as telecommunications providers. Nonetheless, the judges ruled the geofence warrant requests violated the Fourth Amendment to the U.S. Constitution in a number of ways and suggested that narrower, more particular requests might have been legal. (A rough sketch of the kind of filtering such a request contemplates appears after this item.)
    • In the first denial, the magistrate judge explained:
      • As to the first geofence request, the government has probable cause to believe that the suspect received the stolen pharmaceuticals from a commercial enterprise located within the designated geofence area during the designated forty-five minute interval in the early afternoon hours on the day of the first geofence request. The geofence, which has a 100-meter radius, is in a densely populated city, and the area contains restaurants, various commercial establishments, and at least one large residential complex, complete with a swimming pool, workout facilities, and other amenities associated with upscale urban living.
      • The second and third geofence requests focus on the same commercial enterprise where the government has probable cause to believe that the suspect shipped some of the stolen pharmaceuticals to a buyer, who purchased the pharmaceuticals from the suspect at the government’s direction. Again, the government’s requested geofence is a 100-meter radius area extending from the commercial establishment where the suspect shipped the pharmaceuticals and covers two separate dates for forty-five minute intervals in the early afternoon hours. This geofence includes medical offices and other single and multi-floor commercial establishments that are likely to have multiple patrons during the early afternoon hours.
      • The warrant application contemplates that the information will be obtained in three stages: (1) Google will be required to disclose to the government an anonymized list of devices that specifies information including the corresponding unique device ID, timestamp, coordinates, and data source, if available, of the devices that reported their location within the geofence during the forty-five minute periods; (2) the government will then review the list to prioritize the devices about which it wishes to obtain associated information; and (3) Google will then be required to disclose to the government the information identifying the Google account(s) for those devices about which the government further inquires. The warrant application includes no criteria or limitations as to which cellular telephones government agents can seek additional information.
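To make the staged disclosure described above concrete, below is a minimal sketch, in Python, of the kind of first-stage filtering such a request contemplates: selecting anonymized device records whose reported coordinates fall within a 100-meter radius of a target point during a forty-five-minute window. The record fields (device ID, timestamp, coordinates, data source) mirror those described in the warrant application, but the function and field names are hypothetical and are not drawn from Google’s systems or any court filing.

```python
# Hypothetical illustration of a first-stage geofence filter: given anonymized
# location records, return those reported inside a circular geofence during a
# specified time window. Names are invented for this sketch and do not
# reflect Google's actual systems.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

@dataclass
class LocationRecord:
    device_id: str       # anonymized device identifier
    timestamp: datetime  # when the device reported its location
    lat: float
    lon: float
    source: str          # e.g., "GPS", "WIFI", "CELL"

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def first_stage_filter(records, center_lat, center_lon, radius_m, start, end):
    """Return the anonymized records reported inside the geofence during the window."""
    return [
        r for r in records
        if start <= r.timestamp <= end
        and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    ]
```

Even this simple filter makes the breadth problem apparent: in a densely populated area, a 100-meter radius over forty-five minutes sweeps in every patron, resident, and passerby whose device reported a location, which is the over-breadth the magistrate judges found objectionable.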

Further Reading

  • “A Saudi Prince’s Attempt to Silence Critics on Twitter” By Bradley Hope and Justin Scheck – WIRED. Considering the United States Department of Justice indictments against three Saudi nationals in November 2019 and the resulting news stories (“Why Do We Tolerate Saudi Money in Tech?” – The New York Times and “Former Twitter employees charged with spying for Saudi Arabia by digging into the accounts of kingdom critics” – The Washington Post), one might wonder what news there is in this book excerpt. But we learn that Twitter’s anti-establishment stance led the company’s lawyers to suspend the Saudi Twitter employee who was the target of a U.S. investigation, which allowed him to flee the United States; government lawyers were livid. The bigger issue is foreign operatives infiltrating social media platforms and then reaping information about selected people, especially dissidents.
  • “When Algorithms Give Real Students Imaginary Grades” By Meredith Broussard – The New York Times. The International Baccalaureate (IB) program used an algorithm to hand out grades this past spring when in-person exams were cancelled. As you might imagine, it did not go well. The same was true in the United Kingdom for its A-level exams, causing a furor there. The case is made for never using algorithms in education or related fields.
  • “Wheely ride-hailing app writes to UK privacy watchdog over Moscow data demands” By Simon Goodley – The Guardian. A British ride-sharing company wrote the United Kingdom’s data protection authority about data requests made by the Moscow Department of Transportation (MDOT) on individual riders. Wheely made the case to the Information Commissioner’s Office (ICO) that it could not hand over the data under the General Data Protection Regulation (GDPR), unlike some of the app’s rivals who apparently complied with the demand. It is not clear whether the company’s GDPR obligations would apply in another jurisdiction, and it is possible Wheely is trying to smear the other companies in the U.K.
  • “Deepfake porn is now mainstream. And major sites are cashing in” By Matt Burgess – WIRED. Through the use of artificial intelligence technology, people are making fake pornography in which actresses’ faces are affixed to women’s bodies engaged in sexual acts. These deepfake porn videos are soaring in popularity, and there are often not good options for taking them down or taking legal action. This is another area in which technology has outpaced policy and law.
  • “Most cyber-security reports only focus on the cool threats” By Catalin Cimpanu – ZDNet. It turns out that commercial threat reports are issued with an eye toward generating business, and because governments and huge contractors have the deepest pockets, their issues of concern are covered while less lucrative areas, like threats to civil society, are largely ignored. These reports also influence policymakers and give them a distorted picture of cyber threats.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (13 August)

Here are Further Reading, Other Developments, and Coming Events:

Coming Events

  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.

Other Developments

  • Senate Intelligence Committee Acting Chair Marco Rubio (R-FL) and Vice Chairman Mark Warner (D-VA) released a statement indicating the committee had voted to adopt the fifth and final volume of its investigation of the Russian Federation’s interference in the 2016 election. The committee had submitted the report to the Intelligence Community for vetting and has received it back with edits and redactions. The report could be released sometime over the next few weeks. Rubio and Warner stated “the Senate Intelligence Committee voted to adopt the classified version of the final volume of the Committee’s bipartisan Russia investigation. In the coming days, the Committee will work to incorporate any additional views, as well as work with the Intelligence Community to formalize a properly redacted, declassified, publicly releasable version of the Volume 5 report.” The Senate Intelligence Committee has released four previous reports.
  • The National Institute of Standards and Technology (NIST) is accepting comments until 11 September on draft Special Publication 800-53B, “Control Baselines for Information Systems and Organizations,” a guidance document that will serve a key role in the United States government’s efforts to secure and protect the networks and systems it operates and those run by federal contractors. NIST explained:
    • This publication establishes security and privacy control baselines for federal information systems and organizations and provides tailoring guidance for those baselines. The use of the security control baselines is mandatory, in accordance with OMB Circular A-130 [OMB A-130] and the provisions of the Federal Information Security Modernization Act [FISMA], which requires the implementation of a set of minimum controls to protect federal information and information systems. Whereas use of the privacy control baseline is not mandated by law or [OMB A-130], SP 800-53B, along with other supporting NIST publications, is designed to help organizations identify the security and privacy controls needed to manage risk and satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974 [PRIVACT], selected OMB policies (e.g., [OMB A-130]), and designated Federal Information Processing Standards (FIPS), among others.
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released an “Election Vulnerability Reporting Guide” to provide “election administrators with a step-by-step guide, list of resources, and a template for establishing a successful vulnerability disclosure program to address possible vulnerabilities in their state and local election systems…[and] [t]he six steps include:
    • Step 1: Identify Systems Where You Would Accept Security Testing, and those Off-Limits
    • Step 2: Draft an Easy-to-Read Vulnerability Disclosure Policy (See Appendix III)
    • Step 3: Establish a Way to Receive Reports/Conduct Follow-On Communication
    • Step 4: Assign Someone to Thank and Communicate with Researchers
    • Step 5: Assign Someone to Vet and Fix the Vulnerabilities
    • Step 6: Consider Sharing Information with Other Affected Parties
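Step 3 of the guide, establishing a way to receive reports, is the most mechanical of the six and lends itself to a small illustration. Below is a minimal sketch, in Python using only the standard library, of an intake endpoint that accepts vulnerability reports as JSON and logs them for triage; the paths, field names, and port are hypothetical examples for this sketch and are not taken from CISA’s guide.

```python
# Hypothetical sketch of a minimal vulnerability-report intake endpoint,
# illustrating Step 3 (receive reports) of a disclosure program. All paths,
# field names, and addresses are invented for illustration.
import json
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="vuln_reports.log", level=logging.INFO)

class ReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/report":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        try:
            report = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            self.send_error(400, "Report must be valid JSON")
            return
        # Record the report for whoever is assigned to vet and fix it (Step 5).
        logging.info("New report: system=%s summary=%s contact=%s",
                     report.get("system"), report.get("summary"),
                     report.get("contact", "anonymous"))
        # Acknowledge receipt so the researcher can be thanked and kept informed (Step 4).
        self.send_response(202)
        self.end_headers()
        self.wfile.write(b"Thank you. Your report has been received for triage.\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReportHandler).serve_forever()
```

In practice an election office would put such an endpoint behind its existing web infrastructure, publish the intake address in its vulnerability disclosure policy (Step 2), and scope it to the systems identified in Step 1.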
  • The United Kingdom’s Information Commissioner’s Office (ICO) has issued “Guidance on AI and data protection” that “clarifies how you can assess the risks to rights and freedoms that AI can pose from a data protection perspective; and the appropriate measures you can implement to mitigate them.” The ICO explained “[w]hile data protection and ‘AI ethics’ overlap, this guidance does not provide generic ethical or design principles for your use of AI.” The ICO stated “[i]t corresponds to data protection principles, and is structured as follows:
    • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
    • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
    • part three addresses data minimisation and security; and
    • part four covers compliance with individual rights, including rights related to automated decision-making.
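Part two of the guidance, which covers assessing and improving AI system performance and mitigating potential discrimination, can be illustrated with a simple screening check. The sketch below, in Python, computes the selection rate of a binary classifier for each protected group and the gap between them (the demographic parity difference). This is one common fairness metric chosen here for illustration, not a measure prescribed by the ICO guidance, and the data and group labels are invented.

```python
# Minimal sketch of a common discrimination screening metric (demographic
# parity difference). Illustrative only; the ICO guidance does not prescribe
# a specific metric, and the data below is invented.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive (e.g., 'approve') predictions per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rates between any two groups (0 means parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: predictions from a hypothetical model and each person's group label.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))                # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(preds, groups))  # 0.5
```

A large gap does not by itself establish unlawful discrimination, but it is the kind of disparity an organisation would want to investigate and, where appropriate, mitigate under this part of the guidance.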
  • Twenty state attorneys general wrote Facebook Chief Executive Officer Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg “to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination.” They also asked “that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.” The attorneys general said that “[b]ased on our collective experience, we believe that Facebook should take additional actions including the following steps—many of which are highlighted in Facebook’s recent Civil Rights Audit—to strengthen its commitment to civil rights and fighting disinformation and discrimination:
    • Aggressively enforce Facebook policies against hate speech and organized hate organizations: Although Facebook has developed policies against hate speech and organizations that peddle it, we remain concerned that Facebook’s policies on Dangerous Individuals and Organizations, including but not limited to its policies on white nationalist and white supremacist content, are not enforced quickly and comprehensively enough. Content that violates Facebook’s own policies too often escapes removal just because it comes as coded language, rather than specific magic words. And even where Facebook takes steps to address a particular violation, it often fails to proactively address the follow-on actions by replacement or splinter groups that quickly emerge.
    • Allow public, third-party audits of hate content and enforcement: To gauge the ongoing progress of Facebook’s enforcement efforts, independent experts should be permitted access to the data necessary to conduct regular, transparent third-party audits of hate and hate-related misinformation on the platform, including any information made available to the Global Oversight Board. As part of this effort, Facebook should capture data on the prevalence of different forms of hate content on the platform, whether or not covered by Facebook’s own community standards, thus allowing the public to determine whether enforcement of anti-hate policies differs based on the type of hate content at issue.
    • Commit to an ongoing, independent analysis of Facebook’s content population scheme and the prompt development of best practices guidance: By funneling users toward particular types of content, Facebook’s content population scheme, including its algorithms, can push users into extremist online communities that feature divisive and inflammatory messages, often directed at particular groups. Although Facebook has conducted research and considered programs to reduce this risk, there is still no mandatory guidance for coders and other teams involved in content population. Facebook should commit to an ongoing, independent analysis of its content population scheme, including its algorithms, and also continuously implement mandatory protocols as best practices are identified to curb bias and prevent recommendations of hate content and groups.
    • Expand policies limiting inflammatory advertisements that vilify minority groups: Although Facebook currently prohibits ads that claim that certain people, because of their membership in a protected group, pose a threat to the physical safety of communities or the nation, its policies still allow attacks that characterize such groups as threats to national culture or values. The current prohibition should be expanded to include such ads.
  • New Zealand’s Ministry of Statistics “launched the Algorithm Charter for Aotearoa New Zealand” that “signals that [the nation’s agencies] are committed to being consistent, transparent and accountable in their use of algorithms.”
    • The Ministry explained “[t]he Algorithm Charter is part of a wider ecosystem and works together with existing tools, networks and research, including:
      • Principles for the Safe and Effective Use of Data and Analytics (Privacy Commissioner and Government Chief Data Steward, 2018)
      • Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation and Otago University, 2019)
      • Trustworthy AI in Aotearoa – AI Principles (AI Forum New Zealand, 2020)
      • Open Government Partnership, an international agreement to increase transparency.
      • Data Protection and Use Policy (Social Wellbeing Agency, 2020)
      • Privacy, Human Rights and Ethics Framework (Ministry of Social Development).
  • The European Union (EU) imposed its first cyber sanctions under its Framework for a Joint EU Diplomatic Response to Malicious Cyber Activities (aka the cyber diplomacy toolbox) against six hackers and three entities from the Russian Federation, the People’s Republic of China (PRC) and the Democratic People’s Republic of Korea for the attack against the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands, the malware attacks known as “NotPetya” and “WannaCry,” and Operation Cloud Hopper. The EU’s cyber sanctions follow sanctions the United States has placed on a number of people and entities from the same nations and also indictments the U.S. Department of Justice has announced over the years. The sanctions are part of the effort to levy costs on nations and actors that conduct cyber attacks. The EU explained:
    • The attempted cyber-attack was aimed at hacking into the Wi-Fi network of the OPCW, which, if successful, would have compromised the security of the network and the OPCW’s ongoing investigatory work. The Netherlands Defence Intelligence and Security Service (DISS) (Militaire Inlichtingen- en Veiligheidsdienst – MIVD) disrupted the attempted cyber-attack, thereby preventing serious damage to the OPCW.
    • “WannaCry” disrupted information systems around the world by targeting information systems with ransomware and blocking access to data. It affected information systems of companies in the Union, including information systems relating to services necessary for the maintenance of essential services and economic activities within Member States.
    • “NotPetya” or “EternalPetya” rendered data inaccessible in a number of companies in the Union, wider Europe and worldwide, by targeting computers with ransomware and blocking access to data, resulting amongst others in significant economic loss. The cyber-attack on a Ukrainian power grid resulted in parts of it being switched off during winter.
    • “Operation Cloud Hopper” has targeted information systems of multinational companies in six continents, including companies located in the Union, and gained unauthorised access to commercially sensitive data, resulting in significant economic loss.
  • The United States’ Federal Communications Commission (FCC) is asking for comments on the Department of Commerce’s National Telecommunications and Information Administration’s (NTIA) petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship,” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic. Comments are due by 2 September.
  • The Australian Competition & Consumer Commission (ACCC) released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks involving the Australian Treasury broke down.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses” according to the agency’s press release. In its initial filing, the ACCC claimed that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services were unaware that their location information was still being collected and used by Google, for it was not readily apparent that Web & App Activity also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) “released core guidance documentation for the Trusted Internet Connections (TIC) program, developed to assist agencies in protecting modern information technology architectures and services.” CISA explained “In accordance with the Office of Management and Budget (OMB) Memorandum (M) 19-26: Update to the TIC Initiative, TIC 3.0 expands on the original initiative to drive security standards and leverage advances in technology to secure a wide spectrum of agency network architectures.” Specifically, CISA released three core guidance documents:
    • Program Guidebook (Volume 1) – Outlines the modernized TIC program and includes its historical context
    • Reference Architecture (Volume 2) – Defines the concepts of the program to guide and constrain the diverse implementations of the security capabilities
  • Senators Ron Wyden (D-OR), Bill Cassidy (R-LA) and ten other Members wrote the Federal Trade Commission (FTC) urging the agency “to investigate widespread privacy violations by companies in the advertising technology (adtech) industry that are selling private data about millions of Americans, collected without their knowledge or consent from their phones, computers, and smart TVs.” They asked the FTC “to use its authority to conduct broad industry probes under Section 6(b) of the FTC Act to determine whether adtech companies and their data broker partners have violated federal laws prohibiting unfair and deceptive business practices.” They argued “[t]he FTC should not proceed with its review of the Children’s Online Privacy Protection Act (COPPA) Rule before it has completed this investigation.”
  •  “100 U.S. women lawmakers and current and former legislators from around the world,” including Speaker of the House Nancy Pelosi (D-CA), sent a letter to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg urging the company “to take decisive action to protect women from rampant and increasing online attacks on their platform that have caused many women to avoid or abandon careers in politics and public service.” They noted “[j]ust a few days ago, a manipulated and widely shared video that depicted Speaker Pelosi slurring her speech was once again circulating on major social media platforms, gaining countless views before TikTok, Twitter, and YouTube all removed the footage…[and] [t]he video remains on Facebook and is labeled “partly false,” continuing to gain millions of views.” The current and former legislators “called on Facebook to enforce existing rules, including:
    • Quick removal of posts that threaten candidates with physical violence, sexual violence or death, and that glorify, incite or praise violence against women; disable the relevant accounts, and refer offenders to law enforcement.
    • Eliminate malicious hate speech targeting women, including violent, objectifying or dehumanizing speech, statements of inferiority, and derogatory sexual terms;
    • Remove accounts that repeatedly violate terms of service by threatening, harassing or doxing or that use false identities to attack women leaders and candidates; and
    • Remove manipulated images or videos misrepresenting women public figures.
  • The United States’ Departments of Commerce and Homeland Security released an update “highlighting more than 50 activities led by industry and government that demonstrate progress in the drive to counter botnet threats.” In May 2018, the agencies submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats” that identified a number of steps and prompted the follow-on “A Road Map Toward Resilience Against Botnets” released in November 2018.
  • United States (U.S.) Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders released a joint statement explaining that “[t]he U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case.”
    • Maximillian Schrems filed a complaint against Facebook with Ireland’s Data Protection Commission (DPC) in 2013, alleging that the company’s transfer of his personal data violated his rights under European Union law because of the mass U.S. surveillance revealed by former National Security Agency (NSA) contractor Edward Snowden. Ultimately, this case resulted in a 2015 Court of Justice of the European Union (CJEU) ruling that invalidated the Safe Harbor agreement under which the personal data of EU residents was transferred to the US by commercial concerns. The EU and US executed a follow on agreement, the EU-U.S. Privacy Shield, that was designed to address some of the problems the CJEU turned up, and the U.S. passed a law, the “Judicial Redress Act of 2015” (P.L. 114-126), to provide EU citizens a way to exercise their EU rights in US courts via the “Privacy Act of 1974.”
    • However, Schrems continued and soon sought to challenge the legality of the European Commission’s signing off on the Privacy Shield agreement, the adequacy decision issued in 2016, and also the use of standard contractual clauses (SCC) by companies for the transfer of personal data to the US. The CJEU struck down the adequacy decision, throwing into doubt many entities’ transfers out of the EU into the U.S. but upheld SCCs in a way that suggested EU data protection authorities (DPA) may need to review all such agreements to ensure they comply with EU law.
  • The European Commission (EC) announced “an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The EC voiced its concern “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC detailed its “preliminary competition concerns:
    • Following its first phase investigation, the Commission has concerns about the impact of the transaction on the supply of online search and display advertising services (the sale of advertising space on, respectively, the result page of an internet search engine or other internet pages), as well as on the supply of ”ad tech” services (analytics and digital tools used to facilitate the programmatic sale and purchase of digital advertising). By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to Fitbit’s one.
    • The data collected via wrist-worn wearable devices appears, at this stage of the Commission’s review of the transaction, to be an important advantage in the online advertising markets. By increasing the data advantage of Google in the personalisation of the ads it serves via its search engine and displays on other internet pages, it would be more difficult for rivals to match Google’s online advertising services. Thus, the transaction would raise barriers to entry and expansion for Google’s competitors for these services, to the ultimate detriment of advertisers and publishers that would face higher prices and have less choice.
    • At this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The Commission will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed.
    • In addition, the Commission will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • In February, after the deal had been announced, the European Data Protection Board (EDPB) made clear its position that Google and Fitbit will need to scrupulously observe the General Data Protection Regulation’s privacy and data security requirements if the body is to sign off on the proposed $2.1 billion acquisition. At that point, Google had not yet informed European Union (EU) regulators of the proposed deal. The deal comes at a time when both EU and U.S. regulators are already investigating Google for alleged antitrust and anticompetitive practices, and the EDPB’s opinion could carry weight in this process.
  • The United States’ (U.S.) Department of Homeland Security released a Privacy Impact Assessment for the U.S. Border Patrol (USBP) Digital Forensics Programs that details how it may conduct searches of electronic devices at the U.S. border and ports of entry. DHS explained
    • As part of USBP’s law enforcement duties, USBP may search and extract information from electronic devices, including: laptop computers; thumb drives; compact disks; digital versatile disks (DVDs); mobile phones; subscriber identity module (SIM) cards; digital cameras; vehicles; and other devices capable of storing electronic information.
    • Last year, a U.S. District Court held that U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement’s (ICE) current practices for searches of smartphones and computers at the U.S. border are unconstitutional and the agencies must have reasonable suspicion before conducting such a search. However, the Court declined the plaintiffs’ request that the information taken off of their devices be expunged by the agencies. This ruling follows a Department of Homeland Security Office of the Inspector General (OIG) report that found CBP “did not always conduct searches of electronic devices at U.S. ports of entry according to its Standard Operating Procedures” and asserted that “[t]hese deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit [CBP’s] ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.”
    • In terms of a legal backdrop, the United States Supreme Court has found that searches and seizures of electronic devices at borders and airports are subject to lesser legal standards than those conducted elsewhere in the U.S. under most circumstances. Generally, the government’s interest in securing the border against the flow of contraband and people not allowed to enter allows considerable leeway relative to the warrant requirements that govern many other types of searches. However, in recent years two federal appeals courts (the Fourth and Ninth Circuits) have held that searches of electronic devices require suspicion on the part of government agents, while another appeals court (the Eleventh Circuit) has held differently. Consequently, there is not a uniform legal standard for these searches.
  • The Inter-American Development Bank (IDB) and the Organization of American States (OAS) released their second assessment of cybersecurity across Latin America and the Caribbean that used the Cybersecurity Capacity Maturity Model for Nations (CMM) developed at the University of Oxford’s Global Cyber Security Capacity Centre (GCSCC). The IDB and OAS explained:
    • When the first edition of the report “Cybersecurity: Are We Ready in Latin America and the Caribbean?” was released in March 2016, the IDB and the OAS aimed to provide the countries of Latin America and the Caribbean (LAC) not only with a picture of the state of cybersecurity but also guidance about the next steps that should be pursued to strengthen national cybersecurity capacities. This was the first study of its kind, presenting the state of cybersecurity with a comprehensive vision and covering all LAC countries.
    • The great challenges of cybersecurity, like those of the internet itself, are of a global nature. Therefore, it is undeniable that the countries of LAC must continue to foster greater cooperation among themselves, while involving all relevant actors, as well as establishing a mechanism for monitoring, analysis, and impact assessment related to cybersecurity both nationally and regionally. More data in relation to cybersecurity would allow for the introduction of a culture of cyberrisk management that needs to be extended both in the public and private sectors. Countries must be prepared to adapt quickly to the dynamic environment around us and make decisions based on a constantly changing threat landscape. Our member states may manage these risks by understanding the impact on and the likelihood of cyberthreats to their citizens, organizations, and national critical infrastructure. Moving to the next level of maturity will require a comprehensive and sustainable cybersecurity policy, supported by the country’s political agenda, with allocation of  financial resources and qualified human capital to carry it out.
    • The COVID-19 pandemic will pass, but events that will require intensive use of digital technologies so that the world can carry on will continue happening. The challenge of protecting our digital space will, therefore, continue to grow. It is the hope of the IDB and the OAS that this edition of the report will help LAC countries to have a better understanding of their current state of cybersecurity capacity and be useful in the design of the policy initiatives that will lead them to increase their level of cyberresilience.
  • The European Data Protection Supervisor (EDPS) issued an opinion on “the European Commission’s action plan for a comprehensive Union policy on preventing money laundering and terrorism financing (C(2020)2800 final), published on 7 May 2020.” The EDPS asserted:
    • While the EDPS acknowledges the importance of the fight against money laundering and terrorism financing as an objective of general interest, we call for the legislation to strike a balance between the interference with the fundamental rights of privacy and personal data protection and the measures that are necessary to effectively achieve the general interest goals on anti-money laundering and countering the financing of terrorism (AML/CFT) (the principle of proportionality).
    • The EDPS recommends that the Commission monitors the effective implementation of the existing AML/CFT framework while ensuring that the GDPR and the data protection framework are respected and complied with. This is particularly relevant for the works on the interconnection of central bank account mechanisms and beneficial ownership registers that should be largely inspired by the principles of data minimisation, accuracy and privacy-by-design and by default.

Further Reading

  • “China already has your data. Trump’s TikTok and WeChat bans can’t stop that.” By Aynne Kokas – The Washington Post. This article persuasively makes the case that even if a ban on TikTok and WeChat were to work, and there are substantive questions as to how a ban would work given how widely the former has been downloaded, the People’s Republic of China (PRC) is almost certainly acquiring massive reams of data on Americans through a variety of apps, platforms, and games. For example, Tencent, owner of WeChat, has a 40% stake in Epic Games, maker of Fortnite, a massively popular multiplayer game (if you have never heard of it, ask one of the children in your family). Moreover, a recent change to PRC law mandates that companies operating in the PRC share their databases for cybersecurity reviews, which may provide an opportunity, apart from hacking and exfiltrating data from United States entities, to access data. In sum, if the Trump Administration is serious about stopping the flow of data from the U.S. to the PRC, these executive orders will do very little.
  • Big Tech Makes Inroads With the Biden Campaign” by David McCabe and Kenneth P. Vogel – The New York Times. Most likely long before former Vice President Joe Biden clinched the Democratic nomination, advisers volunteered to help plot out his policy positions, a process that intensified this year. Of course, this includes technology policy, and many of those volunteering for the campaign’s Innovation Policy Committee have worked or are working for large technology companies directly or as consultants or lobbyists. This piece details some of these people and their relationships and how the Biden campaign is managing possible conflicts of interest. Naturally, those on the left wing of the Democratic Party calling for tighter antitrust, competition, and privacy regulation are concerned that Biden might be pulled away from these positions despite his public statements arguing that the United States government needs to get tougher with some practices.
  • “A Bible Burning, a Russian News Agency and a Story Too Good to Check Out” By Matthew Rosenberg and Julian E. Barnes – The New York Times. The Russian Federation seems to be using a new tactic, with some success, for sowing discord in the United States that is the information equivalent of throwing fuel onto a fire. In this case, a Russian outlet manufactured a fake story amplifying an actual event, and the story went viral after being seized on by some prominent Republicans, in part because it fit their preferred world view of protestors. We will likely see more of this, and it is not confined to fake stories intended to appeal to the right; the same is happening with content meant for the left wing in the United States.
  • Facebook cracks down on political content disguised as local news” by Sara Fischer – Axios. As part of its continuing effort to crack down on violations of its policies, Facebook will no longer allow groups with a political viewpoint to masquerade as news. The company and outside experts have identified a range of instances where groups propagating a viewpoint, as opposed to reporting, have used a Facebook exemption by pretending to be local news outlets.
  • “QAnon groups have millions of members on Facebook, documents show” By Ari Sen and Brandy Zadrozny – NBC News. It appears as if some Facebook employees are leaking the results of an internal investigation that identified more than 1 million users who are part of QAnon groups. Most likely these employees want the company to take a stronger stance on the conspiracy group QAnon like the company has with COVID-19 lies and misinformation.
  • And, since Senator Kamala Harris (D-CA) was named former Vice President Joe Biden’s (D-DE) vice presidential pick, this article has become even more relevant than when I highlighted it in late July: “New Emails Reveal Warm Relationship Between Kamala Harris And Big Tech” – HuffPost. Obtained via a Freedom of Information request, new emails from Senator Kamala Harris’ (D-CA) tenure as her state’s attorney general suggest she was willing to overlook the role Facebook, Google, and others played and still play in one of her signature issues: revenge porn. This article makes the case that Harris came down hard on a scammer running a revenge porn site but did not press the tech giants with any vigor to take down such material from their platforms. Consequently, the case is made that her selection as former Vice President Joe Biden’s running mate would signal a go-easy approach to large companies even though many Democrats have been calling to break up these companies and vigorously enforce antitrust laws. Harris has largely not engaged on tech issues during her tenure in the Senate. To be fair, many of these companies are headquartered in California and pump billions of dollars into the state’s economy annually, putting Harris in a tricky position politically. Of course, such pieces should be taken with a grain of salt since they may have been suggested or planted by one of Harris’ rivals for the vice presidential nomination or someone looking to settle a score.
  • Unwanted Truths: Inside Trump’s Battles With U.S. Intelligence Agencies” by Robert Draper – The New York Times. A deeply sourced article on the outright antipathy between President Donald Trump and Intelligence Community officials, particularly over the issue of how deeply Russia interfered in the election in 2016. A number of former officials have been fired or forced out because they refused to knuckle under to the White House’s desire to soften or massage conclusions of Russia’s past and current actions to undermine the 2020 election in order to favor Trump.
  • Huawei says it’s running out of chips for its smartphones because of US sanctions” By Kim Lyons – The Verge and “Huawei: Smartphone chips running out under US sanctions” by Joe McDonald – The Associated Press. United States (U.S.) sanctions have started biting the Chinese technology company Huawei, which announced it will likely run out of processor chips for its smartphones. U.S. sanctions bar any company from selling high technology items like processors to Huawei, and this capability is not independently available in the People’s Republic of China (PRC) at present.
  • “Targeting WeChat, Trump Takes Aim at China’s Bridge to the World” By Paul Mozur and Raymond Zhong – The New York Times. This piece explains WeChat, the app the Trump Administration is trying to ban in the United States (U.S.) without any warning. It is like a combination of Facebook, WhatsApp, news app, and payment platform and is used by more than 1.2 billion people.
  • “This Tool Could Protect Your Photos From Facial Recognition” By Kashmir Hill – The New York Times. Researchers at the University of Chicago have found a method of subtly altering photos of people that appears to foil most facial recognition technologies (a toy sketch of the general idea appears after this list). However, a number of experts interviewed said it is too late to stop companies like Clearview AI.
  • I Tried to Live Without the Tech Giants. It Was Impossible.” By Kashmir Hill – The New York Times. This New York Times reporter tried living without the products of large technology companies, which involved some fairly obvious challenges and some that were not so obvious. Of course, it was hard for her to skip Facebook, Instagram, and the like, but cutting out Google and Amazon proved hardest and basically impossible because of the latter’s cloud presence and the former’s web presence. The fact that some of the companies cannot be avoided if one wants to be online likely lends weight to those making the case these companies are anti-competitive.
  • To Head Off Regulators, Google Makes Certain Words Taboo” by Adrianne Jeffries – The Markup. Apparently, in what is a standard practice at large companies, employees at Google were coached to avoid using certain terms or phrases that antitrust regulators would take notice of such as: “market,” “barriers to entry,” and “network effects.” The Markup obtained a 16 August 2019 document titled “Five Rules of Thumb For Written Communications” that starts by asserting “[w]ords matter…[e]specially in antitrust laws” and goes on to advise Google’s employees:
    • We’re out to help users, not hurt competitors.
    • Our users should always be free to switch, and we don’t lock anyone in.
    • We’ve got lots of competitors, so don’t assume we control or dominate any market.
    • Don’t try and define a market or estimate our market share.
    • Assume every document you generate, including email, will be seen by regulators.
  • “Facebook Fired An Employee Who Collected Evidence Of Right-Wing Pages Getting Preferential Treatment” By Craig Silverman and Ryan Mac – BuzzFeed News. A Facebook engineer was fired after adducing proof in an internal communications system that the social media platform is more willing to change “false” and other negative fact-check ratings applied to claims made by conservative outlets and personalities than to those of any other viewpoint. If this is true, it would be the opposite of the narrative spun by the Trump Administration and many Republicans in Congress. Moreover, Facebook’s incentives would seem to align with giving conservatives more preferential treatment: many of these websites advertise on Facebook, the company probably does not want to get crosswise with the Administration, sensational posts and content drive engagement, which increases user numbers and allows for higher ad rates, and the company wants to appear fair and impartial.
  • “How Pro-Trump Forces Work the Refs in Silicon Valley” By Ben Smith – The New York Times. This piece traces the nearly four-decade-old effort of Republicans to sway mainstream media and now Silicon Valley to their viewpoint.
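
On the photo-cloaking research mentioned above, here is a minimal, purely illustrative sketch (assuming Python with NumPy) of the general idea: add a small, bounded perturbation to a photo so that a face recognition system no longer matches it to the person, while the change stays invisible to a human viewer. This is not the University of Chicago researchers’ actual method, which optimizes the perturbation against a face-recognition feature extractor rather than drawing it at random.

    import numpy as np

    def cloak(image: np.ndarray, epsilon: float = 3.0, seed: int = 0) -> np.ndarray:
        """Return a copy of an HxWx3 uint8 image with a small, bounded perturbation.

        Toy stand-in only: real cloaking tools optimize the perturbation against a
        face-recognition feature extractor so the photo no longer matches the
        person, while keeping the change imperceptible to the human eye.
        """
        rng = np.random.default_rng(seed)
        perturbation = rng.uniform(-epsilon, epsilon, size=image.shape)
        cloaked = np.clip(image.astype(np.float64) + perturbation, 0.0, 255.0)
        return cloaked.astype(np.uint8)

    if __name__ == "__main__":
        photo = np.random.default_rng(1).integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
        protected = cloak(photo)
        # Each pixel changes by at most epsilon, i.e., imperceptibly to a viewer.
        print(int(np.abs(protected.astype(int) - photo.astype(int)).max()))

The design question the reporting raises is exactly the one the experts flag: once a company has already scraped billions of unaltered photos, perturbing new uploads offers only limited protection.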

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo credit: Gerd Altmann on Pixabay

Further Reading, Other Developments, and Coming Events (24 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • On  27 July, the House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee will hold its sixth hearing on “Online Platforms and Market Power” titled “Examining the Dominance of Amazon, Apple, Facebook, and Google” that will reportedly have the heads of the four companies as witnesses.
  • On 28 July, the Senate Commerce, Science, and Transportation Committee’s Communications, Technology, Innovation, and the Internet Subcommittee will hold a hearing titled “The PACT Act and Section 230: The Impact of the Law that Helped Create the Internet and an Examination of Proposed Reforms for Today’s Online World.”
  • On 28 July the House Science, Space, and Technology Committee’s Investigations and Oversight and Research and Technology Subcommittees will hold a joint virtual hearing titled “The Role of Technology in Countering Trafficking in Persons” with these witnesses:
    • Ms. Anjana Rajan, Chief Technology Officer, Polaris
    • Mr. Matthew Daggett, Technical Staff, Humanitarian Assistance and Disaster Relief Systems Group, Lincoln Laboratory, Massachusetts Institute of Technology
    • Ms. Emily Kennedy, President and Co-Founder, Marinus Analytics
  •  On 28 July, the House Homeland Security Committee’s Cybersecurity, Infrastructure Protection, & Innovation Subcommittee will hold a hearing titled “Secure, Safe, and Auditable: Protecting the Integrity of the 2020 Elections” with these witnesses:
    • Mr. David Levine, Elections Integrity Fellow, Alliance for Securing Democracy, German Marshall Fund of the United States
    • Ms. Sylvia Albert, Director of Voting and Elections, Common Cause
    • Ms. Amber McReynolds, Chief Executive Officer, National Vote at Home Institute
    • Mr. John Gilligan, President and Chief Executive Officer, Center for Internet Security, Inc.
  • On 30 July the House Oversight and Reform Committee will hold a hearing on the tenth “Federal Information Technology Acquisition Reform Act” (FITARA) scorecard on federal information technology.
  • On 30 July, the Senate Commerce, Science, and Transportation Committee’s Security Subcommittee will hold a hearing titled “The China Challenge: Realignment of U.S. Economic Policies to Build Resiliency and Competitiveness” with these witnesses:
    • The Honorable Nazak Nikakhtar, Assistant Secretary for Industry and Analysis, International Trade Administration, U.S. Department of Commerce
    • Dr. Rush Doshi, Director of the Chinese Strategy Initiative, The Brookings Institution
    • Mr. Michael Wessel, Commissioner, U.S. – China Economic and Security Review Commission
  • On 4 August, the Senate Armed Services Committee will hold a hearing titled “Findings and Recommendations of the Cyberspace Solarium Commission” with these witnesses:
    • Senator Angus S. King, Jr. (I-ME), Co-Chair, Cyberspace Solarium Commission
    • Representative Michael J. Gallagher (R-WI), Co-Chair, Cyberspace Solarium Commission
    • Brigadier General John C. Inglis, ANG (Ret.), Commissioner, Cyberspace Solarium Commission
  • On 6 August, the Federal Communications Commission (FCC) will hold an open meeting to likely consider the following items:
    • C-band Auction Procedures. The Commission will consider a Public Notice that would adopt procedures for the auction of new flexible-use overlay licenses in the 3.7–3.98 GHz band (Auction 107) for 5G, the Internet of Things, and other advanced wireless services. (AU Docket No. 20-25)
    • Radio Duplication Rules. The Commission will consider a Report and Order that would eliminate the radio duplication rule with regard to AM stations and retain the rule for FM stations. (MB Docket Nos. 19-310, 17-105)
    • Common Antenna Siting Rules. The Commission will consider a Report and Order that would eliminate the common antenna siting rules for FM and TV broadcaster applicants and licensees. (MB Docket Nos. 19-282, 17-105)
    • Telecommunications Relay Service. The Commission will consider a Report and Order to repeal certain TRS rules that are no longer needed in light of changes in technology and voice communications services. (CG Docket No. 03-123)

Other Developments

  • Slack filed an antitrust complaint with the European Commission (EC) against Microsoft alleging that the latter’s tying Microsoft Teams to Microsoft Office is a move designed to push the former out of the market. A Slack vice president said in a statement “Slack threatens Microsoft’s hold on business email, the cornerstone of Office, which means Slack threatens Microsoft’s lock on enterprise software.” While the filing of a complaint does not mean the EC will necessarily investigate, under its new leadership the EC has signaled in a number of ways its intent to address the size of some technology companies and the effect on competition.
  • The National Institute of Standards and Technology (NIST) has issued for comment the second draft of NISTIR 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM). NIST claimed this guidance document “promotes greater understanding of the relationship between cybersecurity risk management and ERM, and the benefits of integrating those approaches…[and] contains the same main concepts as the initial public draft, but their presentation has been revised to clarify the concepts and address other comments from the public.” Comments are due by 21 August 2020.
  • The United States National Security Commission on Artificial Intelligence (NSCAI) published its Second Quarter Recommendations, a compilation of policy proposals made this quarter. NSCAI said it is still on track to release its final recommendations in March 2021. The NSCAI asserted
    • The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.
    • The NSCAI stated it “has focused its analysis and recommendations on six areas:
    • Advancing the Department of Defense’s internal AI research and development capabilities. The Department of Defense (DOD) must make reforms to the management of its research and development (R&D) ecosystem to enable the speed and agility needed to harness the potential of AI and other emerging technologies. To equip the R&D enterprise, the NSCAI recommends creating an AI software repository; improving agency-wide authorized use and sharing of software, components, and infrastructure; creating an AI data catalog; and expanding funding authorities to support DOD laboratories. DOD must also strengthen AI Test and Evaluation, Verification and Validation capabilities by developing an AI testing framework, creating tools to stand up new AI testbeds, and using partnered laboratories to test market and market-ready AI solutions. To optimize the transition from technological breakthroughs to application in the field, Congress and DOD need to reimagine how science and technology programs are budgeted to allow for agile development, and adopt the model of multi-stakeholder and multi-disciplinary development teams. Furthermore, DOD should encourage labs to collaborate by building open innovation models and an R&D database.
    • Accelerating AI applications for national security and defense. DOD must have enduring means to identify, prioritize, and resource the AI-enabled applications necessary to fight and win. To meet this challenge, the NSCAI recommends that DOD produce a classified Technology Annex to the National Defense Strategy that outlines a clear plan for pursuing disruptive technologies that address specific operational challenges. We also recommend establishing mechanisms for tactical experimentation, including by integrating AI-enabled technologies into exercises and wargames, to ensure technical capabilities meet mission and operator needs. On the business side, DOD should develop a list of core administrative functions most amenable to AI solutions and incentivize the adoption of commercially available AI tools.
    • Bridging the technology talent gap in government. The United States government must fundamentally re-imagine the way it recruits and builds a digital workforce. The Commission envisions a government-wide effort to build its digital talent base through a multi-prong approach, including: 1) the establishment of a National Reserve Digital Corps that will bring private sector talent into public service part-time; 2) the expansion of technology scholarship for service programs; and, 3) the creation of a national digital service academy for growing federal technology talent from the ground up.
    • Protecting AI advantages for national security through the discriminate use of export controls and investment screening. The United States must protect the national security sensitive elements of AI and other critical emerging technologies from foreign competitors, while ensuring that such efforts do not undercut U.S. investment and innovation. The Commission proposes that the President issue an Executive Order that outlines four principles to inform U.S. technology protection policies for export controls and investment screening, enhance the capacity of U.S. regulatory agencies in analyzing emerging technologies, and expedite the implementation of recent export control and investment screening reform legislation. Additionally, the Commission recommends prioritizing the application of export controls to hardware over other areas of AI-related technology. In practice, this requires working with key allies to control the supply of specific semiconductor manufacturing equipment critical to AI while simultaneously revitalizing the U.S. semiconductor industry and building the technology protection regulatory capacity of like-minded partners. Finally, the Commission recommends focusing the Committee on Foreign Investment in the United States (CFIUS) on preventing the transfer of technologies that create national security risks. This includes a legislative proposal granting the Department of the Treasury the authority to propose regulations for notice and public comment to mandate CFIUS filings for investments into AI and other sensitive technologies from China, Russia and other countries of special concern. The Commission’s recommendations would also exempt trusted allies and create fast tracks for vetted investors.
    • Reorienting the Department of State for great power competition in the digital age. Competitive diplomacy in AI and emerging technology arenas is a strategic imperative in an era of great power competition. Department of State personnel must have the organization, knowledge, and resources to advocate for American interests at the intersection of technology, security, economic interests, and democratic values. To strengthen the link between great power competition strategy, organization, foreign policy planning, and AI, the Department of State should create a Strategic Innovation and Technology Council as a dedicated forum for senior leaders to coordinate strategy and a Bureau of Cyberspace Security and Emerging Technology, which the Department has already proposed, to serve as a focal point and champion for security challenges associated with emerging technologies. To strengthen the integration of emerging technology and diplomacy, the Department of State should also enhance its presence and expertise in major tech hubs and expand training on AI and emerging technology for personnel at all levels across professional areas. Congress should conduct hearings to assess the Department’s posture and progress in reorienting to address emerging technology competition.
    • Creating a framework for the ethical and responsible development and fielding of AI. Agencies need practical guidance for implementing commonly agreed upon AI principles, and a more comprehensive strategy to develop and field AI ethically and responsibly. The NSCAI proposes a “Key Considerations” paradigm for agencies to implement that will help translate broad principles into concrete actions.
  • The Danish Defence Intelligence Service’s Centre for Cyber Security (CFCS) released its fifth annual assessment of the cyber threat against Denmark and concluded:
    • The cyber threat poses a serious threat to Denmark. Cyber attacks mainly carry economic and political consequences.
    • Hackers have tried to take advantage of the COVID-19 pandemic. This constitutes a new element in the general threat landscape.
    • The threat from cyber crime is VERY HIGH. No one is exempt from the threat. There is a growing threat from targeted ransomware attacks against Danish public authorities and private companies.
    • The threat from cyber espionage is VERY HIGH. The threat is especially directed against public authorities dealing with foreign and security policy issues as well as private companies whose knowledge is of interest to foreign states.
    • The threat from destructive cyber attacks is LOW. It is less likely that foreign states will launch destructive cyber attacks against Denmark. Private companies and public authorities operating in conflict-ridden regions are at a greater risk from this threat. 
    • The threat from cyber activism is LOW. Globally, the number of cyber activism attacks has dropped in recent years, and cyber activists rarely focus on Danish public authorities and private companies.
    • The threat from cyber terrorism is NONE. Serious cyber attacks aimed at creating effects similar to those of conventional terrorism presuppose a level of technical expertise and organizational resources that militant extremists, at present, do not possess. Also, the intention remains limited.
    • The technological development, including the development of artificial intelligence and quantum computing, creates new cyber security possibilities and challenges.

Further Reading

  • “Accuse, Evict, Repeat: Why Punishing China and Russia for Cyberattacks Fails” – The New York Times. This piece points out that the United States (US) government is largely using 19th Century responses to address 21st Century conduct by expelling diplomats, imposing sanctions, and indicting hackers. Even a greater use of offensive cyber operations does not seem to be deterring the US’s adversaries. It may turn out that the US and other nations will need to focus more on defensive measures and on securing their valuable data and information.
  • “New police powers to be broad enough to target Facebook” – Sydney Morning Herald. On the heels of a 2018 law that some argue will allow the government in Canberra to order companies to decrypt users’ communications, Australia is considering the enactment of new legislation because of concern among the nation’s security services about end-to-end encryption and dark browsing. In particular, Facebook’s proposed changes to secure its networks are seen as fertile ground for criminals, especially those seeking to prey on children sexually.
  • “The U.S. has a stronger hand in its tech battle with China than many suspect” – The Washington Post. A national security writer makes the case that the cries that the Chinese are coming may prove as overblown as similar claims made about the Japanese during the 1980s and the Russians during the Cold War. The Trump Administration has used some levers that appear to be impeding the People’s Republic of China’s attempt to displace the United States. In all, this writer is calling for more balance in viewing the PRC and some of the challenges it poses.
  • “Facebook is taking a hard look at racial bias in its algorithms” – Recode. After a civil rights audit that was critical of Facebook, the company is assembling and deploying teams to try to address the biases in its algorithms on Facebook and Instagram. Critics doubt the efforts will turn out well because economic incentives are aligned against rooting out such biases and because of the lack of diversity at the company.
  • Does TikTok Really Pose a Risk to US National Security?” – WIRED. This article asserts TikTok is probably no riskier than other social media apps even with the possibility that the People’s Republic of China (PRC) may have access to user data.
  • “France won’t ban Huawei, but encouraging 5G telcos to avoid it: report” – Reuters. Unlike the United States, the United Kingdom, and others, France will not outright ban Huawei from its 5G networks but will instead encourage its telecommunications companies to use European manufacturers. Some companies already have Huawei equipment on their networks and may receive authorization to use the company’s equipment for up to five more years. However, France is not planning on extending authorizations past that deadline, which will function as a de facto sunset. In contrast, authorizations for Ericsson or Nokia equipment were provided for eight years. The head of France’s cybersecurity agency stressed that France was not seeking to move against the People’s Republic of China (PRC) but is responding to security concerns.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading and Other Developments (29 June)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The Senate Commerce, Science, and Transportation Committee held an oversight hearing on the Federal Communications Commission (FCC) with the FCC Chair and four Commissioners.
  • New Zealand’s Parliament passed the “Privacy Act 2020,” a major update of its 1993 statute that would, according to New Zealand’s Privacy Commissioner, do the following:
    • Mandatory notification of harmful privacy breaches. If organisations or businesses have a privacy breach that poses a risk of serious harm, they are required to notify the Privacy Commissioner and affected parties. This change brings New Zealand in line with international best practice.
    • Introduction of compliance orders. The Commissioner may issue compliance notices to require compliance with the Privacy Act. Failure to follow a compliance notice could result in a fine of up to $10,000.
    • Binding access determinations. If an organisation or business refuses to make personal information available upon request, the Commissioner will have the power to demand release.
    • Controls on the disclosure of information overseas. Before disclosing New Zealanders’ personal information overseas, New Zealand organisations or businesses will need to ensure those overseas entities have similar levels of privacy protection to those in New Zealand.
    • New criminal offences. It will be an offence to mislead an organisation or business in a way that affects someone’s personal information or to destroy personal information if a request has been made for it.  The maximum fine for these offences is $10,000.
    • Explicit application to businesses whether or not they have a legal or physical presence in New Zealand. If an international digital platform is carrying on business in New Zealand, with the New Zealanders’ personal information, there will be no question that they will be obliged to comply with New Zealand law regardless of where they, or their servers are based.
  • The United States’ National Archives’ Information Security Oversight Office (ISOO) submitted its annual report to the White House and found:
    • Our Government’s ability to protect and share Classified National Security Information and Controlled Unclassified Information (CUI) continues to present serious challenges to our national security. While dozens of agencies now use various advanced technologies to accomplish their missions, a majority of them still rely on antiquated information security management practices. These practices have not kept pace with the volume of digital data that agencies create and these problems will worsen if we do not revamp our data collection methods for overseeing information security programs across the Government. We must collect and analyze data that more accurately reflects the true health of these programs in the digital age.
    • However, ISOO noted progress on efforts to better secure and protect CUI but added “[f]ull implementation will require additional resources, including dedicated funds and more full-time staff.”
    • Regarding classified information, ISOO found “Classified National Security Information policies and practices remain outdated and are unable to keep pace with the volume of digital data that agencies create.”
  • The Australian Strategic Policy Institute’s International Cyber Policy Centre released its most recent “Covid-19 Disinformation & Social Media Manipulation” report titled “ID2020, Bill Gates and the Mark of the Beast: how Covid-19 catalyses existing online conspiracy movements”:
    • Against the backdrop of the global Covid-19 pandemic, billionaire philanthropist Bill Gates has become the subject of a diverse and rapidly expanding universe of conspiracy theories. As an example, a recent poll found that 44% of Republicans and 19% of Democrats in the US now believe that Gates is linked to a plot to use vaccinations as a pretext to implant microchips into people. And it’s not just America: 13% of Australians believe that Bill Gates played a role in the creation and spread of the coronavirus, and among young Australians it’s 20%. Protests around the world, from Germany to Melbourne, have included anti-Gates chants and slogans.
    • This report takes a close look at a particular variant of the Gates conspiracy theories, which is referred to here as the ID2020 conspiracy (named after the non-profit ID2020 Alliance, which the conspiracy theorists claim has a role in the narrative), as a case study for examining the dynamics of online conspiracy theories on Covid-19. Like many conspiracy theories, that narrative builds on legitimate concerns, in this case about privacy and surveillance in the context of digital identity systems, and distorts them in extreme and unfounded ways.
  • The Pandemic Response Accountability Committee (PRAC) released "TOP CHALLENGES FACING FEDERAL AGENCIES: COVID-19 Emergency Relief and Response Efforts" for those agencies that received the bulk of funds under the "Coronavirus Aid, Relief, and Economic Security (CARES) Act" (P.L. 116-136). PRAC, which is housed within the Council of the Inspectors General on Integrity and Efficiency (CIGIE), is comprised of "21 Offices of Inspector General (OIG) overseeing agencies who received the bulk of the emergency funding." PRAC stated
    • CIGIE previously has identified information technology (IT) security and management as a long-standing, serious, and ubiquitous challenge that impacts agencies across the government, highlighting agencies’ dependence on reliable and secure IT systems to perform their mission-critical functions.  Key areas of concern have included safeguarding federal systems against cyberattacks and insider threats, modernizing and managing federal IT systems, ensuring continuity of operations, and recruiting and retaining a highly skilled cybersecurity workforce.  
    • These concerns remain a significant challenge, but are impacted by (1) widespread reliance on maximum telework to continue agency operations during the pandemic, which has strained agency networks and shifted IT resources, and (2) additional opportunities and targets for cyberattacks created by remote access to networks and increases in online financial activity.
  • Following the completion of a European Union-People’s Republic of China summit, European Commission President Ursula von der Leyen pointed to a number of ongoing technology-related issues between the EU and the PRC, including:
    • [W]e continue to have an unbalanced trade and investment relationship. We have not made the progress we aimed for in last year’s Summit statement in addressing market access barriers. We need to follow up on these commitments urgently. And we also need to have more ambition on the Chinese side in order to conclude negotiations on an investment agreement. These two actions would address the asymmetry in our respective market access and would improve the level playing field between us. In order to conclude the investment agreement, we would need in particular substantial commitments from China on the behaviour of state-owned enterprises, transparency in subsidies, and transparency on the topic of forced technology transfers.
    • We have raised these issues at the same time with President Xi and Premier Li that we expect that China will show the necessary level of ambition to conclude these negotiations by the end of this year. I think it is important that we have now a political, high-level approach on these topics.
    • I have also made it clear that China needs to engage seriously on a reform of the World Trade Organization, in particular on the future negotiations on industrial subsidies. This is the relevant framework where we have to work together on the topic – and it is a difficult topic – but this is the framework, which we have to establish to have common binding rules we agree on.
    • And we must continue to work on tackling Chinese overcapacity, for example in the steel and metal sectors, and in high technology. Here for us it is important that China comes back to the international negotiation table, that we sit down there and find solutions.
    • We also pointed out the importance of the digital transformation and its highly assertive approach to the security, the resilience and the stability of digital networks, systems and value chains. We have seen cyberattacks on hospitals and dedicated computing centres. Likewise, we have seen a rise of online disinformation. We pointed out clearly that this cannot be tolerated.
  • United States Secretary of State Mike Pompeo issued a statement titled “The Tide Is Turning Toward Trusted 5G Vendors,” in which he claimed:
    • The tide is turning against Huawei as citizens around the world are waking up to the danger of the Chinese Communist Party’s surveillance state. Huawei’s deals with telecommunications operators around the world are evaporating, because countries are only allowing trusted vendors in their 5G networks. Examples include the Czech Republic, Poland, Sweden, Estonia, Romania, Denmark, and Latvia. Recently, Greece agreed to use Ericsson rather than Huawei to develop its 5G infrastructure.
  • Germany’s highest court, the Bundesgerichtshof (BGH), ruled against Facebook’s claim that the country’s antitrust regulator was wrong in finding that the company abused its dominant position by combining data on German nationals and residents across its platforms. The matter now returns to a lower German court that is expected to heed the higher court’s ruling and allow the Bundeskartellamt’s restrictions on Facebook’s activity.
  • France’s Conseil d’État upheld the Commission nationale de l’informatique et des libertés’ (CNIL) 2019 fine of €50 million of Google under the General Data Protection Regulation (GDPR) “for lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”
  • A Virginia court ruled against House Intelligence Committee Ranking Member Devin Nunes (R-CA) in his suit against Twitter and Liz Mair, a Republican consultant, and Twitter accounts @devincow and @DevinNunesMom regarding alleged defamation.
  • The California Secretary of State has listed the ballot initiative to add the "California Privacy Rights Act" to the state’s law, in large part to amend the "California Consumer Privacy Act" (CCPA) (AB 375), as having qualified for November’s ballot.

Further Reading

  • “Wrongfully Accused by an Algorithm” – The New York Times. In what should have been predictable and foreseeable given the error rate of many facial recognition algorithms at correctly identifying people of color, an African American man was wrongly identified by this technology, leading to his arrest. Those in the field and experts stress positive identifications are supposed to be only one piece of evidence, but in this case, it was the only evidence police had. After a store loss-prevention specialist agreed that a person in a low-grade photo was the likely shoplifter, police arrested the man. Eventually, the charges were dismissed, initially without prejudice, leaving open the possibility of future prosecution, but later the district attorney cleared all charges and expunged the arrest.
  • “Pentagon Says it Needs ‘More Time’ Fixing JEDI Contract” – Nextgov. The saga of the Department of Defense’s Joint Enterprise Defense Infrastructure cloud contract continues. Amazon and Microsoft will need to submit revised bids for the possibly $10 billion procurement as the Department of Defense (DOD) is trying to cure the problems turned up by a federal court in the suit brought by Amazon. These bids would be evaluated later this summer, according to a recent DOD court filing. The next award of this contract could trigger another bid protest just as the first award caused Amazon to challenge Microsoft’s victory.
  • EU pushing ahead with digital tax despite U.S. resistance, top official says” – Politico. In an Atlantic Council event, European Commission Executive Vice President Margrethe Vestager stated the European Union will move ahead with an EU-wide digital services tax despite the recent pullout of the United States from talks on such a tax. The Organization for Economic Co-operation and Development had convened multi-lateral talks to resolve differences on how a global digital services tax will ideally function with most of the nations involved arguing for a 2% tax to be assessed in the nation where the transaction occurs as opposed to where the company is headquartered. EU officials claim agreement was within reach when the US removed itself from the talks. An EU-wide tax is of a piece with a more aggressive stance taken by the EU towards US technology companies, a number of which are currently under investigation for antitrust and anti-competitive behaviors.
  • “Verizon joins ad boycott of Facebook over hateful content” – Associated Press. The telecommunications company joined a number of other companies in pulling their advertising from Facebook in a boycott organized by the ADL (the Anti-Defamation League), the NAACP, Sleeping Giants, Color Of Change, Free Press and Common Sense. The #StopHateforProfit campaign “asks large Facebook advertisers to show they will not support a company that puts profit over safety,” and thus far, a number of companies are doing just that, including Eddie Bauer, Patagonia, North Face, Ben & Jerry’s, and others. In a statement, a Facebook spokesperson stated “[o]ur conversations with marketers and civil rights organizations are about how, together, we can be a force for good.” While Facebook has changed course due to this and other pressure regarding content posted or ads placed on its platform, most recently removing a Trump campaign ad with Nazi imagery, the company has not changed its position on allowing political ads with lies.
  • The UK’s contact tracing app fiasco is a master class in mismanagement” – MIT Technology Review. This after-action report on the United Kingdom’s National Health Service’s efforts to build its own COVID-19 contact tracing app is grim. The NHS is basically scrapping its work and opting for the Google/Apple API. However, the government in London is claiming “we will now be taking forward a solution that brings together the work on our app and the Google/Apple solution.” A far too ambitious plan married to organizational chaos led to the crash of the NHS effort.
  • “Trump administration sees no loophole in new Huawei curb” – Reuters. Despite repeated arguments by trade experts that the most recent United States Department of Commerce regulations on Huawei will not cut off access to high technology components, Secretary of Commerce Wilbur Ross claimed “[t]he Department of Commerce does not see any loopholes in this rule…[and] [w]e reaffirm that we will implement the rule aggressively and pursue any attempt to evade its intent.”
  • Defense Department produces list of Chinese military-linked companies” – Axios. Likely in response to a letter sent last year by Senate Minority Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR), the Department of Defense has finally fulfilled a requirement in the FY 1999 National Defense Authorization Act to update a list of “those persons operating directly or indirectly in the United States or any of its territories and possessions that are Communist Chinese military companies.” The DOD has complied and compiled a list of People’s Republic of China (PRC) entities linked to the PRC military. This provision in the FY 1999 NDAA also grants the President authority to “exercise International Emergency Economic Powers Act (IEEPA) authorities” against listed entities, which could include serious sanctions.
  • “Andrew Yang is pushing Big Tech to pay users for data” – The Verge. Former candidate for the Democratic presidential nomination Andrew Yang has started the Data Dividend Project, “a movement dedicated to taking back control of our personal data: our data is our property, and if we allow companies to use it, we should get paid for it.” Additionally, “[i]ts primary objective is to establish and enforce data property rights under laws such as the California Consumer Privacy Act (CCPA), which went into effect on January 1, 2020.” California Governor Gavin Newsom proposed a similar program in very vague terms in a State of the State address but never followed up on it, and Senator John Kennedy (R-LA) has introduced the “Own Your Own Data Act” (S. 806) to provide people with rights to sell their personal data.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Retha Ferguson from Pexels

Senate Commerce Marks Up Three Technology Bills

Three targeted bills are sent to the full Senate to address a range of technology issues.  

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

The Senate Commerce, Science, and Transportation Committee marked up a number of technology related bills at a 20 May executive session:

  • The “Identifying Outputs of Generative Adversarial Networks (IOGAN) Act” (S. 2904), which was amended twice before being reported out with an amendment in the nature of a substitute and another amendment changing the substitute. Broadly speaking, this bill would task the National Science Foundation with sponsoring and funding research into how to detect and prevent deep fakes through the use of artificial intelligence and machine learning.
  • The “Cybersecurity Competitions to Yield Better Efforts to Research the Latest Exceptionally Advanced Problems (CYBER LEAP) Act of 2020” (S. 3712) would require the Department of Commerce to conduct “grand challenges” for:
    • Building more resilient systems that measurably and exponentially raise adversary costs of carrying out common cyber attacks.
    • Empowering the people of the United States with an appropriate and measurably sufficient level of digital literacy to make safe and secure decisions online.
    • Developing a cybersecurity workforce with measurable skills to protect and maintain information systems.
    • Advancing cybersecurity efforts in response to emerging technology, such as artificial intelligence, quantum science, and next generation communications technologies.
    • Maintaining a high sense of usability while improving the security and safety of online activity of individuals in the United States.
    • Reducing cybersecurity risks to Federal networks and systems, and improving the response of Federal agencies to cybersecurity incidents on such networks and systems.
  • The “Spectrum IT Modernization Act of 2020” (S. 3717) requires the National Telecommunications and Information Administration (NTIA) to “submit to Congress a report that contains the plan of the NTIA to modernize and automate the infrastructure of the NTIA relating to managing the use of Federal spectrum by covered agencies so as to more efficiently manage that use” within 8 months of enactment. This bill could require agencies such as the Department of Defense to modernize any such IT used to manage federal spectrum.

In December, the House sent the Senate a bill related to the IOGAN Act also named the “Identifying Outputs of Generative Adversarial Networks Act” (H.R. 4355) that “directs the NSF to support research on manipulated or synthesized content and information security, including fundamental research on digital media forensic tools, social and behavioral research, and research awards coordinated with other federal agencies and programs.” Consequently, it is possible a compromise bill passes this year.
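
For context on the kind of research the IOGAN Act and H.R. 4355 contemplate, below is a hypothetical, toy sketch (assuming Python with NumPy and scikit-learn installed) of training a binary classifier to separate authentic content from GAN-generated content. The feature vectors here are made up; an actual detector funded under such legislation would learn features, such as frequency-domain artifacts, from real image and video corpora.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Made-up "feature vectors" standing in for statistics extracted from media files.
    real_features = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
    fake_features = rng.normal(loc=0.6, scale=1.2, size=(500, 8))  # subtly shifted distribution

    X = np.vstack([real_features, fake_features])
    y = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = authentic, 1 = GAN-generated

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {detector.score(X_test, y_test):.2f}")

The research challenge the legislation targets is precisely that real GAN outputs do not separate this cleanly; detectors tend to lag behind each new generation of synthesis models.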

Neither of the other bills has companion House legislation.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.