Further Reading, Other Developments, and Coming Events (13 October)

Further Reading

  • “False Rumors Often Start at the Top” By Shira Ovide — The New York Times. This piece traces how misinformation can arise from poorly phrased or ambiguous statements from authorities or famous people. Throw in a very liberal dose of people misinterpreting, and it’s a miracle there’s any clear communication online.
  • “With election day looming, Twitter imposes new limits on U.S. politicians — and ordinary users, too” By Elizabeth Dwoskin and Craig Timberg — The Washington Post. The social media platform will police misinformation and lies spread by American politicians with more than 100,000 followers, especially with respect to the outcome of elections that have not yet been decided. This change is part of a suite of measures to blunt the viral nature of incorrect or maliciously intended Tweets. An interesting change is one designed to add friction to retweeting by asking the user if they want to add their thoughts to a Tweet they are trying to retweet. Perhaps such modifications point the way to blunting how quickly bad or wrong information goes viral.
  • “Why Facebook Can’t Fix Itself” By Andrew Marantz — The New Yorker. This article lays bare the central tension in the social media platform: its income is driven by content that outrages or hooks people, and any serious effort to remove lies, misinformation, hate speech, and extremist material would remove the content it needs to outrage and hook people.
  • “Feds may target Google’s Chrome browser for breakup” By Leah Nylen — Politico. It appears there may be two antitrust suits against Google targeting three of the company’s businesses: online advertising, the online search market, and its Google Chrome browser. The United States Department of Justice and state attorneys general may ask courts to break up the company. Of course, the resolution of such a massive undertaking could take years to play out.
  • “Cyber Command has sought to disrupt the world’s largest botnet, hoping to reduce its potential impact on the election” By Ellen Nakashima — The Washington Post and “Microsoft seeks to disrupt Russian criminal botnet it fears could seek to sow confusion in the presidential election” By Jay Greene and Ellen Nakashima — The Washington Post. United States (U.S.) Cyber Command and Microsoft went at the same botnet from different directions ahead of the U.S. election in an attempt to batter and disorganize the Russian organization enough to foil any possible ransomware attacks on election systems.

Other Developments

  • The National Security Commission on Artificial Intelligence (NSCAI) sent its “2020 Interim Report and Third Quarter Recommendations” to Congress and the Trump Administration ahead of the March 2021 due date for its final report. Notably, the NSCAI is calling for Congress and the White House to decide which entity in the Executive Office of the President (EOP) should lead and coordinate the United States’ (U.S.) artificial intelligence (AI) efforts. Again, the NSCAI framed AI as a key part of the “great power” struggle between the U.S. and rivals like the People’s Republic of China, although it often goes unsaid that the U.S. is also theoretically competing with ostensible allies like the United Kingdom (UK) and the European Union (EU) to lead AI development and reap the national security and economic gains projected to accompany being the preeminent nation in this field. However, as with many such commissions, Congress and the Administration must navigate the jurisdictions of current government stakeholders, who are almost always reluctant to relinquish their claims to a policy field and will often work to preserve their role even at the cost of frustrating larger efforts. It is very likely Congress will fold recommendations into a future National Defense Authorization Act (NDAA), quite possibly the FY 2022 bill, since the final report will be delivered in the midst of the drafting and consideration of the annual bill that sets national security policy.
    • Nonetheless, the NSCAI stated “[t]his report represents our third quarterly memo as well as our second interim report mandated by Congress….[and] we present 66 recommendations flowing from several key ideas:
      • First, we must defend democracies from AI-enabled disinformation and other malign uses of AI by our adversaries.
      • Second, the government should expand and democratize basic AI research—the wellspring of our technological advantages.
      • Third, the government must build a digital ecosystem within national security departments and agencies for AI R&D.
      • Fourth, connecting technologists and operators will be the key to leveraging AI in all national security missions.
      • Fifth, we must close the tech talent deficit by strengthening STEM education, recruiting the best minds from around the world, and training the national security workforce.
      • Sixth, we must build a resilient domestic microelectronics industrial base.
      • Seventh, we will need interconnected strategies for technologies associated with AI including biotechnology and quantum computing.
      • Eighth, we cannot only focus on domestic initiatives in a global competition.
    • The NSCAI declared “[w]e must lead the development of AI technical standards and norms in international forums, and strengthen AI partnerships with allies and partners to build a digital future reflecting our values and protecting our interests.”
    • The NSCAI asserted:
      • The totality of the recommendations illustrates a key point: Laying out a vision is not enough. A winning strategy demands major muscle movements in and across departments and agencies, and significant executive and legislative action. It requires overcoming the technical, bureaucratic, and human obstacles to change, and driving very specific policies.
      • We believe the United States needs a new White House-led technology council to elevate AI-driven technology developments to the center of national decision-making, and a technology advisor to lead a new technology competitiveness strategy that integrates the complex interplay between technology, national security, and economic policies.
  • Key Republican stakeholders introduced the “Beat CHINA for 5G Act of 2020,” which would require the Federal Communications Commission (FCC) to auction off a prized piece of mid-band spectrum to speed the rollout of 5G in the United States (U.S.). The bill was introduced by Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), House Energy and Commerce Committee Ranking Member Greg Walden (R-OR), Oversight and Investigations Subcommittee Ranking Member Brett Guthrie (R-KY), and Representative Bob Latta (R-OH).
    • In their press release, they claimed:
      • The Beat CHINA for 5G Act of 2020 would empower the FCC to open more critical mid-band spectrum for non-federal, commercial wireless use by requiring the FCC to begin an auction of the 3.45-3.55 GHz band by December 2021.
      • In February 2018, the National Telecommunications and Information Administration (NTIA) identified the 3.45-3.55 GHz band as a candidate for potential repurposing. Earlier this year, NTIA released a technical report indicating that spectrum sharing opportunities were possible in this band.
      • In August 2020, the White House announced that it would make 100 MHz of mid-band spectrum in the 3.45-3.55 GHz band available for non-federal, commercial wireless use. In September 2020, the FCC took a first step to start transitioning existing services to make this band available for 5G use.
      • These actions by the Trump Administration are crucial to growing our economy and enhancing our national security. This legislation is the final step to making sure there are no delays and this auction stays on track.
    • In early August, the White House and the Department of Defense (DOD) announced they would make available 100 MHz of mid-band spectrum in the 3450-3550 MHz band. (See here for more detail.)
  • In a press release, the Department of Defense (DOD) detailed its “$600 million in awards for 5G experimentation and testing at five U.S. military test sites, representing the largest full-scale 5G tests for dual-use applications in the world.” These awards were made largely to prominent private sector technology and telecommunications companies vying to play leading roles in 5G. Of course, no awards were made to companies from the People’s Republic of China (PRC). Nonetheless, this announcement may provoke further claims from Members of Congress and stakeholders that the DOD’s effort is the camel’s nose under the tent of a nationalized 5G system.
    • This announcement is part of the DOD’s 5G Strategy that “provides the DOD approach to implementing the National Strategy to Secure 5G and aligns with the National Defense Authorization Act for Fiscal Year 2020 (FY2020), Section 254…[that] is also consistent with National Defense Strategy guidance to lead in key areas of great power competition and lethality to ensure 5G’s ‘impact on the battle network of the future.’”
    • In a related DOD release, it was explained:
      • The effort — Tranche 1 of the department’s larger 5G initiative — will accelerate adoption of 5G technology, enhance the effectiveness and lethality of U.S. combat forces, and further the development and use of common 5G standards to ensure interoperability with military partners and allies.
    • The DOD added:
      • Each installation will partner military Services, industry leaders, and academic experts to advance the Department’s 5G capabilities. Projects will include piloting 5G-enabled augmented/virtual reality for mission planning and training, testing 5G-enabled Smart Warehouses, and evaluating 5G technologies to enhance distributed command and control.
    • The DOD provided details on the 5G experimentation for these Tranche 1 sites:
      • Joint Base Lewis-McChord (JBLM), Washington – Augmented Reality/Virtual Reality Training 
        • The objective of this project is to rapidly field a scalable, resilient, and secure 5G network to provide a test bed for experimentation with a 5G-enabled Augmented Reality/Virtual Reality (AR/VR) capability for mission planning, distributed training, and operational use.  Industry partners at this site include:
        • GBL System Corp. (GBL): GBL’s Samsung-based 5G testbed will utilize mid-band spectrum to provide high capacity, low latency coverage at JBLM (Approximately 3 sq. mi.) and Yakima Training Center (Approximately 15 sq. mi.).
        • AT&T: AT&T will develop a system to allow use of 5G connectivity with present training devices.
        • Oceus Networks: Oceus will develop and field a Commercial Off-The-Shelf (COTS) based 5G handheld called Tough Mobile Device-5G (TMD-5G) for the field training environment.
        • Booz-Allen Hamilton (BAH): BAH will deliver an Army-owned, multivendor prototype for combat-like training using AR/VR technology in 5G-enhanced training locations based on an Open Systems Architecture (OSA).
      • Naval Base San Diego (NBSD), California – 5G Smart Warehousing (Transshipment)
        • The objective of this project is to develop a 5G-enabled Smart Warehouse focused on transshipment between shore facilities and naval units, to increase the efficiency and fidelity of naval logistic operations, including identification, recording, organization, storage, retrieval, and transportation of materiel and supplies.  Additionally, the project will create a proving ground for testing, refining, and validating emerging 5G-enabled technologies.  Industry partners at this site include:
        • AT&T: AT&T will quickly deploy (within 9 months) a network based on commercially available equipment to support 4G and 5G utilizing cellular spectrum in both the sub-6 GHz and millimeter wave bands.
        • GE Research: GE Research 5G-enabled applications will support real-time asset tracking, warehouse modeling and predictive analytics.
        • Vectrus Mission Solutions Corporation (Vectrus): Vectrus applications will provide industry-leading capabilities for inventory management, network security, robotic material moving, & environmental sensing.
        • Deloitte Consulting LLP (Deloitte): Deloitte will support a wide array of applications including Autonomous Mobile Robots, Unmanned Aircraft System (UAS) with autonomous drones, biometrics, cameras, AR/VR, and digitally tracked inventory.
      • Marine Corps Logistics Base (MCLB) Albany, Georgia – 5G Smart Warehousing (Vehicular)
        • This project will develop a 5G-enabled Smart Warehouse focused on vehicular storage and maintenance, to increase the efficiency and fidelity of MCLB Albany logistic operations, including identification, recording, organization, storage, retrieval, and inventory control of materiel and supplies.  Additionally, the project will create a proving ground for testing, refining, and validating emerging 5G-enabled technologies.  Industry partners at this site include:
        • Federated Wireless: Federated Wireless leverages open standards and an open solution to provide a testbed with both indoor and outdoor coverage, supporting a growing segment of the US 5G equipment market. 
        • GE Research: The GE approach will support real-time asset tracking, facility modeling and predictive analytics.
        • KPMG LLP: KPMG applications will create an integrated, automated, and digitized process for equipment and product movement throughout the warehouse.
        • Scientific Research Corporation (SRC): SRC’s 5G-enabled offering will demonstrate automated management and control of warehouse logistics, asset and inventory tracking, environmental management, and facility access control.
      • Nellis Air Force Base, Nevada – Distributed Command and Control
        • The objective of this effort is to develop a testbed for use of 5G technologies to aid in Air, Space, and Cyberspace lethality while enhancing command and control (C2) survivability.  Specifically, a 5G network will be employed to disaggregate and mobilize the existing C2 architectures in an agile combat employment scenario.
        • Industry partners at this site include:
        • AT&T: AT&T will provide an initially fixed then mobile 5G environment with high capacity and low latency to support the connectivity requirements associated with the mobile combined air operations centers.
      • Hill Air Force Base, Utah – Dynamic Spectrum Utilization
        • This project addresses the challenge of enabling Air Force radars to dynamically share spectrum with 5G cellular services.  The project will develop sharing/coexistence system prototypes and evaluate their effectiveness with real-world, at-scale networks in controlled environments.  The objective of this effort is to develop effective methodologies to allow the sharing or coexistence between airborne radar systems and 5G cellular telephony systems in the 3.1-3.45 GHz band.  Industry partners at this site include:
        • Nokia: The Nokia testbed includes traditional as well as open standards architectures including high-power massive multi-antenna systems.
        • General Dynamics Mission Systems, Inc. (GDMS): GDMS will develop and field a novel coexistence application that includes independent tracking of radar signals to support the radio access network in mitigation actions.
        • Booz Allen Hamilton (BAH): BAH’s approach utilizes Artificial Intelligence to provide a complete coexistence system with rapid response to interference. 
        • Key Bridge Wireless LLC: Key Bridge will demonstrate an adaptation of an existing commercial spectrum sharing approach for the 3.1-3.45 GHz band as a low risk solution to the coexistence issues.
        • Shared Spectrum Company (SSC): SSC’s approach aims to maintain continuous 5G communications via early radar detections and 5G-enabled Dynamic Spectrum Access.
        • Ericsson: Ericsson’s novel approach employs the 5G infrastructure to provide the required sensing coupled with Machine Learning and 5G-enabled spectrum aggregation.
  • Facebook announced it is suing two companies for data scraping in a suit filed in California state court. In its complaint, Facebook asserted:
    • Beginning no later than September 2019 and continuing until at least September 2020, Defendants BrandTotal Ltd. (“BrandTotal”) and Unimania, Inc. (“Unimania”) developed and distributed internet browser extensions (malicious extensions) designed to improperly collect data from Twitter, YouTube, LinkedIn, Amazon, Facebook, and Instagram. Defendants distributed the malicious extensions on the Google Chrome Store. Anyone who installed one of Defendants’ malicious extensions essentially self-compromised their browsers to run automated programs designed to collect data about its user from specific websites. As to Facebook and Instagram, when a user visited those sites with a self-compromised browser, Defendants used the malicious extensions to connect to Facebook computers and collect or “scrape” user profile information (including name, user ID, gender, date of birth, relationship status, and location information), advertisements and advertising metrics (including name of the advertiser, image and text of the advertisement, and user interaction and reaction metrics), and user Ad Preferences (user advertisement interest information). Defendants used the data collected by the malicious extensions to sell marketing intelligence and other services through the website brandtotal.com. Defendants’ conduct was not authorized by Facebook.
    • Facebook brings this action to stop Defendants’ violations of Facebook’s and Instagram’s Terms and Policies. Facebook also brings this action to obtain damages and disgorgement for breach of contract and unjust enrichment.
    • Of course, it is a bit entertaining to see Facebook take issue with the data collection techniques of others given the myriad ways it tracks so many people across the internet, especially when they are not even interacting with Facebook or logged into the social media platform. See here, here, and here for more on Facebook’s practices, some of which may even be illegal in a number of countries; some of the most egregious practices led to the record $5 billion fine levied by the Federal Trade Commission.

Coming Events

  • The European Union Agency for Cybersecurity (ENISA), Europol’s European Cybercrime Centre (EC3) and the Computer Emergency Response Team for the EU Institutions, Bodies and Agencies (CERT-EU) will hold the 4th annual IoT Security Conference series “to raise awareness on the security challenges facing the Internet of Things (IoT) ecosystem across the European Union:”
    • Artificial Intelligence – 14 October, 15:00-16:30 CET
    • Supply Chain for IoT – 21 October, 15:00-16:30 CET
  • The House Intelligence Committee will conduct a virtual hearing titled “Misinformation, Conspiracy Theories, and ‘Infodemics’: Stopping the Spread Online.”
  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • On October 29, the Federal Trade Commission (FTC) will hold a workshop titled “Green Lights & Red Flags: FTC Rules of the Road for Business” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • The Senate Commerce, Science, and Transportation Committee will reportedly hold a hearing on 29 October regarding 47 U.S.C. 230 with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Update To Pending Legislation In U.S. Congress, Part VI

An AI resolution was introduced in Congress to shape the national strategy, and a committee of jurisdiction looks at a national commission on AI’s recommendations.

Last week, we looked at the artificial intelligence (AI) legislation that could move during the balance of the Congressional year, but two recent developments should also be noted. I neglected to explain the introduction of a resolution “[e]xpressing the sense of Congress with respect to the principles that should guide the national artificial intelligence strategy of the United States.” Of course, this is not legislation and would have no legal force that this Administration or future Administrations would need to heed. Rather, this effort is intended to serve as a guide for future legislation and future administrative action.

Representatives Will Hurd (R-TX) and Robin Kelly (D-IL) introduced this resolution that was cosponsored by Representatives Steve Chabot (R-OH), Gerald Connolly (D-VA), Marc Veasey (D-TX), Seth Moulton (D-MA), Michael Cloud (R-TX), and Jim Baird (R-IN).

Hurd and Kelly have been working with the Bipartisan Policy Center, a Washington, D.C. think tank founded by four former Senate Majority Leaders to produce policy consensus of the sort that used to happen in Congress. They worked together on four white papers on AI.

The resolution states “[i]t is the sense of Congress that the following principles should guide the national artificial intelligence strategy of the United States:

(1) Global leadership.

(2) A prepared workforce.

(3) National security.

(4) Effective research and development.

(5) Ethics, reduced bias, fairness, and privacy.”

By way of contrast, the February 2019 Executive Order (EO) 13859 on Maintaining American Leadership in Artificial Intelligence stated “[i]t is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:

(a) The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.

(b) The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.

(c) The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.

(d) The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.

(e) The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.”

While the Trump Administration’s materials on the EO have mentioned civil liberties and privacy, they have largely not examined the potential effects of AI with respect to bias and fairness. Democrats have generally been keener to investigate potential problems with the algorithms underlying AI and similar technologies perpetuating racial and ethnic biases in western society. For example, facial recognition technology misidentifies African Americans, Latinos, and Asian Americans at much higher rates than American men of European descent. The Hurd/Kelly resolution would seem to focus more on these issues than the Trump Administration’s public materials on its AI efforts.

The two efforts would seem fairly close on the role the U.S. would ideally play in the international development of AI. The nation would lead the development and implementation of AI under both plans, with the additional gloss that the Trump Administration is more transparent in its notion that leading the world in AI will help ensure continued American military and commercial dominance in technology. Both are motivated, in significant part, by concerns that the People’s Republic of China (PRC) may continue on its current technological trajectory, surpass the U.S. in AI, and then be poised to lead the world according to its values in this field. It is possible the AI effort in the U.S. will be informed as much by this competition as various fields in the mid-20th Century were by the Cold War with the Soviet Union.

Otherwise, both are focused on workforce development, both to foster the types of education and training needed for people to work in AI and to help people in industries revolutionized or disrupted by AI. Likewise, both are concerned with maximizing R&D funding and efforts.

Last week, the House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee conducted a virtual hearing titled “Interim Review of the National Security Commission on Artificial Intelligence Effort and Recommendations” with these witnesses:

  • Dr. Eric Schmidt, Chairman, National Security Commission on Artificial Intelligence
  • HON Robert Work, Vice Chairman, National Security Commission on Artificial Intelligence
  • HON Mignon Clyburn, Commissioner, National Security Commission on Artificial Intelligence
  • Dr. José-Marie Griffiths, Commissioner, National Security Commission on Artificial Intelligence

Chair James Langevin (D-RI) stated:

  • Our intent for this commission was to ensure a bipartisan whole-of-government effort focused on solving national security issues, and we appreciate the leadership and hard work of our witnesses in supporting the commission’s efforts in that spirit.
  • [T]his Commission is working through the difficult issues requiring national investments in research and software development and new approaches on how to apply AI appropriately for national security missions; attract and hold onto the best talent; protect and build upon our technical advantages; best partner with our allies on AI; stay ahead of the threat posed by this technology in the hands of adversaries; and implement ethical requirements for responsible American-built AI.
  • Indeed, last year the Defense Innovation Board, which was also chaired until recently by Dr. Schmidt, helped the Department begin the necessary discussions on ethics in AI.
  • I applaud the Commission for being forward leaning by not only releasing an initial and annual report as required in law, but also releasing quarterly recommendations. Ranking Member [Elise] Stefanik (R-NY) and I, along with Chair Adam Smith (D-WA) and Ranking Member Mac Thornberry (R-TX), were pleased to support a package of provisions in this year’s House version of the FY 2021 National Defense Authorization Act (NDAA) (the “William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395)) based on the Commission’s first quarter’s recommendations. The House version carried 11 provisions, with the majority deriving from the Commission’s call to Strengthen the AI Workforce. We are pleased that both Commissioner Griffiths and Commissioner Clyburn are with us today to testify on the need for action on AI talent. 
  • On that note, we must implement policies that promote a sound economic, political, and strategic environment on U.S. soil where global collaboration, discovery, and innovation can all thrive. The open dialogue and debate resident in academia and the research community can be anathema to the requirement for secrecy in the Department of Defense.
  • But we must recognize – and embrace – how our free society provides the competitive advantage that lets us innovate faster than our great power competitors. Our free society enables a dynamic innovation ecosystem, and federally funded open basic research focused on discovery has allowed American universities to develop an innovation base that has effectively functioned as a talent acquisition program for the U.S. economy. And that talent is required today as much as ever to solve our most pressing national security challenges.
  • Indeed, great power competition is also a race for talent. We are looking forward to hearing about your efforts, the observations and recommendations you’ve already developed, and your plan to continue until you submit the Commission’s final report in the spring.

Ranking Member Elise Stefanik (R-NY) noted she introduced a bill in March 2018 to establish a national commission on AI and cosponsored the 11 amendments to H.R.6395 that added the Commission's first quarter recommendations to the House's FY 2021 NDAA. She asserted this represents a remarkable achievement that speaks to the quality of the recommendations made to policymakers. Stefanik said that in her remarks before the Commission she had spoken about the need for AI to be transformative, stressing that if AI does not fundamentally change the way the U.S. operates, adapt the collective defense, change workforce policy, and shift priorities and resources, then the U.S. is failing to embrace the technology to its fullest. She expressed pleasure that many of the initial recommendations address these issues.

Stefanik claimed the last several weeks have provided glimpses of the power of AI. She pointed to the Defense Advanced Research Projects Agency's (DARPA) AlphaDogFight demonstration, which pitted an experienced fighter pilot against an algorithm developed by a minority woman-owned small business from Maryland. Stefanik noted the AI decisively won, and Secretary of Defense Mark Esper characterized the victory as evidence of the "tectonic impact of machine learning on the future of warfighting." Stefanik also said a hypervelocity weapon shot down a cruise missile with the help of an advanced battle management system powered by powerful data analytics and AI capabilities. She said the head of Northern Command remarked afterwards, "I am not a skeptic after watching today."

Stefanik stated that the policy governing AI is equally as important as technical demonstrations, specifically the development of standards, ethical principles, accountability, and the appropriate level of human oversight. She asserted all of these will be critical to ensuring Americans trust the use of AI. Stefanik contended that the Commission’s work is crucial in ensuring an enduring partnership of the military, academia, and the private sector built on trust, democratic ideals, and mutual values.

In their joint testimony, the four Commissioners stated:

We are encouraged to see several NSCAI recommendations reflected in the House and Senate versions of this year’s NDAA, and would like to take this opportunity to comment on the importance of legislative action in five key areas. We believe it is crucial for these recommendations to reach the President’s desk and become law.

1. Expanding AI Research and Development

Both the House and Senate bills feature encouraging actions on federal government investment in AI research and development, public-private coordination, and establishment of technical standards. The Commission shares these priorities.

We want to emphasize the importance of creating a National AI Research Resource. There is a growing divide in AI research between “haves” in the private sector and “have nots” in academia. Much of today’s AI research depends on access to resource-intensive computation and large, curated data sets. These are held primarily in companies. We fear that this growing gap will degrade research and training at our universities.

2. DOD Organizational Reforms

We have made a number of proposals to ensure the Department of Defense (DOD) is well positioned to excel in the AI era. In particular, we want to emphasize the need for a senior-level Steering Committee on Emerging Technology. This top-down approach would help the Department overcome some of the bureaucratic challenges that are impeding AI adoption. It would also focus concept and capability development on emerging threats, and guide defense investments to ensure strategic advantage against near-peer competitors.

Importantly, we believe this Steering Committee must include the Intelligence Community (IC). A central goal of our recommendation is to create a leadership mechanism that bridges DOD and the IC. This would better integrate intelligence analysis related to emerging technologies with defense capability development. And it would help ensure that DOD and the IC have a shared vision of national security needs and coherent, complementary investment strategies.

3. Microelectronics

We believe the United States needs a national strategy for microelectronics. Recent advances in AI have depended heavily on advances in available computing power. To preserve U.S. global leadership in AI, we need to preserve leadership in the underlying microelectronics.

In our initial reports, the Commission has put forward specific recommendations to lay the groundwork for long-term access to resilient, trusted, and assured microelectronics. We propose a portfolio-based approach to take advantage of American strengths and ensure the United States stays ahead of competitors in this field.

4. Ethical and Responsible Use

Determining how to use AI responsibly is central to the Commission’s work. We recently published a detailed “paradigm” of issues and practices that government agencies should consider in developing and fielding AI. We believe these proposals can help DOD and the IC to operationalize their AI ethics principles.

Within the government, it is important to develop an understanding of these principles and practices, and an awareness of the risks and limitations associated with AI systems. That is why we recommend that DOD, the IC, Department of Homeland Security (DHS), and Federal Bureau of Investigation (FBI) should conduct self-assessments. These should focus on several issues:

  • Whether the department/agency has access to adequate in-house expertise––including ethical, legal, and technical expertise––to assist in the development and fielding of responsible AI systems;
  • Whether current procurement processes sufficiently encourage or require such expertise to be utilized in acquiring commercial AI systems; and,
  • Whether organizations have the ability and resources to consult outside experts when in-house expertise is insufficient.

5. Workforce Reforms

Much of the Commission’s early work has focused on building an AI-ready national security workforce. This includes recruiting experts and developers, training end users, identifying talented individuals, and promoting education. If the government cannot improve its recruitment and hiring, or raise the level of AI knowledge in its workforce, we will struggle to achieve any significant AI progress.

In particular, we support several provisions in the current versions of the NDAA. These include:

  • Training courses in AI and related topics for human resources practitioners, to improve the government’s recruitment of AI talent.
  • The creation of unclassified workspaces. This would allow organizations to hire and utilize new employees more quickly, while their security clearances are in process.
  • A pilot program for the use of electronic portfolios to evaluate applicants for certain technical positions. Because AI and software development are sometimes self-taught fields, experts do not always have resumes that effectively convey their knowledge. The pilot program would pair HR professionals with subject matter experts to better assess a candidate's previous work as a tangible demonstration of their capabilities.
  • A program to track and reward the completion of certified AI training and courses. This would help agencies identify and capitalize on AI talent within the ranks.
  • A mechanism for hiring university faculty with relevant expertise to serve as part-time researchers in government laboratories. The government would benefit from access to more outside experts. We believe this mechanism should apply not only to DOD but also to DHS, Department of Commerce, DOE, and the IC.
  • Expanding the use of public-private talent exchange programs in DOD. We recommend expanding both the number of participants in general and the number of exchanges with AI-focused companies in particular. We also recommend creating an office to manage civilian talent exchanges and hold their billets.
  • An addition to the Armed Services Vocational Aptitude Battery Test to include testing for computational thinking. This would provide the military with a systematic way to identify potential AI talent.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Owen Beard on Unsplash

Pending Legislation In U.S. Congress, Part VI

At this point, Congress is just looking to organize U.S. AI efforts, maximize resources, and better understand the field.

Today, let us survey bills on artificial intelligence (AI), an area of growing interest and concern among Democratic and Republican Members. Lawmakers and staff have been grappling with this new technology and, at this point, are looking to study and foster its development, particularly to maintain the technological dominance of the United States (U.S.). There are some bills that may get enacted this year. However, any legislative action would play out against extensive executive branch AI efforts. In any event, Congress does not seem close to passing legislation that would regulate the technology and is looking instead to rely on existing statutes and regulators (e.g., the Federal Trade Commission's powers to police unfair and deceptive practices).

The bill with the best chances of enactment at present is the “National Artificial Intelligence Initiative Act of 2020” (H.R.6216), which was added to the “William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395), a bill that has other mostly defense related AI provisions.

Big picture, H.R. 6216 would require better coordination of federal AI initiatives, research, and funding, and more involvement in the development of voluntary, consensus-based standards for AI. Much of this would happen through the standing up of a new “National Artificial Intelligence Initiative Office” by the Office of Science and Technology Policy (OSTP) in the White House. This new entity would be the locus of AI activities and programs in the United States’ (U.S.) government with the ultimate goal of ensuring the nation is the world’s foremost developer and user of the new technology.

Moreover, OSTP would "acting through the National Science and Technology Council…establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative." This body would "provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities, development of voluntary consensus standards and guidelines for research, development, testing, and adoption of ethically developed, safe, and trustworthy artificial intelligence systems, and education and training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative." The committee would need to "develop a strategic plan for AI" within two years and update it every three years thereafter. Additionally, the committee would need to "propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget (OMB) that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative." However, OMB would be under no obligation to heed this proposal absent pressure from AI stakeholders in Congress or AI champions in any given Administration. The Secretary of Energy would create a ‘‘National Artificial Intelligence Advisory Committee" to advise the President and National Artificial Intelligence Initiative Office on a range of AI policy matters.

Federal agencies would be permitted to award funds to new Artificial Intelligence Research Institutes to pioneer research in any number of AI fields or considerations. The bill does not authorize any set amount of money for this program and instead leaves funding decisions to the Appropriations Committees. The National Institute of Standards and Technology (NIST) must "support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems," among other duties. NIST "shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for the trustworthiness of artificial intelligence systems." NIST would also "develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies."

The National Science Foundation (NSF) would need to “fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible non-profit organizations (or consortia thereof).” The Department of Energy must “carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department.” This department would also be tasked with advancing “expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations.”

According to a fact sheet issued by the House Science, Space, and Technology Committee, "[t]he legislation will:

  • Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
  • Create an advisory committee to better inform the Coordination Committee’s strategic plan, track the state of the science around artificial intelligence, and ensure the Initiative is meeting its goals.
  • Create a network of AI institutes, coordinated through the National Science Foundation, that any Federal department or agency could fund to create partnerships between academia and the public and private sectors to accelerate AI research focused on an economic sector, social sector, or on a cross-cutting AI challenge.
  • Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST) and require NIST to create a framework for managing risks associated with AI systems and best practices for sharing data to advance trustworthy AI systems.
  • Support research at the National Science Foundation (NSF) across a wide variety of AI related research areas to both improve AI systems and use those systems to advance other areas of science. This section requires NSF to include an obligation for an ethics statement for all research proposals to ensure researchers are considering, and as appropriate, mitigating potential societal risks in carrying out their research.
  • Support education and workforce development in AI and related fields, including through scholarships and traineeships at NSF.
  • Support AI research and development efforts at the Department of Energy (DOE), utilize DOE computing infrastructure for AI challenges, promote technology transfer, data sharing, and coordination with other Federal agencies, and require an ethics statement for DOE funded research as required at NSF.
  • Require studies to better understand workforce impacts and opportunities created by AI, and identify the computing resources necessary to ensure the United States remains competitive in AI.

 As mentioned, the House’s FY 2021 NDAA has a number of other AI provisions, including:

  • Section 217–Modification of Joint Artificial Intelligence Research, Development, and Transition Activities. This section would amend section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (Public Law 115-232) by assigning responsibility for the Joint Artificial Intelligence Center (JAIC) to the Deputy Secretary of Defense and ensure data access and visibility for the JAIC.
  • Section 224–Board of Directors for the Joint Artificial Intelligence Center. This section would direct the Secretary of Defense to create and resource a Board of Directors for the Joint Artificial Intelligence Center (JAIC), comprised of senior Department of Defense officials, as well as civilian directors not employed by the Department of Defense. The objective would be to have a standing body over the JAIC that can bring governmental and non-governmental experts together for the purpose of assisting the Department of Defense in correctly integrating and operationalizing artificial intelligence technologies.
  • Section 242–Training for Human Resources Personnel in Artificial Intelligence and Related Topics. This section would direct the Secretary of Defense to develop and implement a program to provide human resources personnel with training in the fields of software development, data science, and artificial intelligence, as such fields relate to the duties of such personnel, not later than 1 year after the date of the enactment of this Act.
  • Section 248–Acquisition of Ethically and Responsibly Developed Artificial Intelligence Technology. This section would direct the Secretary of Defense, acting through the Board of Directors of the Joint Artificial Intelligence Center, to conduct an assessment to determine whether the Department of Defense has the ability to ensure that any artificial intelligence technology acquired by the Department is ethically and responsibly developed.
  • Section 805–Acquisition Authority of the Director of the Joint Artificial Intelligence Center. This section would give the Director of the Joint Artificial Intelligence Center responsibility for the development, acquisition, and sustainment of artificial intelligence technologies, services, and capabilities through fiscal year 2025.

The “FUTURE of Artificial Intelligence Act of 2020” (S.3771) was marked up and reported out of the Senate Commerce, Science, and Transportation Committee in July 2020. This bill would generally “require the Secretary of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence” to advise the department on a range of AI related matters, including competitiveness, workforce, education, ethics training and development, the open sharing of data and research, international cooperation, legal and civil rights, government efficiency, and others. Additionally, a subcommittee would be empaneled to focus on the intersection of AI and law enforcement and national security issues. Within 18 months of enactment, the committee must submit its findings in a report to Congress and the Department of Commerce. A bill with the same title has been introduced in the House (H.R.7559) but has not been acted upon. This bill would “require the Director of the National Science Foundation, in consultation with the Director of the Office of Science and Technology Policy, to establish an advisory committee to advise the President on matters relating to the development of artificial intelligence.”

The same day S.3771 was marked up, the committee took up another AI bill: the “Advancing Artificial Intelligence Research Act of 2020” (S.3891) that would “require the Director of the National Institute of Standards and Technology (NIST) to advance the development of technical standards for artificial intelligence, to establish the National Program to Advance Artificial Intelligence Research, to promote research on artificial intelligence at the National Science Foundation” (NSF). $250 million a year would be authorized for NIST to distribute for AI research. NIST would also need to establish at least six AI research institutes. The NSF would “establish a pilot program to assess the feasibility and advisability of awarding grants for the conduct of research in rapidly evolving, high priority topics.”

In early November 2019, the Senate Homeland Security & Governmental Affairs Committee marked up the “AI in Government Act of 2019” (S.1363) that would establish an AI Center of Excellence in the General Services Administration (GSA) to:

  • promote the efforts of the Federal Government in developing innovative uses of and acquiring artificial intelligence technologies by the Federal Government;
  • improve cohesion and competency in the adoption and use of artificial intelligence within the Federal Government.

The bill stipulates that both of these goals would be pursued “for the purposes of benefitting the public and enhancing the productivity and efficiency of Federal Government operations.”

The Office of Management and Budget (OMB) must “issue a memorandum to the head of each agency that shall—

  • inform the development of policies regarding Federal acquisition and use by agencies regarding technologies that are empowered or enabled by artificial intelligence;
  • recommend approaches to remove barriers for use by agencies of artificial intelligence technologies in order to promote the innovative application of those technologies while protecting civil liberties, privacy, civil rights, and economic and national security; and
  • identify best practices for identifying, assessing, and mitigating any discriminatory impact or bias on the basis of any classification protected under Federal nondiscrimination laws, or any unintended consequence of the use of artificial intelligence by the Federal Government.”

OMB is required to coordinate the drafting of this memo with the Office of Science and Technology Policy, GSA, other relevant agencies, and other key stakeholders.

This week, the House passed its version of S.1363, the “AI in Government Act of 2019” (H.R.2575), by voice vote, sending it to the Senate.

In September 2019, the House sent another AI bill to the Senate where it has not been taken up. The “Advancing Innovation to Assist Law Enforcement Act” (H.R.2613) would task the Financial Crimes Enforcement Network (FinCEN) with studying

  • “the status of implementation and internal use of emerging technologies, including AI, digital identity technologies, blockchain technologies, and other innovative technologies within FinCEN;
  • whether AI, digital identity technologies, blockchain technologies, and other innovative technologies can be further leveraged to make FinCEN’s data analysis more efficient and effective; and
  • how FinCEN could better utilize AI, digital identity technologies, blockchain technologies, and other innovative technologies to more actively analyze and disseminate the information it collects and stores to provide investigative leads to Federal, State, Tribal, and local law enforcement, and other Federal agencies…and better support its ongoing investigations when referring a case to the Agencies.”

All of these bills are being considered against a backdrop of significant Trump Administration action on AI, using existing authority to manage government operations. The Administration sees AI as playing a key role in ensuring and maintaining U.S. dominance in military affairs and in other realms.

Most recently, OMB and the Office of Science and Technology Policy (OSTP) released their annual guidance to United States department and agencies to direct their budget requests for FY 2022 with respect to research and development (R&D). OMB and OSTP explained:

For FY 2022, the five R&D budgetary priorities in this memorandum ensure that America remains at the global forefront of science and technology (S&T) discovery and innovation. The Industries of the Future (IotF) - artificial intelligence (AI), quantum information sciences (QIS), advanced communication networks/5G, advanced manufacturing, and biotechnology - remain the Administration’s top R&D priority.

Specifically, regarding AI, OMB and OSTP stated:

Artificial Intelligence: Departments and agencies should prioritize research investments consistent with the Executive Order (EO) 13859 on Maintaining American Leadership in Artificial Intelligence and the 2019 update of the National Artificial Intelligence Research and Development Strategic Plan. Transformative basic research priorities include research on ethical issues of AI, data-efficient and high performance machine learning (ML) techniques, cognitive AI, secure and trustworthy AI, scalable and robust AI, integrated and interactive AI, and novel AI hardware. The current pandemic highlights the importance of use-inspired AI research for healthcare, including AI for discovery of therapeutics and vaccines; AI-based search of publications and patents for scientific insights; and AI for improved imaging, diagnosis, and data analysis. Beyond healthcare, use-inspired AI research for scientific and engineering discovery across many domains can help the Nation address future crises. AI infrastructure investments are prioritized, including national institutes and testbeds for AI development, testing, and evaluation; data and model resources for AI R&D; and open knowledge networks. Research is also prioritized for the development of AI measures, evaluation methodologies, and standards, including quantification of trustworthy AI in dimensions of accuracy, fairness, robustness, explainability, and transparency.

In February 2020, OSTP published the “American Artificial Intelligence Initiative: Year One Annual Report” in which the agency claimed “the Trump Administration has made critical progress in carrying out this national strategy and continues to make United States leadership in [artificial intelligence] (AI) a top priority.” OSTP asserted that “[s]ince the signing of the EO, the United States has made significant progress on achieving the objectives of this national strategy…[and] [t]his document provides both a summary of progress and a continued long-term vision for the American AI Initiative.” Notably, some agencies were already working on AI-related initiatives independently of the EO, but the White House has folded those into the larger AI strategy it is pursuing. Much of the document recites already announced developments and steps.

However, OSTP seems to reference a national AI strategy that differs a bit from the one laid out in EO 13859 and appears to represent the Administration’s evolved thinking on how to address AI across a number of dimensions in the form of “key policies and practices:”

1)  Invest in AI research and development: The United States must promote Federal investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI. President Trump called for a 2-year doubling of non-defense AI R&D in his fiscal year (FY) 2021 budget proposal, and in 2019 the Administration updated its AI R&D strategic plan, developed the first progress report describing the impact of Federal R&D investments, and published the first-ever reporting of government-wide non-defense AI R&D spending.

2)  Unleash AI resources: The United States must enhance access to high-quality Federal data, models, and computing resources to increase their value for AI R&D, while maintaining and extending safety, security, privacy, and confidentiality protections. The American AI Initiative called on Federal agencies to identify new opportunities to increase access to and use of Federal data and models. In 2019, the White House Office of Management and Budget established the Federal Data Strategy as a framework for operational principles and best practices around how Federal agencies use and manage data. 

3) Remove barriers to AI innovation: The United States must reduce barriers to the safe development, testing, deployment, and adoption of AI technologies by providing guidance for the governance of AI consistent with our Nation’s values and by driving the development of appropriate AI technical standards. As part of the American AI Initiative, The White House published for comment the proposed United States AI Regulatory Principles, the first AI regulatory policy that advances innovation underpinned by American values and good regulatory practices. In addition, the National Institute of Standards and Technology (NIST) issued the first-ever strategy for Federal engagement in the development of AI technical standards. 

4) Train an AI-ready workforce: The United States must empower current and future generations of American workers through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI. President Trump directed all Federal agencies to prioritize AI-related apprenticeship and job training programs and opportunities. In addition to its R&D focus, the National Science Foundation’s new National AI Research Institutes program will also contribute to workforce development, particularly of AI researchers. 

5) Promote an international environment supportive of American AI innovation: The United States must engage internationally to promote a global environment that supports American AI research and innovation and opens markets for American AI industries while also protecting our technological advantage in AI. Last year, the United States led historic efforts at the Organisation for Economic Cooperation and Development (OECD) to develop the first international consensus agreements on fundamental principles for the stewardship of trustworthy AI. The United States also worked with its international partners in the G7 and G20 to adopt similar AI principles. 

6) Embrace trustworthy AI for government services and missions: The United States must embrace technology such as artificial intelligence to improve the provision and efficiency of government services to the American people and ensure its application shows due respect for our Nation’s values, including privacy, civil rights, and civil liberties. The General Services Administration established an AI Center of Excellence to enable Federal agencies to determine best practices for incorporating AI into their organizations. 

Also in February 2020, the Department of Defense (DOD) announced in a press release that it “officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October.” The DOD claimed “[t]he adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems.” The Pentagon added “[t]he DOD’s AI ethical principles will build on the U.S. military’s existing ethics framework based on the U.S. Constitution, Title 10 of the U.S. Code, Law of War, existing international treaties and longstanding norms and values.” The DOD stated “[t]he DOD Joint Artificial Intelligence Center (JAIC) will be the focal point for coordinating implementation of AI ethical principles for the department.”

The DOD explained that “[t]hese principles will apply to both combat and non-combat functions and assist the U.S. military in upholding legal, ethical and policy commitments in the field of AI…[and] encompass five major areas:

  • Responsible. DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  • Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

It bears noting that the DOD’s recitation of these five AI ethical principles differs from those drafted by the Defense Innovation Board. Notably, in “Equitable,” the Defense Innovation Board also included that the “DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons” (emphasis added). Likewise, in “Governable,” the Board recommended that “DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior” (emphasis added).

Additionally, the DOD has declined, at least at this time, to adopt the recommendations made by the Board regarding the use of AI:

1. Formalize these principles via official DOD channels. 

2. Establish a DOD-wide AI Steering Committee. 

3. Cultivate and grow the field of AI engineering. 

4. Enhance DOD training and workforce programs. 

5. Invest in research on novel security aspects of AI. 

6. Invest in research to bolster reproducibility. 

7. Define reliability benchmarks. 

8. Strengthen AI test and evaluation techniques. 

9. Develop a risk management methodology. 

10. Ensure proper implementation of AI ethics principles. 

11. Expand research into understanding how to implement AI ethics principles.

12. Convene an annual conference on AI safety, security, and robustness. 

In January 2020, OMB and OSTP requested comments on a draft “Guidance for Regulation of Artificial Intelligence Applications” that would be issued to federal agencies as directed by EO 13859. OMB listed the ten AI principles agencies must consider in regulating AI in the private sector, some of which overlap with the DOD’s ethical principles:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination


OSTP explained how the ten AI principles should be used:

Consistent with law, agencies should take into consideration the following principles when formulating regulatory and non-regulatory approaches to the design, development, deployment, and operation of AI applications, both general and sector-specific. These principles, many of which are interrelated, reflect the goals and principles in Executive Order 13859. Agencies should calibrate approaches concerning these principles and consider case-specific factors to optimize net benefits. Given that many AI applications do not necessarily raise novel issues, these considerations also reflect longstanding Federal regulatory principles and practices that are relevant to promoting the innovative use of AI. Promoting innovation and growth of AI is a high priority of the United States government. Fostering innovation and growth through forbearing from new regulations may be appropriate. Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.

In November 2019, the National Security Commission on Artificial Intelligence (NSCAI) released its interim report and explained that “[b]etween now and the publication of our final report, the Commission will pursue answers to hard problems, develop concrete recommendations on “methods and means” to integrate AI into national security missions, and make itself available to Congress and the executive branch to inform evidence-based decisions about resources, policy, and strategy.” The Commission had released its initial report in July 2019, which laid out its work plan.

In July 2020, NSCAI published its Second Quarter Recommendations, a compilation of the policy proposals it made during the quarter. NSCAI said it is still on track to release its final recommendations in March 2021. The NSCAI asserted:

The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.

In August 2019, NIST published “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” as required by EO 13859. The EO directed the Secretary of Commerce, through NIST, to issue “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies” that must include:

(A) Federal priority needs for standardization of AI systems development and deployment;

(B) identification of standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles; and

(C) opportunities for and challenges to United States leadership in standardization related to AI technologies.

NIST’s AI plan meets those requirements in the broadest of strokes and will require much from the Administration and agencies to be realized, including further steps required by the EO.

Finally, all these Trump Administration efforts are playing out at the same time as global processes. In late May 2019, the Organization for Economic Cooperation and Development (OECD) adopted recommendations from the OECD Council on Artificial Intelligence (AI), and non-OECD members Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania also pledged to adhere to the recommendations. Of course, OECD recommendations have no legally binding force on any nation, but standards articulated by the OECD are highly respected and sometimes form the basis for nations’ approaches to an issue, as the 1980 OECD recommendations on privacy did. Moreover, the National Telecommunications and Information Administration (NTIA) signaled the Trump Administration’s endorsement of the OECD effort. In February 2020, the European Commission (EC) released its latest policy pronouncement on artificial intelligence, “On Artificial Intelligence – A European approach to excellence and trust,” in which the Commission articulates its support for “a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology.” The EC stated that “[t]he purpose of this White Paper is to set out policy options on how to achieve these objectives…[but] does not address the development and use of AI for military purposes.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (16 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The House Homeland Security Committee will hold a hearing titled “Worldwide Threats to the Homeland” on 17 September with the following witnesses:
    • Chad Wolf, Department of Homeland Security
    • Christopher Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center (NCTC)
  • On 17 September, the House Energy and Commerce Committee’s Communications & Technology Subcommittee will hold a hearing titled “Trump FCC: Four Years of Lost Opportunities.”
  • The House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee will hold a hearing titled “Interim Review of the National Security Commission on Artificial Intelligence Effort and Recommendations” on 17 September with these witnesses:
    • Dr. Eric Schmidt, Chairman, National Security Commission on Artificial Intelligence
    • HON Robert Work, Vice Chairman, National Security Commission on Artificial Intelligence
    • HON Mignon Clyburn, Commissioner, National Security Commission on Artificial Intelligence
    • Dr. José-Marie Griffiths, Commissioner, National Security Commission on Artificial Intelligence
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” The agency has released its agenda and explained:
    • The workshop will also feature four panel discussions that will focus on: case studies on data portability rights in the European Union, India, and California; case studies on financial and health portability regimes; reconciling the benefits and risks of data portability; and the material challenges and solutions to realizing data portability’s potential.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division, and Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency, Department of Homeland Security
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • The United States House of Representatives took up and passed two technology bills on 14 September. One of the bills, “Internet of Things (IoT) Cybersecurity Improvement Act of 2020” (H.R. 1668), was discussed in yesterday’s Technology Policy Update as part of an outlook on Internet of Things (IoT) legislation (see here for analysis). The House passed a revised version by voice vote, but its fate in the Senate may lie with the Senate Homeland Security & Governmental Affairs Committee, whose chair, Senator Ron Johnson (R-WI), has blocked a number of technology bills during his tenure to the chagrin of some House stakeholders. The House also passed the “AI in Government Act of 2019” (H.R.2575) that would establish an AI Center of Excellence within the General Services Administration that would
    • “(1) advise and promote the efforts of the Federal Government in developing innovative uses of artificial intelligence by the Federal Government to the benefit of the public; and
    • (2) improve cohesion and competency in the use of artificial intelligence.”
    • Also, this bill would direct the Office of Management and Budget (OMB) to “issue a memorandum to the head of each agency that shall—
      • inform the development of artificial intelligence governance approaches by those agencies regarding technologies and applications that—
        • are empowered or enabled by the use of artificial intelligence within that agency; and
        • advance the innovative use of artificial intelligence for the benefit of the public while upholding civil liberties, privacy, and civil rights;
      • consider ways to reduce barriers to the use of artificial intelligence in order to promote innovative application of those technologies for the benefit of the public, while protecting civil liberties, privacy, and civil rights;
      • establish best practices for identifying, assessing, and mitigating any bias on the basis of any classification protected under Federal nondiscrimination laws or other negative unintended consequence stemming from the use of artificial intelligence systems; and
      • provide a template of the required contents of the agency Governance Plans.”
    • The House Energy and Commerce Committee marked up and reported out more than 30 bills last week including:
      • The “Consumer Product Safety Inspection Enhancement Act” (H.R. 8134) that “would amend the Consumer Product Safety Act to enhance the Consumer Product Safety Commission’s (CPSC) ability to identify unsafe consumer products entering the United States, especially e-commerce shipments entering under the de minimis value exemption. Specifically, the bill would require the CPSC to enhance the targeting, surveillance, and screening of consumer products. The bill also would require electronic filing of certificates of compliance for all consumer products entering the United States.”
      • The bill directs the CPSC to: 1) examine a sampling of de minimis shipments and shipments coming from China; 2) detail plans and timelines to effectively address targeting and screening of de minimis shipments; 3) establish metrics by which to evaluate the effectiveness of the CPSC’s efforts in this regard; 4) assess projected technology, resources, and staffing necessary; and 5) submit a report to Congress regarding such efforts. The bill further directs the CPSC to hire at least 16 employees every year until staffing needs are met to help identify violative products at ports.
      • The “AI for Consumer Product Safety Act” (H.R. 8128) that “would direct the Consumer Product Safety Commission (CPSC) to establish a pilot program to explore the use of artificial intelligence for at least one of the following purposes: 1) tracking injury trends; 2) identifying consumer product hazards; 3) monitoring the retail marketplace for the sale of recalled consumer products; or 4) identifying unsafe imported consumer products.” The revised bill passed by the committee “changes the title of the bill to the “Consumer Safety Technology Act”, and adds the text based on the Blockchain Innovation Act (H.R. 8153) and the Digital Taxonomy Act (H.R. 2154)…[and] adds sections that direct the Department of Commerce (DOC), in consultation with the Federal Trade Commission (FTC), to conduct a study and submit to Congress a report on the state of blockchain technology in commerce, including its use to reduce fraud and increase security.” The revised bill “would also require the FTC to submit to Congress a report and recommendations on unfair or deceptive acts or practices relating to digital tokens.”
      • The “American Competitiveness Of a More Productive Emerging Tech Economy Act” or the “American COMPETE Act” (H.R. 8132) “directs the DOC and the FTC to study and report to Congress on the state of the artificial intelligence, quantum computing, blockchain, and the new and advanced materials industries in the U.S…[and] would also require the DOC to study and report to Congress on the state of the Internet of Things (IoT) and IoT manufacturing industries as well as the three-dimensional printing industry” involving “among other things: 1) listing industry sectors that develop and use each technology and public-private partnerships focused on promoting the adoption and use of each such technology; 2) establishing a list of federal agencies asserting jurisdiction over such industry sectors; and 3) assessing risks and trends in the marketplace and supply chain of each technology.”
      • The bill would direct the DOC to study and report on the effect of unmanned delivery services on U.S. businesses conducting interstate commerce. In addition to these report elements, the bill would require the DOC to examine safety risks and effects on traffic congestion and jobs of unmanned delivery services.
      • Finally, the bill would require the FTC to study and report to Congress on how artificial intelligence may be used to address online harms, including scams directed at senior citizens, disinformation or exploitative content, and content furthering illegal activity.
  • The National Institute of Standards and Technology (NIST) issued NIST Interagency or Internal Report 8272 “Impact Analysis Tool for Interdependent Cyber Supply Chain Risks” designed to help public and private sector entities better address complex, interdependent supply chain risks. NIST stated “[t]his publication describes how to use the Cyber Supply Chain Risk Management (C-SCRM) Interdependency Tool that has been developed to help federal agencies identify and assess the potential impact of cybersecurity events in their interconnected supply chains.” NIST explained
    • More organizations are becoming aware of the importance of identifying cybersecurity risks associated with extensive, complicated supply chains. Several solutions have been developed to help manage supply chains; most focus on contract management or compliance. There is a need to provide organizations with a systematic and more usable way to evaluate the potential impacts of cyber supply chain risks relative to an organization’s risk appetite. This is especially important for organizations with complex supply chains and highly interdependent products and suppliers.
    • This publication describes one potential way to visualize and measure these impacts: a Cyber Supply Chain Risk Management (C-SCRM) Interdependency Tool (hereafter “Tool”), which is designed to provide a basic measurement of the potential impact of a cyber supply chain event. The Tool is not intended to measure the risk of an event, where risk is defined as a function of threat, vulnerability, likelihood, and impact. Research conducted by the authors of this publication found that, at the time of publication, existing cybersecurity risk tools and research focused on threats, vulnerabilities, and likelihood, but impact was frequently overlooked. Thus, this Tool is intended to bridge that gap and enable users and tool developers to create a more complete understanding of an organization’s risk by measuring impact in their specific environments.
    • The Tool also provides the user greater visibility over the supply chain and the relative importance of particular projects, products, and suppliers (hereafter referred to as “nodes”) compared to others. This can be determined by examining the metrics that contribute to a node’s importance, such as the amount of access a node has to the acquiring organization’s IT network, physical facilities, and data. By understanding which nodes are the most important in their organization’s supply chain, the user can begin to understand the potential impact a disruption of that node may cause on business operations. The user can then prioritize the completion of risk mitigating actions to reduce the impact a disruption would cause to the organization’s supply chain and overall business.
  • In a blog post, Microsoft released its findings on the escalating threats to political campaigns and figures during the run-up to the United States’ (U.S.) election. This warning also served as an advertisement for Microsoft’s security products. But, be that as it may, these findings echo what U.S. security services have been saying for months. Microsoft stated
    • In recent weeks, Microsoft has detected cyberattacks targeting people and organizations involved in the upcoming presidential election, including unsuccessful attacks on people associated with both the Trump and Biden campaigns, as detailed below. We have and will continue to defend our democracy against these attacks through notifications of such activity to impacted customers, security features in our products and services, and legal and technical disruptions. The activity we are announcing today makes clear that foreign activity groups have stepped up their efforts targeting the 2020 election as had been anticipated, and is consistent with what the U.S. government and others have reported. We also report here on attacks against other institutions and enterprises worldwide that reflect similar adversary activity.
    • We have observed that:
      • Strontium, operating from Russia, has attacked more than 200 organizations including political campaigns, advocacy groups, parties and political consultants
      • Zirconium, operating from China, has attacked high-profile individuals associated with the election, including people associated with the Joe Biden for President campaign and prominent leaders in the international affairs community
      • Phosphorus, operating from Iran, has continued to attack the personal accounts of people associated with the Donald J. Trump for President campaign
    • The majority of these attacks were detected and stopped by security tools built into our products. We have directly notified those who were targeted or compromised so they can take action to protect themselves. We are sharing more about the details of these attacks today, and where we’ve named impacted customers, we’re doing so with their support.
    • What we’ve seen is consistent with previous attack patterns that not only target candidates and campaign staffers but also those they consult on key issues. These activities highlight the need for people and organizations involved in the political process to take advantage of free and low-cost security tools to protect themselves as we get closer to election day. At Microsoft, for example, we offer AccountGuard threat monitoring, Microsoft 365 for Campaigns and Election Security Advisors to help secure campaigns and their volunteers. More broadly, these attacks underscore the continued importance of work underway at the United Nations to protect cyberspace and initiatives like the Paris Call for Trust and Security in Cyberspace.
  • The European Data Protection Supervisor (EDPS) has reiterated and expanded upon his calls for caution, prudence, and adherence to European Union (EU) law and principles in the use of artificial intelligence, especially as the EU looks to revamp its approach to AI and data protection. In a blog post, EDPS Wojciech Wiewiórowski stated:
    • The expectations of the increasing use of AI and the related economic advantages for those who control the technologies, as well as its appetite for data, have given rise to fierce competition about technological leadership. In this competition, the EU strives to be a frontrunner while staying true to its own values and ideals.
    • AI comes with its own risks and is not an innocuous, magical tool, which will heal the world harmlessly. For example, the rapid adoption of AI by public administrations in hospitals, utilities and transport services, financial supervisors, and other areas of public interest is considered in the EC White Paper ‘essential’, but we believe that prudency is needed. AI, like any other technology, is a mere tool, and should be designed to serve humankind. Benefits, costs and risks should be considered by anyone adopting a technology, especially by public administrations who process great amounts of personal data.
    • The increase in adoption of AI has not been (yet?) accompanied by a proper assessment of what the impact on individuals and on our society as a whole will likely be. Think especially of live facial recognition (remote biometric identification in the EC White Paper). We support the idea of a moratorium on automated recognition in public spaces of human features in the EU, of faces but also and importantly of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals.
    • Let’s not rush AI, we have to get it straight so that it is fair and that it serves individuals and society at large.
    • The context in which the consultation for the Data Strategy was conducted gave a prominent place to the role of data in matters of public interest, including combating the virus. This is good and right as the GDPR was crafted so that the processing of personal data should serve humankind. There are existing conditions under which such “processing for the public good” could already take place, and without which the necessary trust of data subjects would not be possible.
    • However, there is a substantial persuasive power in the narratives nudging individuals to ‘volunteer’ their data to address highly moral goals. Concepts such as ‘Data altruism”, or ‘Data donation” and their added value are not entirely clear and there is a need to better define and lay down their scope, and possible purposes, for instance, in the context of scientific research in the health sector. The fundamental right to the protection of personal data cannot be ‘waived’ by the individual concerned, be it through a ‘donation’ or through a ‘sale’ of personal data. The data controller is fully bound by the personal data rules and principles, such as purpose limitation even when processing data that have been ‘donated’ i.e. when consent to the processing had been given by the individual.
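The node-importance idea in the NISTIR 8272 summary above — scoring supply chain nodes by their access to an organization’s network, facilities, and data, then prioritizing mitigation accordingly — can be illustrated with a small sketch. This is not the Tool’s actual formula; the metric names, weights, and scores below are hypothetical, chosen only to show how a composite importance score could rank nodes by their potential impact on the organization:

```python
# Illustrative sketch only: the real C-SCRM Interdependency Tool's scoring
# method is defined in NISTIR 8272. All metrics and weights here are
# hypothetical placeholders.

# Hypothetical access metrics for each supply chain node, scored 0-10.
nodes = {
    "Supplier A": {"network_access": 8, "physical_access": 2, "data_access": 9},
    "Supplier B": {"network_access": 3, "physical_access": 7, "data_access": 4},
    "Logistics C": {"network_access": 1, "physical_access": 9, "data_access": 2},
}

# Hypothetical weights reflecting how much each access type contributes
# to the potential impact of a disruption at that node.
weights = {"network_access": 0.5, "physical_access": 0.2, "data_access": 0.3}

def importance(metrics: dict) -> float:
    """Composite importance score: weighted sum of a node's access metrics."""
    return sum(weights[k] * v for k, v in metrics.items())

# Rank nodes so those with the highest potential impact come first,
# suggesting where risk-mitigation effort should be prioritized.
ranking = sorted(nodes, key=lambda n: importance(nodes[n]), reverse=True)
for name in ranking:
    print(f"{name}: {importance(nodes[name]):.1f}")
```

The point of such a ranking, as the NIST text notes, is not to measure risk (threat, vulnerability, and likelihood are out of scope) but to surface which nodes would cause the greatest disruption to business operations if compromised.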

Further Reading

  • “Peter Thiel Met With The Racist Fringe As He Went All In On Trump” By Rosie Gray and Ryan Mac — BuzzFeed News. A fascinating article about one of the technology world’s more interesting figures. As part of his decision to ally himself with Donald Trump during the 2016 presidential campaign, Peter Thiel also met with avowed white supremacists. However, it appears that the alliance is no longer worthy of his financial assistance or his public support, as he was supposedly disturbed by the Administration’s response to the pandemic. Nonetheless, Palantir, his company, has flourished during the Trump Administration and may go public right before matters change under a potential Biden Administration.
  • “TikTok’s Proposed Deal Seeks to Mollify U.S. and China” By David McCabe, Ana Swanson and Erin Griffith — The New York Times. ByteDance is apparently trying to mollify both Washington and Beijing by bringing Oracle onboard as a “trusted technology partner,” an arrangement that may be acceptable to both nations under their export control and national security regimes. Oracle handling and safeguarding TikTok user data would seem to address the Trump Administration’s concerns, while not selling the company outright nor permitting Oracle to access its recommendation algorithm would seem to appease the People’s Republic of China (PRC). Moreover, United States (U.S.) investors would hold control over TikTok even though PRC investors would maintain their stakes. Such an arrangement may satisfy the Committee on Foreign Investment in the United States (CFIUS), which has ordered ByteDance to sell the app that became an integral part of TikTok. The wild card, as always, is where President Donald Trump ultimately comes out on the deal.
  • “Oracle’s courting of Trump may help it land TikTok’s business and coveted user data” By Jay Greene and Ellen Nakashima — The Washington Post. This piece dives into why Oracle, at first blush, seems like an unlikely suitor for TikTok; its eroding business position vis-à-vis cloud companies like Amazon explains its desire to diversify. Also, Oracle’s role as a data broker makes all the user data available from TikTok very attractive.
  • “Chinese firm harvests social media posts, data of prominent Americans and military” By Gerry Shih — The Washington Post. Another view on Shenzhen Zhenhua Data Technology, the entity from the People’s Republic of China (PRC) exposed for collecting the personal data of more than 2.4 million westerners, many of whom hold positions of power and influence. This article quotes a number of experts allowed to examine what was leaked of the database who are of the view the PRC has very little in the way of actionable intelligence at this point. The country is leveraging publicly available big data from a variety of sources and may ultimately make something useful from these data.
  • “‘This is f—ing crazy’: Florida Latinos swamped by wild conspiracy theories” By Sabrina Rodriguez and Marc Caputo — Politico. A number of sources are spreading rumors about former Vice President Joe Biden and the Democrats generally in order to curb support among a key demographic the party will need to carry overwhelmingly to win Florida.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Alexander Sinn on Unsplash

Further Reading, Other Developments, and Coming Events (24 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • On 27 July, the House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee will hold its sixth hearing on “Online Platforms and Market Power” titled “Examining the Dominance of Amazon, Apple, Facebook, and Google” that will reportedly have the heads of the four companies as witnesses.
  • On 28 July, the Senate Commerce, Science, and Transportation Committee’s Communications, Technology, Innovation, and the Internet Subcommittee will hold a hearing titled “The PACT Act and Section 230: The Impact of the Law that Helped Create the Internet and an Examination of Proposed Reforms for Today’s Online World.”
  • On 28 July, the House Science, Space, and Technology Committee’s Investigations and Oversight and Research and Technology Subcommittees will hold a joint virtual hearing titled “The Role of Technology in Countering Trafficking in Persons” with these witnesses:
    • Ms. Anjana Rajan, Chief Technology Officer, Polaris
    • Mr. Matthew Daggett, Technical Staff, Humanitarian Assistance and Disaster Relief Systems Group, Lincoln Laboratory, Massachusetts Institute of Technology
    • Ms. Emily Kennedy, President and Co-Founder, Marinus Analytics
  • On 28 July, the House Homeland Security Committee’s Cybersecurity, Infrastructure Protection, & Innovation Subcommittee will hold a hearing titled “Secure, Safe, and Auditable: Protecting the Integrity of the 2020 Elections” with these witnesses:
    • Mr. David Levine, Elections Integrity Fellow, Alliance for Securing Democracy, German Marshall Fund of the United States
    • Ms. Sylvia Albert, Director of Voting and Elections, Common Cause
    • Ms. Amber McReynolds, Chief Executive Officer, National Vote at Home Institute
    • Mr. John Gilligan, President and Chief Executive Officer, Center for Internet Security, Inc.
  • On 30 July, the House Oversight and Reform Committee will hold a hearing on the tenth “Federal Information Technology Acquisition Reform Act” (FITARA) scorecard on federal information technology.
  • On 30 July, the Senate Commerce, Science, and Transportation Committee’s Security Subcommittee will hold a hearing titled “The China Challenge: Realignment of U.S. Economic Policies to Build Resiliency and Competitiveness” with these witnesses:
    • The Honorable Nazak Nikakhtar, Assistant Secretary for Industry and Analysis, International Trade Administration, U.S. Department of Commerce
    • Dr. Rush Doshi, Director of the Chinese Strategy Initiative, The Brookings Institution
    • Mr. Michael Wessel, Commissioner, U.S. – China Economic and Security Review Commission
  • On 4 August, the Senate Armed Services Committee will hold a hearing titled “Findings and Recommendations of the Cyberspace Solarium Commission” with these witnesses:
    • Senator Angus S. King, Jr. (I-ME), Co-Chair, Cyberspace Solarium Commission
    • Representative Michael J. Gallagher (R-WI), Co-Chair, Cyberspace Solarium Commission
    • Brigadier General John C. Inglis, ANG (Ret.), Commissioner, Cyberspace Solarium Commission
  • On 6 August, the Federal Communications Commission (FCC) will hold an open meeting to likely consider the following items:
    • C-band Auction Procedures. The Commission will consider a Public Notice that would adopt procedures for the auction of new flexible-use overlay licenses in the 3.7–3.98 GHz band (Auction 107) for 5G, the Internet of Things, and other advanced wireless services. (AU Docket No. 20-25)
    • Radio Duplication Rules. The Commission will consider a Report and Order that would eliminate the radio duplication rule with regard to AM stations and retain the rule for FM stations. (MB Docket Nos. 19-310, 17-105)
    • Common Antenna Siting Rules. The Commission will consider a Report and Order that would eliminate the common antenna siting rules for FM and TV broadcaster applicants and licensees. (MB Docket Nos. 19-282, 17-105)
    • Telecommunications Relay Service. The Commission will consider a Report and Order to repeal certain TRS rules that are no longer needed in light of changes in technology and voice communications services. (CG Docket No. 03-123)

Other Developments

  • Slack filed an antitrust complaint with the European Commission (EC) against Microsoft alleging that the latter’s tying Microsoft Teams to Microsoft Office is a move designed to push the former out of the market. A Slack vice president said in a statement “Slack threatens Microsoft’s hold on business email, the cornerstone of Office, which means Slack threatens Microsoft’s lock on enterprise software.” While the filing of a complaint does not mean the EC will necessarily investigate, under its new leadership the EC has signaled in a number of ways its intent to address the size of some technology companies and the effect on competition.
  • The National Institute of Standards and Technology (NIST) has issued for comment the second draft of NISTIR 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM). NIST claimed this guidance document “promotes greater understanding of the relationship between cybersecurity risk management and ERM, and the benefits of integrating those approaches…[and] contains the same main concepts as the initial public draft, but their presentation has been revised to clarify the concepts and address other comments from the public.” Comments are due by 21 August 2020.
  • The United States National Security Commission on Artificial Intelligence (NSCAI) published its Second Quarter Recommendations, a compilation of policy proposals made this quarter. NSCAI said it is still on track to release its final recommendations in March 2021. The NSCAI asserted
    • The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.
    • The NSCAI stated it “has focused its analysis and recommendations on six areas:
    • Advancing the Department of Defense’s internal AI research and development capabilities. The Department of Defense (DOD) must make reforms to the management of its research and development (R&D) ecosystem to enable the speed and agility needed to harness the potential of AI and other emerging technologies. To equip the R&D enterprise, the NSCAI recommends creating an AI software repository; improving agency-wide authorized use and sharing of software, components, and infrastructure; creating an AI data catalog; and expanding funding authorities to support DOD laboratories. DOD must also strengthen AI Test and Evaluation, Verification and Validation capabilities by developing an AI testing framework, creating tools to stand up new AI testbeds, and using partnered laboratories to test market and market-ready AI solutions. To optimize the transition from technological breakthroughs to application in the field, Congress and DOD need to reimagine how science and technology programs are budgeted to allow for agile development, and adopt the model of multi-stakeholder and multi-disciplinary development teams. Furthermore, DOD should encourage labs to collaborate by building open innovation models and an R&D database.
    • Accelerating AI applications for national security and defense. DOD must have enduring means to identify, prioritize, and resource the AI-enabled applications necessary to fight and win. To meet this challenge, the NSCAI recommends that DOD produce a classified Technology Annex to the National Defense Strategy that outlines a clear plan for pursuing disruptive technologies that address specific operational challenges. We also recommend establishing mechanisms for tactical experimentation, including by integrating AI-enabled technologies into exercises and wargames, to ensure technical capabilities meet mission and operator needs. On the business side, DOD should develop a list of core administrative functions most amenable to AI solutions and incentivize the adoption of commercially available AI tools.
    • Bridging the technology talent gap in government. The United States government must fundamentally re-imagine the way it recruits and builds a digital workforce. The Commission envisions a government-wide effort to build its digital talent base through a multi-prong approach, including: 1) the establishment of a National Reserve Digital Corps that will bring private sector talent into public service part-time; 2) the expansion of technology scholarship for service programs; and, 3) the creation of a national digital service academy for growing federal technology talent from the ground up.
    • Protecting AI advantages for national security through the discriminate use of export controls and investment screening. The United States must protect the national security sensitive elements of AI and other critical emerging technologies from foreign competitors, while ensuring that such efforts do not undercut U.S. investment and innovation. The Commission proposes that the President issue an Executive Order that outlines four principles to inform U.S. technology protection policies for export controls and investment screening, enhance the capacity of U.S. regulatory agencies in analyzing emerging technologies, and expedite the implementation of recent export control and investment screening reform legislation. Additionally, the Commission recommends prioritizing the application of export controls to hardware over other areas of AI-related technology. In practice, this requires working with key allies to control the supply of specific semiconductor manufacturing equipment critical to AI while simultaneously revitalizing the U.S. semiconductor industry and building the technology protection regulatory capacity of like-minded partners. Finally, the Commission recommends focusing the Committee on Foreign Investment in the United States (CFIUS) on preventing the transfer of technologies that create national security risks. This includes a legislative proposal granting the Department of the Treasury the authority to propose regulations for notice and public comment to mandate CFIUS filings for investments into AI and other sensitive technologies from China, Russia and other countries of special concern. The Commission’s recommendations would also exempt trusted allies and create fast tracks for vetted investors.
    • Reorienting the Department of State for great power competition in the digital age. Competitive diplomacy in AI and emerging technology arenas is a strategic imperative in an era of great power competition. Department of State personnel must have the organization, knowledge, and resources to advocate for American interests at the intersection of technology, security, economic interests, and democratic values. To strengthen the link between great power competition strategy, organization, foreign policy planning, and AI, the Department of State should create a Strategic Innovation and Technology Council as a dedicated forum for senior leaders to coordinate strategy and a Bureau of Cyberspace Security and Emerging Technology, which the Department has already proposed, to serve as a focal point and champion for security challenges associated with emerging technologies. To strengthen the integration of emerging technology and diplomacy, the Department of State should also enhance its presence and expertise in major tech hubs and expand training on AI and emerging technology for personnel at all levels across professional areas. Congress should conduct hearings to assess the Department’s posture and progress in reorienting to address emerging technology competition.
    • Creating a framework for the ethical and responsible development and fielding of AI. Agencies need practical guidance for implementing commonly agreed upon AI principles, and a more comprehensive strategy to develop and field AI ethically and responsibly. The NSCAI proposes a “Key Considerations” paradigm for agencies to implement that will help translate broad principles into concrete actions.
  • The Danish Defence Intelligence Service’s Centre for Cyber Security (CFCS) released its fifth annual assessment of the cyber threat against Denmark and concluded:
    • The cyber threat poses a serious risk to Denmark. Cyber attacks mainly carry economic and political consequences.
    • Hackers have tried to take advantage of the COVID-19 pandemic. This constitutes a new element in the general threat landscape.
    • The threat from cyber crime is VERY HIGH. No one is exempt from the threat. There is a growing threat from targeted ransomware attacks against Danish public authorities and private companies.
    • The threat from cyber espionage is VERY HIGH. The threat is especially directed against public authorities dealing with foreign and security policy issues as well as private companies whose knowledge is of interest to foreign states.
    • The threat from destructive cyber attacks is LOW. It is less likely that foreign states will launch destructive cyber attacks against Denmark. Private companies and public authorities operating in conflict-ridden regions are at a greater risk from this threat. 
    • The threat from cyber activism is LOW. Globally, the number of cyber activism attacks has dropped in recent years, and cyber activists rarely focus on Danish public authorities and private companies.
    • The threat from cyber terrorism is NONE. Serious cyber attacks aimed at creating effects similar to those of conventional terrorism presuppose a level of technical expertise and organizational resources that militant extremists, at present, do not possess. Also, the intention remains limited.
    • Technological developments, including artificial intelligence and quantum computing, create new cyber security possibilities and challenges.

Further Reading

  • “Accuse, Evict, Repeat: Why Punishing China and Russia for Cyberattacks Fails” – The New York Times. This piece points out that the United States (US) government is largely using 19th Century responses to address 21st Century conduct by expelling diplomats, imposing sanctions, and indicting hackers. Even a greater use of offensive cyber operations does not seem to be deterring the US’s adversaries. It may turn out that the US and other nations will need to focus more on defensive measures and on securing their valuable data and information.
  • “New police powers to be broad enough to target Facebook” – Sydney Morning Herald. On the heels of a 2018 law that some argue allows the government in Canberra to order companies to decrypt users’ communications, Australia is considering new legislation because of concern among the nation’s security services about end-to-end encryption and dark browsing. In particular, Facebook’s proposed changes to secure its networks are seen as fertile ground for criminals, especially those seeking to prey on children sexually.
  • “The U.S. has a stronger hand in its tech battle with China than many suspect” – The Washington Post. A national security writer makes the case that the cries that the Chinese are coming may prove as overblown as similar claims made about the Japanese during the 1980s and the Russians during the Cold War. The Trump Administration has used some levers that appear to impede the People’s Republic of China’s attempt to displace the United States. In all, this writer is calling for more balance in viewing the PRC and some of the challenges it poses.
  • “Facebook is taking a hard look at racial bias in its algorithms” – Recode. After a civil rights audit that was critical of Facebook, the company is assembling and deploying teams to try to address the biases in its algorithms on Facebook and Instagram. Critics doubt the efforts will succeed because economic incentives are aligned against rooting out such biases and the company lacks diversity.
  • “Does TikTok Really Pose a Risk to US National Security?” – WIRED. This article asserts TikTok is probably no riskier than other social media apps, even with the possibility that the People’s Republic of China (PRC) may have access to user data.
  • “France won’t ban Huawei, but encouraging 5G telcos to avoid it: report” – Reuters. Unlike the United States, the United Kingdom, and others, France will not outright ban Huawei from its 5G networks but will instead encourage its telecommunications companies to use European manufacturers. Some companies already have Huawei equipment on their networks and may receive authorization to use the company’s equipment for up to five more years. However, France is not planning on extending authorizations past that deadline, which will function as a de facto sunset. In contrast, authorizations for Ericsson or Nokia equipment were provided for eight years. The head of France’s cybersecurity agency stressed that France was not seeking to move against the People’s Republic of China (PRC) but is responding to security concerns.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading and Other Developments (17 July)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Speaking of which, the Technology Policy Update is being published daily during the week, and here are the Other Developments and Further Reading from this week.

Other Developments

  • Acting Senate Intelligence Committee Chair Marco Rubio (R-FL), Senate Foreign Relations Committee Chair Jim Risch (R-ID), and Senators Chris Coons (D-DE) and John Cornyn (R-TX) wrote Secretary of Commerce Wilbur Ross and Secretary of Defense Mike Esper “to ask that the Administration take immediate measures to bring the most advanced digital semiconductor manufacturing capabilities to the United States…[which] are critical to our American economic and national security and while our nation leads in the design of semiconductors, we rely on international manufacturing for advanced semiconductor fabrication.” This letter follows the Trump Administration’s May announcement that the Taiwan Semiconductor Manufacturing Corporation (TSMC) agreed to build a $12 billion plant in Arizona. It also bears note that one of the pending amendments to the “National Defense Authorization Act for Fiscal Year 2021” (S.4049) would establish a grants program to stimulate semiconductor manufacturing in the US.
  • Senators Mark R. Warner (D-VA), Mazie K. Hirono (D-HI) and Bob Menendez (D-NJ) sent a letter to Facebook “regarding its failure to prevent the propagation of white supremacist groups online and its role in providing such groups with the organizational infrastructure and reach needed to expand.” They also “criticized Facebook for being unable or unwilling to enforce its own Community Standards and purge white supremacist and other violent extremist content from the site” and posed “a series of questions regarding Facebook’s policies and procedures against hate speech, violence, white supremacy and the amplification of extremist content.”
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) published the Pipeline Cyber Risk Mitigation Infographic that was “[d]eveloped in coordination with the Transportation Security Administration (TSA)…[that] outlines activities that pipeline owners/operators can undertake to improve their ability to prepare for, respond to, and mitigate against malicious cyber threats.”
  • Representative Kendra Horn (D-OK) and 10 other Democrats introduced legislation “requiring the U.S. government to identify, analyze, and combat efforts by the Chinese government to exploit the COVID-19 pandemic” that was endorsed by “[t]he broader Blue Dog Coalition” according to their press release. The “Preventing China from Exploiting COVID-19 Act” (H.R.7484) “requires the Director of National Intelligence—in coordination with the Secretaries of Defense, State, and Homeland Security—to prepare an assessment of the different ways in which the Chinese government has exploited or could exploit the pandemic, which originated in China, in order to advance China’s interests and to undermine the interests of the United States, its allies, and the rules-based international order.” Horn and her cosponsors stated “[t]he assessment must be provided to Congress within 90 days and posted in unclassified form on the DNI’s website.”
  • The Supreme Court of Canada upheld the “Genetic Non-Discrimination Act” and denied a challenge to the legality of the statute brought by the government of Quebec, the Attorney General of Canada, and others. The court found:
    • The pith and substance of the challenged provisions is to protect individuals’ control over their detailed personal information disclosed by genetic tests, in the broad areas of contracting and the provision of goods and services, in order to address Canadians’ fears that their genetic test results will be used against them and to prevent discrimination based on that information. This matter is properly classified within Parliament’s power over criminal law. The provisions are supported by a criminal law purpose because they respond to a threat of harm to several overlapping public interests traditionally protected by the criminal law — autonomy, privacy, equality and public health.
  • The U.S.-China Economic and Security Review Commission published a report “analyzing the evolution of U.S. multinational enterprises (MNE) operations in China from 2000 to 2017.” The Commission found MNE’s operations in the People’s Republic of China “may indirectly erode the United States’ domestic industrial competitiveness and technological leadership relative to China” and “as U.S. MNE activity in China increasingly focuses on the production of high-end technologies, the risk that U.S. firms are unwittingly enabling China to achieve its industrial policy and military development objectives rises.”
  • The Federal Communications Commission (FCC) and Huawei filed their final briefs in their lawsuit before the United States Court of Appeals for the Fifth Circuit arising from the FCC’s designation of Huawei as a “covered company” for purposes of a rule that denies Universal Service Funds (USF) “to purchase or obtain any equipment or services produced or provided by a covered company posing a national security threat to the integrity of communications networks or the communications supply chain.” Huawei claimed in its brief that “[t]he rulemaking and “initial designation” rest on the FCC’s national security judgments…[b]ut such judgments fall far afield of the FCC’s statutory authority and competence.” Huawei also argued “[t]he USF rule, moreover, contravenes the Administrative Procedure Act (APA) and the Due Process Clause.” The FCC responded in its filing that “Huawei challenges the FCC’s decision to exclude carriers whose networks are vulnerable to foreign interference, contending that the FCC has neither statutory nor constitutional authority to make policy judgments involving “national security”…[but] [t]hese arguments are premature, as Huawei has not yet been injured by the Order.” The FCC added “Huawei’s claim that the Communications Act textually commits all policy determinations with national security implications to the President is demonstrably false.”
  • European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski released his Strategy for 2020-2024, “which will focus on Digital Solidarity.” Wiewiórowski explained that “three core pillars of the EDPS strategy outline the guiding actions and objectives for the organisation to the end of 2024:
    • Foresight: The EDPS will continue to monitor legal, social and technological advances around the world and engage with experts, specialists and data protection authorities to inform its work.
    • Action: To strengthen the EDPS’ supervision, enforcement and advisory roles the EDPS will promote coherence in the activities of enforcement bodies in the EU and develop tools to assist the EU institutions, bodies and agencies to maintain the highest standards in data protection.
    • Solidarity: While promoting digital justice and privacy for all, the EDPS will also enforce responsible and sustainable data processing, to positively impact individuals and maximise societal benefits in a just and fair way.
  • Facebook released a Civil Rights Audit, an “investigation into Facebook’s policies and practices began in 2018 at the behest and encouragement of the civil rights community and some members of Congress.” Those charged with conducting the audit explained that they “vigorously advocated for more and would have liked to see the company go further to address civil rights concerns in a host of areas that are described in detail in the report” including but not limited to
    • A stronger interpretation of its voter suppression policies — an interpretation that makes those policies effective against voter suppression and prohibits content like the Trump voting posts — and more robust and more consistent enforcement of those policies leading up to the US 2020 election.
    • More visible and consistent prioritization of civil rights in company decision-making overall.
    • More resources invested to study and address organized hate against Muslims, Jews and other targeted groups on the platform.
    • A commitment to go beyond banning explicit references to white separatism and white nationalism to also prohibit express praise, support and representation of white separatism and white nationalism even where the terms themselves are not used.
    • More concrete action and specific commitments to take steps to address concerns about algorithmic bias or discrimination.
    • They added that “[t]his report outlines a number of positive and consequential steps that the company has taken, but at this point in history, the Auditors are concerned that those gains could be obscured by the vexing and heartbreaking decisions Facebook has made that represent significant setbacks for civil rights.”
  • The National Security Commission on Artificial Intelligence (NSCAI) released a white paper titled “The Role of AI Technology in Pandemic Response and Preparedness” that “outlines a series of investments and initiatives that the United States must undertake to realize the full potential of AI to secure our nation against pandemics.” NSCAI also noted its previous two white papers.
  • Secretary of Defense Mark Esper announced that Chief Technology Officer Michael J.K. Kratsios has “been designated to serve as Acting Under Secretary of Defense for Research and Engineering” even though he does not have a degree in science. The last Under Secretary held a PhD. However, Kratsios worked for venture capitalist Peter Thiel, who backed President Donald Trump when he ran for office in 2016.
  • The United States’ Department of Transportation’s Federal Railroad Administration (FRA) issued research “to develop a cyber security risk analysis methodology for communications-based connected railroad technologies…[and] [t]he use-case-specific implementation of the methodology can identify potential cyber attack threats, system vulnerabilities, and consequences of the attack – with risk assessment and identification of promising risk mitigation strategies.”
  • In a blog post, a National Institute of Standards and Technology (NIST) economist asserted cybercrime may be having a much larger impact on the United States’ economy than previously thought:
    • In a recent NIST report, I looked at losses in the U.S. manufacturing industry due to cybercrime by examining an underutilized dataset from the Bureau of Justice Statistics, which is the most statistically reliable data that I can find. I also extended this work to look at the losses in all U.S. industries. The data is from a 2005 survey of 36,000 businesses with 8,079 responses, which is also by far the largest sample that I could identify for examining aggregated U.S. cybercrime losses. Using this data, combined with methods for examining uncertainty in data, I extrapolated upper and lower bounds, putting 2016 U.S. manufacturing losses to be between 0.4% and 1.7% of manufacturing value-added or between $8.3 billion and $36.3 billion. The losses for all industries are between 0.9% and 4.1% of total U.S. gross domestic product (GDP), or between $167.9 billion and $770.0 billion. The lower bound is 40% higher than the widely cited, but largely unconfirmed, estimates from McAfee.
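The extrapolation described in the quoted passage is straightforward percentage arithmetic. As a minimal sketch, the denominators below (2016 U.S. manufacturing value added and GDP, in billions) are approximations back-calculated from the quoted percentages and dollar bounds, not official statistics, so the outputs differ slightly from the report's figures.

```python
def loss_bounds(base_billions: float, low_pct: float, high_pct: float) -> tuple[float, float]:
    """Return (lower, upper) loss estimates in billions of dollars,
    given a base amount and lower/upper percentage bounds."""
    return base_billions * low_pct / 100, base_billions * high_pct / 100

# Approximate 2016 denominators, in billions (hypothetical round figures)
mfg_value_added = 2_100   # U.S. manufacturing value added
gdp = 18_700              # U.S. gross domestic product

mfg_low, mfg_high = loss_bounds(mfg_value_added, 0.4, 1.7)   # quoted: $8.3B-$36.3B
all_low, all_high = loss_bounds(gdp, 0.9, 4.1)               # quoted: $167.9B-$770.0B

print(f"Manufacturing losses: ${mfg_low:.1f}B to ${mfg_high:.1f}B")
print(f"All-industry losses:  ${all_low:.1f}B to ${all_high:.1f}B")
```

Note how wide the resulting interval is: the upper bound for all industries is more than four times the lower bound, which is why the piece stresses uncertainty in the underlying survey data.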
  • The Government Accountability Office (GAO) advised the Federal Communications Commission (FCC) that it needs a comprehensive strategy for implementing 5G across the United States. The GAO concluded
    • FCC has taken a number of actions regarding 5G deployment, but it has not clearly developed specific and measurable performance goals and related measures–with the involvement of relevant stakeholders, including National Telecommunications and Information Administration (NTIA)–to manage the spectrum demands associated with 5G deployment. This makes FCC unable to demonstrate whether the progress being made in freeing up spectrum is achieving any specific goals, particularly as it relates to congested mid-band spectrum. Additionally, without having established specific and measurable performance goals with related strategies and measures for mitigating 5G’s potential effects on the digital divide, FCC will not be able to assess the extent to which its actions are addressing the digital divide or what actions would best help all Americans obtain access to wireless networks.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) issued “Time Guidance for Network Operators, Chief Information Officers, and Chief Information Security Officers” “to inform public and private sector organizations, educational institutions, and government agencies on time resilience and security practices in enterprise networks and systems…[and] to address gaps in available time testing practices, increasing awareness of time-related system issues and the linkage between time and cybersecurity.”
  • Fifteen Democratic Senators sent a letter to the Department of Defense, Office of the Director of National Intelligence (ODNI), Department of Homeland Security (DHS), Federal Bureau of Investigation (FBI), and U.S. Cyber Command, urging them “to take additional measures to fight influence campaigns aimed at disenfranchising voters, especially voters of color, ahead of the 2020 election.” They called on these agencies to ensure that:
    • The American people and political candidates are promptly informed about the targeting of our political processes by foreign malign actors, and that the public is provided regular periodic updates about such efforts leading up to the general election.
    • Members of Congress and congressional staff are appropriately and adequately briefed on continued findings and analysis involving election related foreign disinformation campaigns and the work of each agency and department to combat these campaigns.
    • Findings and analysis involving election related foreign disinformation campaigns are shared with civil society organizations and independent researchers to the maximum extent which is appropriate and permissible.
    • Secretary Esper and Director Ratcliffe implement a social media information sharing and analysis center (ISAC) to detect and counter information warfare campaigns across social media platforms as authorized by section 5323 of the Fiscal Year 2020 National Defense Authorization Act.
    • Director Ratcliffe implement the Foreign Malign Influence Response Center to coordinate a whole of government approach to combatting foreign malign influence campaigns as authorized by section 5322 of the Fiscal Year 2020 National Defense Authorization Act.
  • The Information Technology and Innovation Foundation (ITIF) unveiled an issue brief “Why New Calls to Subvert Commercial Encryption Are Unjustified” arguing “that government efforts to subvert encryption would negatively impact individuals and businesses.” ITIF offered these “key takeaways:”
    • Encryption gives individuals and organizations the means to protect the confidentiality of their data, but it has interfered with law enforcement’s ability to prevent and investigate crimes and foreign threats.
    • Technological advances have long frustrated some in the law enforcement community, giving rise to multiple efforts to subvert commercial use of encryption, from the Clipper Chip in the 1990s to the San Bernardino case two decades later.
    • Having failed in these prior attempts to circumvent encryption, some law enforcement officials are now calling on Congress to invoke a “nuclear option”: legislation banning “warrant-proof” encryption.
    • This represents an extreme and unjustified measure that would do little to take encryption out of the hands of bad actors, but it would make commercial products less secure for ordinary consumers and businesses and damage U.S. competitiveness.
  • The White House released an executive order in which President Donald Trump determined “that the Special Administrative Region of Hong Kong (Hong Kong) is no longer sufficiently autonomous to justify differential treatment in relation to the People’s Republic of China (PRC or China) under the particular United States laws and provisions thereof set out in this order.” Trump further determined “the situation with respect to Hong Kong, including recent actions taken by the PRC to fundamentally undermine Hong Kong’s autonomy, constitutes an unusual and extraordinary threat, which has its source in substantial part outside the United States, to the national security, foreign policy, and economy of the United States…[and] I hereby declare a national emergency with respect to that threat.” The executive order would continue the Administration’s process of changing policy to ensure Hong Kong is treated the same as the PRC.
  • President Donald Trump also signed a bill enacted in response to legislation passed by the People’s Republic of China (PRC) that the United States and others claim will strip Hong Kong of the protections the PRC agreed to maintain for 50 years after the United Kingdom (UK) handed over the city. The “Hong Kong Autonomy Act” “requires the imposition of sanctions on Chinese individuals and banks who are included in an annual State Department list found to be subverting Hong Kong’s autonomy” according to the bill’s sponsor Representative Brad Sherman (D-CA).
  • Representative Stephen Lynch, who chairs the House Oversight and Reform Committee’s National Security Subcommittee, sent letters to Apple and Google “after the Office of the Director of National Intelligence (ODNI) and the Federal Bureau of Investigation (FBI) confirmed that mobile applications developed, operated, or owned by foreign entities, including China and Russia, could potentially pose a national security risk to American citizens and the United States” according to his press release. Citing letters the technology companies sent to the Subcommittee, he noted that:
    • Apple confirmed that it does not require developers to submit “information on where user data (if any such data is collected by the developer’s app) will be housed” and that it “does not decide what user data a third-party app can access, the user does.”
    • Google stated that it does “not require developers to provide the countries in which their mobile applications will house user data” and acknowledged that “some developers, especially those with a global user base, may store data in multiple countries.”
    • Lynch is seeking “commitments from Apple and Google to require information from application developers about where user data is stored, and to make users aware of that information prior to downloading the application on their mobile devices.”
  • Minnesota Attorney General Keith Ellison announced a settlement with Frontier Communications that “concludes the three major investigations and lawsuits that the Attorney General’s office launched into Minnesota’s major telecoms providers for deceptive, misleading, and fraudulent practices.” The Office of the Attorney General (OAG) stated
    • Based on its investigation, the Attorney General’s Office alleged that Frontier used a variety of deceptive and misleading practices to overcharge its customers, such as: billing customers more than they were quoted by Frontier’s agents; failing to disclose fees and surcharges in its sales presentations and advertising materials; and billing customers for services that were not delivered.
    • The OAG “also alleged that Frontier sold Minnesotans expensive internet services with so-called ‘maximum speed’ ratings that were not attainable, and that Frontier improperly advertised its service as ‘reliable,’ when in fact it did not provide enough bandwidth for customers to consistently receive their expected service.”
  • The European Data Protection Board (EDPB) issued guidelines “on the criteria of the Right to be Forgotten in the search engines cases under the GDPR” that “focuses solely on processing by search engine providers and delisting requests submitted by data subjects” even though Article 17 of the General Data Protection Regulation applies to all data controllers. The EDPB explained “This paper is divided into two topics:
    • The first topic concerns the grounds a data subject can rely on for a delisting request sent to a search engine provider pursuant to Article 17.1 GDPR.
    • The second topic concerns the exceptions to the Right to request delisting according to Article 17.3 GDPR.
  • The Australian Competition & Consumer Commission (ACCC) “is seeking views on draft Rules and accompanying draft Privacy Impact Assessment that authorise third parties who are accredited at the ‘unrestricted’ level to collect Consumer Data Right (CDR) data on behalf of another accredited person.” The ACCC explained “[t]his will allow accredited persons to utilise other accredited parties to collect CDR data and provide other services that facilitate the provision of goods and services to consumers.” In a March explanatory statement, the ACCC stated “[t]he CDR is an economy-wide reform that will apply sector-by-sector, starting with the banking sector…[and] [t]he objective of the CDR is to provide individual and business consumers (consumers) with the ability to efficiently and conveniently access specified data held about them by businesses (data holders), and to authorise the secure disclosure of that data to third parties (accredited data recipients) or to themselves.” The ACCC noted “[t]he CDR is regulated by both the ACCC and the Office of the Australian Information Commissioner (OAIC) as it concerns both competition and consumer matters as well as the privacy and confidentiality of consumer data.” Input is due by 20 July.
  • The Office of the Inspector General (OIG) for the Department of the Interior (Interior) found that even though the agency spends $1.4 billion annually on cybersecurity, “[g]uarding against increasing cybersecurity threats” remains one of Interior’s top challenges. The OIG asserted Interior “continues to struggle to implement an enterprise information technology (IT) security program that balances compliance, cost, and risk while enabling bureaus to meet their diverse missions.”
  • In a summary of its larger investigation into “Security over Information Technology Peripheral Devices at Select Office of Science Locations,” the Department of Energy’s Office of the Inspector General (OIG) stated that it “identified weaknesses related to access controls and configuration settings” for peripheral devices (e.g. thumb drives, printers, scanners, and other connected devices) “similar in type to those identified in prior evaluations of the Department’s unclassified cybersecurity program.”
  • The House Homeland Security Committee’s Cybersecurity, Infrastructure Protection, and Innovation Subcommittee Ranking Member John Katko (R-NY) introduced “a comprehensive national cybersecurity improvement package” according to his press release, consisting of these bills:
    • The “Cybersecurity and Infrastructure Security Agency Director and Assistant Directors Act:”  This bipartisan measure takes steps to improve guidance and long-term strategic planning by stabilizing the CISA Director and Assistant Directors positions. Specifically, the bill:
      • Creates a 5-year term for the CISA Director, with a limit of 2 terms. The term of office for the current Director begins on the date the Director began to serve.
      • Elevates the Director to the equivalent of a Deputy Secretary and Military Service Secretaries.
      • Depoliticizes the Assistant Director positions, which are appointed by the Secretary of the Department of Homeland Security (DHS), by categorizing them as career public servants.
    • The “Strengthening the Cybersecurity and Infrastructure Security Agency Act of 2020:” This measure mandates a comprehensive review of CISA in an effort to strengthen its operations, improve coordination, and increase oversight of the agency. Specifically, the bill:
      • Requires CISA to review how additional appropriations could be used to support programs for national risk management, federal information systems management, and public-private cybersecurity and integration. It also requires a review of workforce structure and current facilities and projected needs. 
      • Mandates that CISA provide a report to the House and Senate Homeland Security Committees within one year of enactment. CISA must also provide a report and recommendations to GSA on facility needs.
      • Requires GSA to provide a review to the Administration and House and Senate Committees on CISA facilities needs within 30 days of the Congressional report.
    • The “CISA Public-Private Talent Exchange Act:” This bill requires CISA to create a public-private workforce program to facilitate the exchange of ideas, strategies, and concepts between federal and private sector cybersecurity professionals. Specifically, the bill:
      • Establishes a public-private cyber exchange program allowing government and industry professionals to work in one another’s field.
      • Expands existing private outreach and partnership efforts. 
  • The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) is ordering United States federal civilian agencies “to apply the July 2020 Security Update for Windows Servers running DNS (CVE-2020-1350), or the temporary registry-based workaround if patching is not possible within 24 hours.” CISA stated “[t]he software update addresses a significant vulnerability where a remote attacker could exploit it to take control of an affected system and run arbitrary code in the context of the Local System Account.” CISA Director Christopher Krebs explained “due to the wide prevalence of Windows Server in civilian Executive Branch agencies, I’ve determined that immediate action is necessary, and federal departments and agencies need to take this remote code execution vulnerability in Windows Server’s Domain Name System (DNS) particularly seriously.”
  • The United States (US) Department of State has imposed “visa restrictions on certain employees of Chinese technology companies that provide material support to regimes engaging in human rights abuses globally,” a move aimed at Huawei. In its statement, the Department stated “Companies impacted by today’s action include Huawei, an arm of the Chinese Communist Party’s (CCP) surveillance state that censors political dissidents and enables mass internment camps in Xinjiang and the indentured servitude of its population shipped all over China.” The Department claimed “[c]ertain Huawei employees provide material support to the CCP regime that commits human rights abuses.”
  • Earlier in the month, the US Departments of State, Treasury, Commerce, and of Homeland Security issued an “advisory to highlight the harsh repression in Xinjiang.” The agencies explained
    • Businesses, individuals, and other persons, including but not limited to academic institutions, research service providers, and investors (hereafter “businesses and individuals”), that choose to operate in Xinjiang or engage with entities that use labor from Xinjiang elsewhere in China should be aware of reputational, economic, and, in certain instances, legal, risks associated with certain types of involvement with entities that engage in human rights abuses, which could include Withhold Release Orders (WROs), civil or criminal investigations, and export controls.
  • The United Kingdom’s National Cyber Security Centre (NCSC), Canada’s Communications Security Establishment (CSE), the United States’ National Security Agency (NSA), and the United States’ Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) issued a joint advisory on a Russian hacking organization whose efforts have “targeted various organisations involved in COVID-19 vaccine development in Canada, the United States and the United Kingdom, highly likely with the intention of stealing information and intellectual property relating to the development and testing of COVID-19 vaccines.” The agencies named APT29 (also known as ‘the Dukes’ or ‘Cozy Bear’), “a cyber espionage group, almost certainly part of the Russian intelligence services,” as the culprit behind “custom malware known as ‘WellMess’ and ‘WellMail.’”
    • This alert follows May advisories issued by Australia, the US, and the UK on hacking threats related to the pandemic. Australia’s Department of Foreign Affairs and Trade (DFAT) and the Australian Cyber Security Centre (ACSC) issued “Advisory 2020-009: Advanced Persistent Threat (APT) actors targeting Australian health sector organisations and COVID-19 essential services” that asserted “APT groups may be seeking information and intellectual property relating to vaccine development, treatments, research and responses to the outbreak as this information is now of higher value and priority globally.” CISA and NCSC issued a joint advisory for the healthcare sector, especially companies and entities engaged in fighting COVID-19. The agencies stated that they have evidence that Advanced Persistent Threat (APT) groups “are exploiting the COVID-19 pandemic as part of their cyber operations.” In an unclassified public service announcement, the Federal Bureau of Investigation (FBI) and CISA named the People’s Republic of China as a nation waging a cyber campaign against U.S. COVID-19 researchers. The agencies stated they “are issuing this announcement to raise awareness of the threat to COVID-19-related research.”
  • The National Initiative for Cybersecurity Education (NICE) has released for comment Draft National Institute of Standards and Technology (NIST) Special Publication (SP) 800-181 Revision 1, the Workforce Framework for Cybersecurity (NICE Framework), with comments due by 28 August. The draft features several updates, including:
    • an updated title to be more inclusive of the variety of workers who perform cybersecurity work,
    • definition and normalization of key terms,
    • principles that facilitate agility, flexibility, interoperability, and modularity,
    • introduction of competencies.
  • Representatives Glenn Thompson (R-PA), Collin Peterson (D-MN), and James Comer (R-KY) sent a letter to the Federal Communications Commission (FCC) “questioning the Commission’s April 20, 2020 Order granting Ligado’s application to deploy a terrestrial nationwide network to provide 5G services.”
  • The European Commission (EC) is asking for feedback on part of its recently released data strategy by 31 July. The EC stated it is aiming “to create a single market for data, where data from public bodies, business and citizens can be used safely and fairly for the common good…[and] [t]his initiative will draw up rules for common European data spaces (covering areas like the environment, energy and agriculture) to:
    • make better use of publicly held data for research for the common good
    • support voluntary data sharing by individuals
    • set up structures to enable key organisations to share data.
  • The United Kingdom’s Parliament is asking for feedback on its legislative proposal to regulate Internet of Things (IoT) devices. The Department for Digital, Culture, Media & Sport explained “the obligations within the government’s proposed legislative framework would fall mainly on the manufacturer if they are based in the UK, or if not based in the UK, on their UK representative.” The Department is also “developing an enforcement approach with relevant stakeholders to identify an appropriate enforcement body to be granted day to day responsibility and operational control of monitoring compliance with the legislation.” The Department also touted the publication of the European Telecommunications Standards Institute’s (ETSI) standard, which establishes a “security baseline for Internet-connected consumer devices and provides a basis for future Internet of Things product certification schemes.”
  • Facebook issued a white paper, titled “CHARTING A WAY FORWARD: Communicating Towards People-Centered and Accountable Design About Privacy,” in which the company states its desire to be involved in shaping a United States privacy law (See below for an article on this). Facebook concluded:
    • Facebook recognizes the responsibility we have to make sure that people are informed about the data that we collect, use, and share.
    • That’s why we support globally consistent comprehensive privacy laws and regulations that, among other things, establish people’s basic rights to be informed about how their information is collected, used, and shared, and impose obligations for organizations to do the same, including the obligation to build internal processes that maintain accountability.
    • As improvements to technology challenge historic approaches to effective communications with people about privacy, companies and regulators need to keep up with changing times.
    • To serve the needs of a global community, on both the platforms that exist now and those that are yet to be developed, we want to work with regulators, companies, and other interested third parties to develop new ways of informing people about their data, empowering them to make meaningful choices, and holding ourselves accountable.
    • While we don’t have all the answers, there are many opportunities for businesses and regulators to embrace modern design methods, new opportunities for better collaboration, and innovative ways to hold organizations accountable.
  • Four Democratic Senators sent Facebook a letter “about reports that Facebook has created fact-checking exemptions for people and organizations who spread disinformation about the climate crisis on its social media platform” following a New York Times article this week on the social media giant’s practices regarding climate disinformation. Even though the company has moved aggressively to take down false and inaccurate COVID-19 posts, climate disinformation lives on the platform largely unmolested for a couple of reasons. First, Facebook marks these sorts of posts as opinion and takes the approach that opinions should be judged under an absolutist free speech regime. Moreover, Facebook asserts posts of this sort do not pose any imminent harm and therefore do not need to be taken down. Despite having teams of fact checkers that could vet posts of demonstrably untrue information, Facebook chooses not to use them here, most likely because material that elicits strong reactions from users drives engagement that, in turn, drives advertising dollars. Senators Elizabeth Warren (D-MA), Tom Carper (D-DE), Sheldon Whitehouse (D-RI), and Brian Schatz (D-HI) argued “[i]f Facebook is truly “committed to fighting the spread of false news on Facebook and Instagram,” the company must immediately acknowledge in its fact-checking process that the climate crisis is not a matter of opinion and act to close loopholes that allow climate disinformation to spread on its platform.” They posed a series of questions to Facebook CEO Mark Zuckerberg on these practices, requesting answers by 31 July.
  • A Canadian court has found that the Canadian Security Intelligence Service (CSIS) “admittedly collected information in a manner that is contrary to this foundational commitment and then relied on that information in applying for warrants under the Canadian Security Intelligence Service Act, RSC 1985, c C-23 [CSIS Act]” according to a court summary of its redacted decision. The court further stated “[t]he Service and the Attorney General also admittedly failed to disclose to the Court the Service’s reliance on information that was likely collected unlawfully when seeking warrants, thereby breaching the duty of candour owed to the Court.” The court added “[t]his is not the first time this Court has been faced with a breach of candour involving the Service…[and] [t]he events underpinning this most recent breach were unfolding as recommendations were being implemented by the Service and the Attorney General to address previously identified candour concerns.” CSIS was found to have illegally collected and used metadata in a 2016 case concerning its conduct between 2006 and 2016. In response to the most recent ruling, CSIS is vowing to implement a range of reforms, and the National Security and Intelligence Review Agency (NSIRA) is pledging the same.
  • The United Kingdom’s National Police Chiefs’ Council (NPCC) announced the withdrawal of “[t]he ‘Digital device extraction – information for complainants and witnesses’ form and ‘Digital Processing Notice’ (‘the relevant forms’) circulated to forces in February 2019 [that] are not sufficient for their intended purpose.” In mid-June, the UK’s data protection authority, the Information Commissioner’s Office (ICO) unveiled its “finding that police data extraction practices vary across the country, with excessive amounts of personal data often being extracted, stored, and made available to others, without an appropriate basis in existing data protection law.” This withdrawal was also due, in part, to a late June Court of Appeal decision.  
  • A range of public interest and advocacy organizations sent a letter to Speaker of the House Nancy Pelosi (D-CA) and House Minority Leader Kevin McCarthy (R-CA) noting “there are intense efforts underway to do exactly that, via current language in the House and Senate versions of the FY2021 National Defense Authorization Act (NDAA) that ultimately seek to reverse the FCC’s recent bipartisan and unanimous approval of Ligado Networks’ regulatory plans.” They urged them to “not endorse efforts by the Department of Defense and its allies to veto commercial spectrum authorizations…[and] [t]he FCC has proven itself to be the expert agency on resolving spectrum disputes based on science and engineering and should be allowed to do the job Congress authorized it to do.” In late April, the FCC’s “decision authorize[d] Ligado to deploy a low-power terrestrial nationwide network in the 1526-1536 MHz, 1627.5-1637.5 MHz, and 1646.5-1656.5 MHz bands that will primarily support Internet of Things (IoT) services.” The agency argued the order “provides regulatory certainty to Ligado, ensures adjacent band operations, including Global Positioning System (GPS), are sufficiently protected from harmful interference, and promotes more efficient and effective use of [the U.S.’s] spectrum resources by making available additional spectrum for advanced wireless services, including 5G.”
  • The European Data Protection Supervisor (EDPS) rendered his opinion on the European Commission’s White Paper on Artificial Intelligence: a European approach to excellence and trust and recommended that the European Union’s (EU) future regulation of artificial intelligence (AI):
    • applies both to EU Member States and to EU institutions, offices, bodies and agencies;
    • is designed to protect from any negative impact, not only on individuals, but also on communities and society as a whole;
    • proposes a more robust and nuanced risk classification scheme, ensuring any significant potential harm posed by AI applications is matched by appropriate mitigating measures;
    • includes an impact assessment clearly defining the regulatory gaps that it intends to fill;
    • avoids overlap of different supervisory authorities and includes a cooperation mechanism.
    • Regarding remote biometric identification, the EDPS supports the idea of a moratorium on the deployment, in the EU, of automated recognition in public spaces of human features, not only of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, so that an informed and democratic debate can take place and until the moment when the EU and Member States have all the appropriate safeguards, including a comprehensive legal framework in place to guarantee the proportionality of the respective technologies and systems for the specific use case.
  • The Bundesamt für Verfassungsschutz (BfV), Germany’s domestic security agency, released a summary of its annual report in which it claimed:
    • The Russian Federation, the People’s Republic of China, the Islamic Republic of Iran and the Republic of Turkey remain the main countries engaged in espionage activities and trying to exert influence on Germany.
    • The ongoing digital transformation and the increasingly networked nature of our society increases the potential for cyber attacks, worsening the threat of cyber espionage and cyber sabotage.
    • The intelligence services of the Russian Federation and the People’s Republic of China in particular carry out cyber espionage activities against German agencies. One of their tasks is to boost their own economies with the help of information gathered by the intelligence services. This type of information-gathering campaign severely threatens the success and development opportunities of German companies.
    • To counteract this threat, Germany has a comprehensive cyber security architecture in place, which is operated by a number of different authorities. The BfV plays a major role in investigating and defending against cyber threats by detecting attacks, attributing them to specific attackers, and using the knowledge gained from this to draw up prevention strategies. The National Cyber Response Centre, in which the BfV plays a key role, was set up to consolidate the co-operation between the competent agencies. The National Cyber Response Centre aims to optimise the exchange of information between state agencies and to improve the co-ordination of protective and defensive measures against potential IT incidents.

Further Reading

  • “Trump confirms cyberattack on Russian trolls to deter them during 2018 midterms” – The Washington Post. In an interview with former George W. Bush speechwriter Marc Thiessen, President Donald Trump confirmed he ordered a widely reported retaliatory attack on the Russian Federation’s Internet Research Agency as a means of preventing interference during the 2018 mid-term election. Trump claimed the attack he ordered was the first action the United States took against Russian hacking, even though his predecessor warned Russian President Vladimir Putin to stop such activities and imposed sanctions at the end of 2016. The timing of Trump’s revelation is interesting given the ongoing furor over reports that the Trump Administration may have known of Russian bounties paid to Taliban fighters for killing Americans but did little or nothing in response.
  • “Germany proposes first-ever use of EU cyber sanctions over Russia hacking” – Deutsche Welle. Germany is looking to use the European Union’s (EU) cyber sanctions powers against Russia for the alleged 2015 exfiltration of 16 GB of data from the Bundestag’s systems, including from Chancellor Angela Merkel’s office. Germany has been alleging that Fancy Bear (aka APT28) and Russia’s military intelligence service, the GRU, carried out the attack. Germany has circulated its case for sanctions to other EU nations and EU leadership. In 2017, the European Council declared “[t]he EU diplomatic response to malicious cyber activities will make full use of measures within the Common Foreign and Security Policy, including, if necessary, restrictive measures…[and] [a] joint EU response to malicious cyber activities would be proportionate to the scope, scale, duration, intensity, complexity, sophistication and impact of the cyber activity.”
  • “Wyden Plans Law to Stop Cops From Buying Data That Would Need a Warrant” – VICE. Following on a number of reports that federal, state, and local law enforcement agencies are essentially sidestepping the Fourth Amendment by buying location and other data from people’s smartphones, Senator Ron Wyden (D-OR) is going to draft legislation that would seemingly close what he, and other civil libertarians, are calling a loophole to the warrant requirement.
  • “Amazon Backtracks From Demand That Employees Delete TikTok” – The New York Times. Amazon first instructed its employees on 11 July to remove ByteDance’s app, TikTok, from company devices and then reversed course the same day, claiming the email had been erroneously sent out. The strange episode capped another tumultuous week for ByteDance as the Trump Administration intensifies pressure in a number of ways on the company, which officials claim is subject to the laws of the People’s Republic of China and hence must share information with the government in Beijing. ByteDance counters that the app marketed in the United States is offered through a subsidiary not subject to PRC law. ByteDance also said it would no longer offer the app in Hong Kong after a change in PRC law extended the PRC’s reach into the former British colony. TikTok was also recently banned in India as part of a larger struggle between India and the PRC. Additionally, the Democratic National Committee warned staff about using the app this week.
  • “Is it time to delete TikTok? A guide to the rumors and the real privacy risks.” – The Washington Post. A columnist and a security specialist found that ByteDance’s app vacuums up information from users, but so do Facebook and other similar apps. They scrutinized TikTok’s privacy policy and where the data went, and they could not say with certainty that it goes to, and stays on, servers in the US and Singapore.
  • “California investigating Google for potential antitrust violations” – Politico. California Attorney General Xavier Becerra is going to conduct his own investigation of Google, separate and apart from the investigation of the company’s advertising practices being conducted by virtually every other state in the United States. It is unclear why Becerra opted against joining the larger probe launched in September 2019. Of course, the Trump Administration’s Department of Justice is also investigating Google and could file suit as early as this month.
  • “How May Google Fight an Antitrust Case? Look at This Little-Noticed Paper” – The New York Times. In a filing with the Australian Competition and Consumer Commission (ACCC), Google claimed it does not control the online advertising market, a position it says is borne out by a number of indicia that argue against a monopolistic situation. The company is likely to make the same case to the United States government in its antitrust inquiry. However, similar arguments did not gain traction before the European Commission, which levied a €1.49 billion fine for “breaching EU antitrust rules” in March 2019.
  •  “Who Gets the Banhammer Now?” – The New York Times. This article examines possible motives for the recent wave of action by social media platforms to police a fraction of the extreme and hateful speech activists and others have been asking them to take down for years. It argues that social media platforms are businesses and operate as such, and that expecting them to behave as de facto public squares dedicated to civil political and societal discourse is more or less how we ended up where we are.
  • “TikTok goes tit-for-tat in appeal to MPs: ‘stop political football’” – The Australian. ByteDance is lobbying hard in Canberra to talk Members of Parliament out of banning TikTok, as the United States has said it is considering doing. While ByteDance claims the data collected on users in Australia is sent to the US or Singapore, some experts argue that merely maintaining and improving the app would necessarily result in some non-People’s Republic of China (PRC) user data making its way back to the PRC. As Australia’s relationship with the PRC has grown more fraught, with allegations that PRC hackers infiltrated Parliament and the Prime Minister all but saying PRC hackers were targeting hospitals and medical facilities, the government in Canberra could follow India’s lead and ban the app.
  • “Calls for inquiry over claims Catalan lawmaker’s phone was targeted” – The Guardian. British and Spanish newspapers are reporting that an official in Catalonia who favors separating the region from Spain may have had his smartphone compromised with industrial-grade spyware typically used only by law enforcement and counterterrorism agencies. The President of the Parliament of Catalonia, Roger Torrent, claims his phone was hacked for domestic political purposes, a claim other Catalan leaders have echoed. A spokesperson for the Spanish government said “[t]he government has no evidence that the speaker of the Catalan parliament has been the victim of a hack or theft involving his mobile.” However, the University of Toronto’s Citizen Lab, the entity that researched and claimed that Israeli firm NSO Group’s spyware was deployed via WhatsApp to spy on a range of journalists, officials, and dissidents, often by their own governments, confirmed that Torrent’s phone was compromised.
  • “While America Looks Away, Autocrats Crack Down on Digital News Sites” – The New York Times. The Trump Administration’s combative relationship with the media in the United States may be encouraging other nations to crack down on digital media outlets trying to hold those governments to account.
  •  “How Facebook Handles Climate Disinformation” – The New York Times. Even though the social media giant has moved aggressively to take down false and inaccurate COVID-19 posts, climate disinformation lives on largely unmolested, for a couple of reasons. First, Facebook marks these sorts of posts as opinion and takes the approach that opinions should be judged under an absolutist free speech regime. Moreover, Facebook asserts posts of this sort do not pose any imminent harm and therefore do not need to be taken down. Despite having teams of fact checkers who could vet posts containing demonstrably untrue information, Facebook chooses not to deploy them here, most likely because material that elicits strong reactions from users drives engagement that, in turn, drives advertising dollars.
  • “Here’s how President Trump could go after TikTok” – The Washington Post. This piece lays out two means the Trump Administration could employ to press ByteDance in the immediate future: the May 2019 Executive Order “Securing the Information and Communications Technology and Services Supply Chain” or the Committee on Foreign Investment in the United States (CFIUS) process examining ByteDance’s acquisition of the app Musical.ly that became TikTok. Left unmentioned in this article is the possibility of the Federal Trade Commission (FTC) revisiting its 2019 settlement with ByteDance over violations of the “Children’s Online Privacy Protection Act” (COPPA).
  • “You’re Doomscrolling Again. Here’s How to Snap Out of It.” – The New York Times. If you find yourself endlessly looking through social media feeds, this piece explains why and how you might stop doing so.
  • “UK selling spyware and wiretaps to 17 repressive regimes including Saudi Arabia and China” – The Independent. There are allegations that the British government has ignored its own regulations on selling equipment and systems that can be used for surveillance and spying to governments with spotty human rights records. Specifically, the United Kingdom (UK) has sold £75 million worth of such technology to countries that non-governmental organizations (NGOs) rate as “not free,” including the People’s Republic of China (PRC), the Kingdom of Saudi Arabia, Bahrain, and others. Not surprisingly, NGOs and the opposition Labour party are calling for an investigation and changes.
  • “Google sued for allegedly tracking users in apps even after opting out” – CNET. Boies Schiller Flexner filed what will undoubtedly seek to become a class action suit over Google’s allegedly continuing to track users even when they turned off tracking features. This follows a suit the same firm filed against Google in June, claiming its Chrome browser still tracks people when they switch to incognito mode.
  • “Secret Trump order gives CIA more powers to launch cyberattacks” – Yahoo! News. According to this blockbuster story, sourced to unnamed former United States (US) officials, in addition to signing National Security Presidential Memorandum (NSPM) 13, which revamped and eased offensive cyber operations for the Department of Defense, President Donald Trump signed a presidential finding that has allowed the Central Intelligence Agency (CIA) to launch its own offensive cyber attacks, aimed mainly at Russia and Iran. Now the decision to commence an attack is not vetted by the National Security Council; rather, the CIA makes the decision. Consequently, there have been a number of attacks on US adversaries that until now have not been associated with the US. And the CIA is apparently not informing the National Security Agency or Cyber Command of its operations, raising the risk of US cyber forces working at cross purposes or against one another in cyberspace. Moreover, a recently released report blamed the lax security environment at the CIA for a massive exfiltration of hacking tools released by WikiLeaks.
  • “Facebook’s plan for privacy laws? ‘Co-creating’ them with Congress” – Protocol. In concert with the release of a new white paper, Facebook Deputy Chief Privacy Officer Rob Sherman sat for an interview in which he pledged the company’s willingness to work with Congress to co-develop a national privacy law. However, he would not comment on any of the many privacy bills released thus far, or on the policy contours of a bill Facebook would favor, except to advocate an enhanced notice-and-consent regime under which people would be better informed about how their data is being used. Sherman also shrugged off suggestions that Facebook may not be welcome given its record of privacy violations. Finally, it bears mention that similar efforts by other companies at the state level have not yet succeeded. For example, Microsoft’s efforts in Washington state have not borne fruit in the passage of a privacy law.
  • “Deepfake used to attack activist couple shows new disinformation frontier” – Reuters. We are at the beginning of a new age of disinformation in which fake photographs and video will be used to wage campaigns against nations, causes, and people. An activist and his wife were accused of being terrorist sympathizers by a purported university student who was apparently an elaborate fabrication by someone or some group looking to defame the couple. Small errors gave away the ruse this time, but advances in technology are likely to make detection all the harder.
  • “Biden, billionaires and corporate accounts targeted in Twitter hack” – The Washington Post. Policymakers and security experts were alarmed when the accounts of major figures like Bill Gates and Barack Obama were hijacked yesterday by a group running a bitcoin scam. They argue Twitter was lucky this time and that a more ideologically motivated enemy may seek to cause havoc, say, around the United States’ coming election. A number of experts claim the penetration of the platform must have involved internal controls for so many high-profile accounts to be taken over at the same time.
  • “TikTok Enlists Army of Lobbyists as Suspicions Over China Ties Grow” – The New York Times. ByteDance’s payments for lobbying services in Washington doubled between the last quarter of 2019 and the first quarter of 2020, as the company has retained more than 35 lobbyists to push back against the Trump Administration’s rhetoric and policy changes. The company is fighting a floated proposal to ban the TikTok app on national security grounds, which would cut it off from another of its top markets after India banned it and scores of other apps from the People’s Republic of China. Even if the Administration does not bar use of the app in the United States, the company faces legislation, to be acted upon next week by a Senate committee, that would ban its use on federal networks and devices. Moreover, ByteDance’s acquisition of the app that became TikTok is facing a retrospective national security review by an inter-agency committee that could result in an unwinding of the deal. Finally, the Federal Trade Commission (FTC) has been urged to review ByteDance’s compliance with a 2019 settlement over allegations that the company violated regulations protecting the privacy of children, which could result in multi-billion-dollar liability if wrongdoing is found.
  • “Why Google and Facebook Are Racing to Invest in India” – Foreign Policy. With New Delhi banning 59 apps and platforms from the People’s Republic of China (PRC), two American firms have invested in an Indian giant with an eye toward the nearly 500 million Indians not yet online. Reliance Industries’ Jio Platforms has sold stakes to Google and Facebook worth $4.5 billion and $5.7 billion, respectively, giving them prized positions as the company looks to expand into 5G and other online ventures. This will undoubtedly give the United States’ online giants a leg up in vying with competitors in the world’s second most populous nation.
  • “‘Outright Lies’: Voting Misinformation Flourishes on Facebook” – ProPublica. In this piece published with First Draft, “a global nonprofit that researches misinformation,” an analysis of the most popular claims made about mail voting shows that many of them are inaccurate or false, thus violating the platform’s terms of service; yet Facebook had done nothing to remove them or mark them as inaccurate as of the time this article was being written.
  • “Inside America’s Secretive $2 Billion Research Hub” – Forbes. Using contract information obtained through Freedom of Information requests and interviews, this article shines a light on the little-known nonprofit MITRE Corporation, which has been helping the United States government address numerous technological problems since the late 1950s. The article uncovers some of its latest, federally funded projects that are raising eyebrows among privacy advocates: technology to lift people’s fingerprints from social media pictures, technology to scan and copy Internet of Things (IoT) devices from a distance, a scanner to read a person’s DNA, and others.
  • “The FBI Is Secretly Using A $2 Billion Travel Company As A Global Surveillance Tool” – Forbes. In his second blockbuster article in a week, Forbes reporter Thomas Brewster exposes how the United States (US) government is using questionable court orders to gather travel information from the three companies that provide airlines, hotels, and other travel entities with back-end reservation and booking functions. The three companies, one of which, Sabre, is a US multinational, hold masses of information on anyone who has ever traveled, and US law enforcement agencies, namely the Federal Bureau of Investigation, are using a 1789 statute to obtain orders, which all three companies have to obey, for information to track suspects. Allegedly, this capability has only been used to track terror suspects but will now reportedly be used for COVID-19 tracking.
  • “With Trump CIA directive, the cyber offense pendulum swings too far” – Yahoo! News. Former United States (US) National Coordinator for Security, Infrastructure Protection, and Counter-terrorism Richard Clarke argues against the Central Intelligence Agency (CIA) having carte blanche in conducting cyber operations without the review or input of other federal agencies. He suggests that the CIA in particular, and agencies in general, tend to push their authority to the extreme, which in this case could lead to incidents and lasting precedents in cyberspace that may haunt the US. Clarke also intimated that it may have been the CIA and not Israel that launched cyber attacks on infrastructure facilities in Tehran this month and last.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.