Update To Pending Legislation In U.S. Congress, Part VI

An AI resolution was introduced in Congress to shape the national strategy, and a committee of jurisdiction looks at a national commission on AI’s recommendations.

Last week, we looked at the artificial intelligence (AI) legislation that could move during the balance of the Congressional year, but two recent developments should also be noted. I neglected to mention the introduction of a resolution “[e]xpressing the sense of Congress with respect to the principles that should guide the national artificial intelligence strategy of the United States.” Of course, this is not legislation and would have no legal force that this or future Administrations would need to heed. Rather, this effort is intended to serve as a guide for future legislation and future administrative action.

Representatives Will Hurd (R-TX) and Robin Kelly (D-IL) introduced this resolution that was cosponsored by Representatives Steve Chabot (R-OH), Gerald Connolly (D-VA), Marc Veasey (D-TX), Seth Moulton (D-MA), Michael Cloud (R-TX), and Jim Baird (R-IN).

Hurd and Kelly have been working with the Bipartisan Policy Center, a Washington, D.C. think tank founded by four former Senate Majority Leaders to produce policy consensus of the sort that used to happen in Congress. Together they produced four white papers on AI.

The resolution states “[i]t is the sense of Congress that the following principles should guide the national artificial intelligence strategy of the United States:

(1) Global leadership.

(2) A prepared workforce.

(3) National security.

(4) Effective research and development.

(5) Ethics, reduced bias, fairness, and privacy.”

By way of contrast, the February 2019 Executive Order (EO) 13859 on Maintaining American Leadership in Artificial Intelligence stated “[i]t is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:

(a) The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.

(b) The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.

(c) The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.

(d) The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.

(e) The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.”

While the Trump Administration’s materials on the EO have mentioned civil liberties and privacy, they have largely not examined the potential effects of AI with respect to bias and fairness. Democrats have generally been keener to investigate potential problems with the algorithms underlying AI and similar technologies perpetuating racial and ethnic biases in western society. For example, facial recognition technology misidentifies African Americans, Latinos, and Asian Americans at much higher rates than American men of European descent. The Hurd/Kelly resolution would seem to focus more on these issues than the Trump Administration’s public materials on its AI efforts.

The two efforts would seem fairly close on the role the U.S. would ideally play in the international development of AI. The nation would lead the development and implementation of AI under both plans, with the additional gloss that the Trump Administration is more transparent in its notion that leading the world in AI will help ensure continued American military and commercial dominance in technology. Both are motivated, in significant part, by concerns that the People’s Republic of China (PRC) may continue on its current technological trajectory, surpass the U.S. in AI, and then be poised to lead the world according to its values in this field. It is possible the AI effort in the U.S. will be informed as much by this competition as various fields in the mid-20th Century were by the Cold War with the Soviet Union.

Otherwise, both are focused on workforce development, both to foster the types of education and training needed for people to work in AI and to help people in industries revolutionized or disrupted by AI. Likewise, both are concerned with maximizing R&D funding and efforts.

Last week, the House Armed Services Committee’s Intelligence and Emerging Threats and Capabilities Subcommittee conducted a virtual hearing titled “Interim Review of the National Security Commission on Artificial Intelligence Effort and Recommendations” with these witnesses:

  • Dr. Eric Schmidt, Chairman, National Security Commission on Artificial Intelligence
  • HON Robert Work, Vice Chairman, National Security Commission on Artificial Intelligence
  • HON Mignon Clyburn, Commissioner, National Security Commission on Artificial Intelligence
  • Dr. José-Marie Griffiths, Commissioner, National Security Commission on Artificial Intelligence

Chair James Langevin (D-RI) stated:

  • Our intent for this commission was to ensure a bipartisan whole-of-government effort focused on solving national security issues, and we appreciate the leadership and hard work of our witnesses in supporting the commission’s efforts in that spirit.
  • [T]his Commission is working through the difficult issues requiring national investments in research and software development and new approaches on how to apply AI appropriately for national security missions; attract and hold onto the best talent; protect and build upon our technical advantages; best partner with our allies on AI; stay ahead of the threat posed by this technology in the hands of adversaries; and implement ethical requirements for responsible American-built AI.
  • Indeed, last year the Defense Innovation Board, which was also chaired until recently by Dr. Schmidt, helped the Department begin the necessary discussions on ethics in AI.
  • I applaud the Commission for being forward leaning by not only releasing an initial and annual report as required in law, but also releasing quarterly recommendations. Ranking Member [Elise] Stefanik (R-NY) and I, along with Chair Adam Smith (D-WA) and Ranking Member Mac Thornberry (R-TX), were pleased to support a package of provisions in this year’s House version of the FY 2021 National Defense Authorization Act (NDAA) (the “William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395)) based on the Commission’s first quarter’s recommendations. The House version carried 11 provisions, with the majority deriving from the Commission’s call to Strengthen the AI Workforce. We are pleased that both Commissioner Griffiths and Commissioner Clyburn are with us today to testify on the need for action on AI talent. 
  • On that note, we must implement policies that promote a sound economic, political, and strategic environment on U.S. soil where global collaboration, discovery, and innovation can all thrive. The open dialogue and debate resident in academia and the research community can be anathema to the requirement for secrecy in the Department of Defense.
  • But we must recognize – and embrace – how our free society provides the competitive advantage that lets us innovate faster than our great power competitors. Our free society enables a dynamic innovation ecosystem, and federally funded open basic research focused on discovery has allowed American universities to develop an innovation base that has effectively functioned as a talent acquisition program for the U.S. economy. And that talent is required today as much as ever to solve our most pressing national security challenges.
  • Indeed, great power competition is also a race for talent. We are looking forward to hearing about your efforts, the observations and recommendations you’ve already developed, and your plan to continue until you submit the Commission’s final report in the spring.

Ranking Member Elise Stefanik (R-NY) noted she introduced a bill in March 2018 to establish a national commission on AI and cosponsored the 11 amendments to H.R.6395 that added the Commission’s first quarter recommendations to the House’s FY 2021 NDAA. She asserted this represents a remarkable achievement that speaks to the quality of the recommendations made to policymakers. Stefanik said in her remarks before the Commission she spoke about the need for AI to be transformative and stressed that if AI does not fundamentally change the way the U.S. operates, adapt the collective defense, change workforce policy, change priorities and shift resources, then the U.S. is failing to embrace the technology to its fullest. She expressed pleasure that many of the initial recommendations address these issues.

Stefanik claimed the last several weeks have provided glimpses at the power of AI. She cited the Defense Advanced Research Projects Agency’s (DARPA) AlphaDogfight demonstration, which pitted an experienced fighter pilot against an algorithm developed by a minority woman owned small business from Maryland. Stefanik noted the AI decisively won, and Secretary of Defense Mark Esper characterized the victory as the “tectonic impact of machine learning on the future of warfighting.” Stefanik said a hypervelocity weapon shot down a cruise missile with the help of an advanced battle management system powered by powerful data analytics and AI capabilities. She said the head of Northern Command remarked afterwards, “I am not a skeptic after watching today.”

Stefanik stated that the policy governing AI is equally as important as technical demonstrations, specifically the development of standards, ethical principles, accountability, and the appropriate level of human oversight. She asserted all of these will be critical to ensuring Americans trust the use of AI. Stefanik contended that the Commission’s work is crucial in ensuring an enduring partnership of the military, academia, and the private sector built on trust, democratic ideals, and mutual values.

In their joint testimony, the four Commissioners stated:

We are encouraged to see several NSCAI recommendations reflected in the House and Senate versions of this year’s NDAA, and would like to take this opportunity to comment on the importance of legislative action in five key areas. We believe it is crucial for these recommendations to reach the President’s desk and become law.

1. Expanding AI Research and Development

Both the House and Senate bills feature encouraging actions on federal government investment in AI research and development, public-private coordination, and establishment of technical standards. The Commission shares these priorities.

We want to emphasize the importance of creating a National AI Research Resource. There is a growing divide in AI research between “haves” in the private sector and “have nots” in academia. Much of today’s AI research depends on access to resource-intensive computation and large, curated data sets. These are held primarily in companies. We fear that this growing gap will degrade research and training at our universities.

2. DOD Organizational Reforms

We have made a number of proposals to ensure the Department of Defense (DOD) is well positioned to excel in the AI era. In particular, we want to emphasize the need for a senior-level Steering Committee on Emerging Technology. This top-down approach would help the Department overcome some of the bureaucratic challenges that are impeding AI adoption. It would also focus concept and capability development on emerging threats, and guide defense investments to ensure strategic advantage against near-peer competitors.

Importantly, we believe this Steering Committee must include the Intelligence Community (IC). A central goal of our recommendation is to create a leadership mechanism that bridges DOD and the IC. This would better integrate intelligence analysis related to emerging technologies with defense capability development. And it would help ensure that DOD and the IC have a shared vision of national security needs and coherent, complementary investment strategies.

3. Microelectronics

We believe the United States needs a national strategy for microelectronics. Recent advances in AI have depended heavily on advances in available computing power. To preserve U.S. global leadership in AI, we need to preserve leadership in the underlying microelectronics.

In our initial reports, the Commission has put forward specific recommendations to lay the groundwork for long-term access to resilient, trusted, and assured microelectronics. We propose a portfolio-based approach to take advantage of American strengths and ensure the United States stays ahead of competitors in this field.

4. Ethical and Responsible Use

Determining how to use AI responsibly is central to the Commission’s work. We recently published a detailed “paradigm” of issues and practices that government agencies should consider in developing and fielding AI. We believe these proposals can help DOD and the IC to operationalize their AI ethics principles.

Within the government, it is important to develop an understanding of these principles and practices, and an awareness of the risks and limitations associated with AI systems. That is why we recommend that DOD, the IC, the Department of Homeland Security (DHS), and the Federal Bureau of Investigation (FBI) conduct self-assessments. These should focus on several issues:

  • Whether the department/agency has access to adequate in-house expertise––including ethical, legal, and technical expertise––to assist in the development and fielding of responsible AI systems;
  • Whether current procurement processes sufficiently encourage or require such expertise to be utilized in acquiring commercial AI systems; and,
  • Whether organizations have the ability and resources to consult outside experts when in-house expertise is insufficient.

5. Workforce Reforms

Much of the Commission’s early work has focused on building an AI-ready national security workforce. This includes recruiting experts and developers, training end users, identifying talented individuals, and promoting education. If the government cannot improve its recruitment and hiring, or raise the level of AI knowledge in its workforce, we will struggle to achieve any significant AI progress.

In particular, we support several provisions in the current versions of the NDAA. These include:

  • Training courses in AI and related topics for human resources practitioners, to improve the government’s recruitment of AI talent.
  • The creation of unclassified workspaces. This would allow organizations to hire and utilize new employees more quickly, while their security clearances are in process.
  • A pilot program for the use of electronic portfolios to evaluate applicants for certain technical positions. Because AI and software development are sometimes self-taught fields, experts do not always have resumes that effectively convey their knowledge. The pilot program would pair HR professionals with subject matter experts to better assess candidates’ previous work as a tangible demonstration of their capabilities.
  • A program to track and reward the completion of certified AI training and courses. This would help agencies identify and capitalize on AI talent within the ranks.
  • A mechanism for hiring university faculty with relevant expertise to serve as part-time researchers in government laboratories. The government would benefit from access to more outside experts. We believe this mechanism should apply not only to DOD but also to DHS, the Department of Commerce, the Department of Energy (DOE), and the IC.
  • Expanding the use of public-private talent exchange programs in DOD. We recommend expanding both the number of participants in general and the number of exchanges with AI-focused companies in particular. We also recommend creating an office to manage civilian talent exchanges and hold their billets.
  • An addition to the Armed Services Vocational Aptitude Battery Test to include testing for computational thinking. This would provide the military with a systematic way to identify potential AI talent.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Owen Beard on Unsplash
