At this point, Congress is just looking to organize U.S. AI efforts, maximize resources, and better understand the field.
Today, let us survey bills on artificial intelligence (AI), an area of growing interest and concern among Democratic and Republican Members. Lawmakers and staff have been grappling with this new technology and, at this point, are looking to study and foster its development, particularly to maintain the technological dominance of the United States (U.S.). Some of these bills may get enacted this year. However, any legislative action would play out against extensive executive branch AI efforts. In any event, Congress does not seem close to passing legislation that would regulate the technology and appears content to rely on existing statutes and regulators (e.g., the Federal Trade Commission’s power to police unfair and deceptive practices).
The bill with the best chance of enactment at present is the “National Artificial Intelligence Initiative Act of 2020” (H.R.6216), which was added to the “William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395), a bill that contains other, mostly defense-related, AI provisions.
Big picture, H.R. 6216 would require better coordination of federal AI initiatives, research, and funding, and more involvement in the development of voluntary, consensus-based standards for AI. Much of this would happen through a new “National Artificial Intelligence Initiative Office” that the Office of Science and Technology Policy (OSTP) in the White House would stand up. This new entity would be the locus of AI activities and programs in the U.S. government, with the ultimate goal of ensuring the nation is the world’s foremost developer and user of the new technology.
Moreover, OSTP would, “acting through the National Science and Technology Council…establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative.” This body would “provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities, development of voluntary consensus standards and guidelines for research, development, testing, and adoption of ethically developed, safe, and trustworthy artificial intelligence systems, and education and training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative.” The committee would need to “develop a strategic plan for AI” within two years and update it every three years thereafter. It would also need to “propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget (OMB) that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative.” However, OMB would be under no obligation to heed this proposal absent pressure from AI stakeholders in Congress or AI champions in a given Administration. The Secretary of Energy would create a “National Artificial Intelligence Advisory Committee” to advise the President and the National Artificial Intelligence Initiative Office on a range of AI policy matters.
Federal agencies would be permitted to award funds to new Artificial Intelligence Research Institutes to pioneer research in any number of AI fields or considerations. The bill does not authorize a set amount of money for this program and instead leaves any funding decisions to the Appropriations Committees. The National Institute of Standards and Technology (NIST) must “support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems,” among other duties. NIST also “shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for the trustworthiness of artificial intelligence systems” and would “develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies.”
The National Science Foundation (NSF) would need to “fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible non-profit organizations (or consortia thereof).” The Department of Energy must “carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department.” This department would also be tasked with advancing “expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations.”
According to a fact sheet issued by the House Science, Space, and Technology Committee, [t]he legislation will:
- Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
- Create an advisory committee to better inform the Coordination Committee’s strategic plan, track the state of the science around artificial intelligence, and ensure the Initiative is meeting its goals.
- Create a network of AI institutes, coordinated through the National Science Foundation, that any Federal department or agency could fund to create partnerships between academia and the public and private sectors to accelerate AI research focused on an economic sector, social sector, or on a cross-cutting AI challenge.
- Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST) and require NIST to create a framework for managing risks associated with AI systems and best practices for sharing data to advance trustworthy AI systems.
- Support research at the National Science Foundation (NSF) across a wide variety of AI related research areas to both improve AI systems and use those systems to advance other areas of science. This section requires NSF to include an obligation for an ethics statement for all research proposals to ensure researchers are considering, and as appropriate, mitigating potential societal risks in carrying out their research.
- Support education and workforce development in AI and related fields, including through scholarships and traineeships at NSF.
- Support AI research and development efforts at the Department of Energy (DOE), utilize DOE computing infrastructure for AI challenges, promote technology transfer, data sharing, and coordination with other Federal agencies, and require an ethics statement for DOE funded research as required at NSF.
- Require studies to better understand workforce impacts and opportunities created by AI, and identify the computing resources necessary to ensure the United States remains competitive in AI.
As mentioned, the House’s FY 2021 NDAA has a number of other AI provisions, including:
- Section 217–Modification of Joint Artificial Intelligence Research, Development, and Transition Activities. This section would amend section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (Public Law 115-232) by assigning responsibility for the Joint Artificial Intelligence Center (JAIC) to the Deputy Secretary of Defense and ensuring data access and visibility for the JAIC.
- Section 224–Board of Directors for the Joint Artificial Intelligence Center. This section would direct the Secretary of Defense to create and resource a Board of Directors for the Joint Artificial Intelligence Center (JAIC), comprised of senior Department of Defense officials, as well as civilian directors not employed by the Department of Defense. The objective would be to have a standing body over the JAIC that can bring governmental and non-governmental experts together for the purpose of assisting the Department of Defense in correctly integrating and operationalizing artificial intelligence technologies.
- Section 242–Training for Human Resources Personnel in Artificial Intelligence and Related Topics. This section would direct the Secretary of Defense to develop and implement a program to provide human resources personnel with training in the fields of software development, data science, and artificial intelligence, as such fields relate to the duties of such personnel, not later than 1 year after the date of the enactment of this Act.
- Section 248–Acquisition of Ethically and Responsibly Developed Artificial Intelligence Technology. This section would direct the Secretary of Defense, acting through the Board of Directors of the Joint Artificial Intelligence Center, to conduct an assessment to determine whether the Department of Defense has the ability to ensure that any artificial intelligence technology acquired by the Department is ethically and responsibly developed.
- Section 805–Acquisition Authority of the Director of the Joint Artificial Intelligence Center. This section would vest the Director of the Joint Artificial Intelligence Center with responsibility for the development, acquisition, and sustainment of artificial intelligence technologies, services, and capabilities through fiscal year 2025.
The “FUTURE of Artificial Intelligence Act of 2020” (S.3771) was marked up and reported out of the Senate Commerce, Science, and Transportation Committee in July 2020. This bill would generally “require the Secretary of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence” to advise the department on a range of AI-related matters, including competitiveness, workforce, education, ethics training and development, the open sharing of data and research, international cooperation, legal and civil rights, government efficiency, and others. Additionally, a subcommittee would be empaneled to focus on the intersection of AI with law enforcement and national security issues. Within 18 months of enactment, this committee must submit its findings in a report to Congress and the Department of Commerce. A bill with the same title has been introduced in the House (H.R.7559) but has not been acted upon. This bill would “require the Director of the National Science Foundation, in consultation with the Director of the Office of Science and Technology Policy, to establish an advisory committee to advise the President on matters relating to the development of artificial intelligence.”
The same day S.3771 was marked up, the committee took up another AI bill: the “Advancing Artificial Intelligence Research Act of 2020” (S.3891) that would “require the Director of the National Institute of Standards and Technology (NIST) to advance the development of technical standards for artificial intelligence, to establish the National Program to Advance Artificial Intelligence Research, to promote research on artificial intelligence at the National Science Foundation” (NSF). $250 million a year would be authorized for NIST to distribute for AI research. NIST would also need to establish at least six AI research institutes. The NSF would “establish a pilot program to assess the feasibility and advisability of awarding grants for the conduct of research in rapidly evolving, high priority topics.”
In early November 2019, the Senate Homeland Security & Governmental Affairs Committee marked up the “AI in Government Act of 2019” (S.1363) that would establish an AI Center of Excellence in the General Services Administration (GSA) to:
- promote the efforts of the Federal Government in developing innovative uses of and acquiring artificial intelligence technologies by the Federal Government;
- improve cohesion and competency in the adoption and use of artificial intelligence within the Federal Government.
The bill stipulates that both of these goals would be pursued “for the purposes of benefitting the public and enhancing the productivity and efficiency of Federal Government operations.”
The Office of Management and Budget (OMB) must “issue a memorandum to the head of each agency that shall—
- inform the development of policies regarding Federal acquisition and use by agencies regarding technologies that are empowered or enabled by artificial intelligence;
- recommend approaches to remove barriers for use by agencies of artificial intelligence technologies in order to promote the innovative application of those technologies while protecting civil liberties, privacy, civil rights, and economic and national security; and
- identify best practices for identifying, assessing, and mitigating any discriminatory impact or bias on the basis of any classification protected under Federal nondiscrimination laws, or any unintended consequence of the use of artificial intelligence by the Federal Government.”
OMB is required to coordinate the drafting of this memo with the Office of Science and Technology Policy, GSA, other relevant agencies, and other key stakeholders.
This week, the House passed its version of S.1363, the “AI in Government Act of 2019” (H.R.2575), by voice vote, sending it over to the Senate.
In September 2019, the House sent another AI bill to the Senate, where it has not been taken up. The “Advancing Innovation to Assist Law Enforcement Act” (H.R.2613) would task the Financial Crimes Enforcement Network (FinCEN) with studying:
- the status of implementation and internal use of emerging technologies, including AI, digital identity technologies, blockchain technologies, and other innovative technologies within FinCEN;
- whether AI, digital identity technologies, blockchain technologies, and other innovative technologies can be further leveraged to make FinCEN’s data analysis more efficient and effective; and
- how FinCEN could better utilize AI, digital identity technologies, blockchain technologies, and other innovative technologies to more actively analyze and disseminate the information it collects and stores to provide investigative leads to Federal, State, Tribal, and local law enforcement, and other Federal agencies…and better support its ongoing investigations when referring a case to the Agencies.
All of these bills are being considered against a backdrop of significant Trump Administration action on AI, using existing authority to manage government operations. The Administration sees AI as playing a key role in ensuring and maintaining U.S. dominance in military affairs and in other realms.
Most recently, OMB and OSTP released their annual guidance to United States departments and agencies to direct their budget requests for FY 2022 with respect to research and development (R&D). OMB and OSTP explained:
For FY2022, the five R&D budgetary priorities in this memorandum ensure that America remains at the global forefront of science and technology (S&T) discovery and innovation. The Industries of the Future (IotF) – artificial intelligence (AI), quantum information sciences (QIS), advanced communication networks/5G, advanced manufacturing, and biotechnology – remain the Administration’s top R&D priority.
Specifically, regarding AI, OMB and OSTP stated:
Artificial Intelligence: Departments and agencies should prioritize research investments consistent with the Executive Order (EO) 13859 on Maintaining American Leadership in Artificial Intelligence and the 2019 update of the National Artificial Intelligence Research and Development Strategic Plan. Transformative basic research priorities include research on ethical issues of AI, data-efficient and high performance machine learning (ML) techniques, cognitive AI, secure and trustworthy AI, scalable and robust AI, integrated and interactive AI, and novel AI hardware. The current pandemic highlights the importance of use-inspired AI research for healthcare, including AI for discovery of therapeutics and vaccines; AI-based search of publications and patents for scientific insights; and AI for improved imaging, diagnosis, and data analysis. Beyond healthcare, use-inspired AI research for scientific and engineering discovery across many domains can help the Nation address future crises. AI infrastructure investments are prioritized, including national institutes and testbeds for AI development, testing, and evaluation; data and model resources for AI R&D; and open knowledge networks. Research is also prioritized for the development of AI measures, evaluation methodologies, and standards, including quantification of trustworthy AI in dimensions of accuracy, fairness, robustness, explainability, and transparency.
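The closing sentence, on quantifying trustworthy AI in dimensions such as accuracy and fairness, is the most concrete of these priorities. As a purely illustrative, minimal sketch of what such quantification can look like in practice (it is not drawn from the memorandum, NIST, or any federal standard, and the function names are hypothetical), the snippet below computes a model’s overall accuracy alongside a demographic parity gap between two groups:

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of predictions that match the labels."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0.0 means both groups receive positive predictions at the same
    rate; larger values flag a potential fairness concern worth investigating.
    """
    rate_group_0 = float(np.mean(y_pred[group == 0]))
    rate_group_1 = float(np.mean(y_pred[group == 1]))
    return abs(rate_group_0 - rate_group_1)

# Toy data: true labels, a model's predictions, and a binary group attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"accuracy: {accuracy(y_true, y_pred):.2f}")                              # 0.75
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")   # 0.00
```

Federal measurement work of the kind contemplated in H.R. 6216 and the NIST standards plan would presumably go well beyond two simple metrics, layering in the other dimensions the memorandum names, such as robustness, explainability, and transparency.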
In February 2020, OSTP published the “American Artificial Intelligence Initiative: Year One Annual Report,” in which the agency claimed “the Trump Administration has made critical progress in carrying out this national strategy and continues to make United States leadership in [artificial intelligence] (AI) a top priority.” OSTP asserted that “[s]ince the signing of the EO, the United States has made significant progress on achieving the objectives of this national strategy…[and] [t]his document provides both a summary of progress and a continued long-term vision for the American AI Initiative.” Some agencies, however, were working on AI-related initiatives independently of the EO, and the White House has folded those into the larger AI strategy it is pursuing. Much of the document recites already announced developments and steps.
However, OSTP references a national AI strategy that differs somewhat from the one laid out in EO 13859 and appears to represent the Administration’s evolved thinking on how to address AI across a number of dimensions, framed as “key policies and practices”:
1) Invest in AI research and development: The United States must promote Federal investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI. President Trump called for a 2-year doubling of non-defense AI R&D in his fiscal year (FY) 2021 budget proposal, and in 2019 the Administration updated its AI R&D strategic plan, developed the first progress report describing the impact of Federal R&D investments, and published the first-ever reporting of government-wide non-defense AI R&D spending.
2) Unleash AI resources: The United States must enhance access to high-quality Federal data, models, and computing resources to increase their value for AI R&D, while maintaining and extending safety, security, privacy, and confidentiality protections. The American AI Initiative called on Federal agencies to identify new opportunities to increase access to and use of Federal data and models. In 2019, the White House Office of Management and Budget established the Federal Data Strategy as a framework for operational principles and best practices around how Federal agencies use and manage data.
3) Remove barriers to AI innovation: The United States must reduce barriers to the safe development, testing, deployment, and adoption of AI technologies by providing guidance for the governance of AI consistent with our Nation’s values and by driving the development of appropriate AI technical standards. As part of the American AI Initiative, The White House published for comment the proposed United States AI Regulatory Principles, the first AI regulatory policy that advances innovation underpinned by American values and good regulatory practices. In addition, the National Institute of Standards and Technology (NIST) issued the first-ever strategy for Federal engagement in the development of AI technical standards.
4) Train an AI-ready workforce: The United States must empower current and future generations of American workers through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI. President Trump directed all Federal agencies to prioritize AI-related apprenticeship and job training programs and opportunities. In addition to its R&D focus, the National Science Foundation’s new National AI Research Institutes program will also contribute to workforce development, particularly of AI researchers.
5) Promote an international environment supportive of American AI innovation: The United States must engage internationally to promote a global environment that supports American AI research and innovation and opens markets for American AI industries while also protecting our technological advantage in AI. Last year, the United States led historic efforts at the Organisation for Economic Cooperation and Development (OECD) to develop the first international consensus agreements on fundamental principles for the stewardship of trustworthy AI. The United States also worked with its international partners in the G7 and G20 to adopt similar AI principles.
6) Embrace trustworthy AI for government services and missions: The United States must embrace technology such as artificial intelligence to improve the provision and efficiency of government services to the American people and ensure its application shows due respect for our Nation’s values, including privacy, civil rights, and civil liberties. The General Services Administration established an AI Center of Excellence to enable Federal agencies to determine best practices for incorporating AI into their organizations.
Also in February 2020, the Department of Defense (DOD) announced in a press release that it “officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October.” The DOD claimed “[t]he adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems.” The Pentagon added “[t]he DOD’s AI ethical principles will build on the U.S. military’s existing ethics framework based on the U.S. Constitution, Title 10 of the U.S. Code, Law of War, existing international treaties and longstanding norms and values.” The DOD stated “[t]he DOD Joint Artificial Intelligence Center (JAIC) will be the focal point for coordinating implementation of AI ethical principles for the department.”
The DOD explained that “[t]hese principles will apply to both combat and non-combat functions and assist the U.S. military in upholding legal, ethical and policy commitments in the field of AI…[and] encompass five major areas:
- Responsible. DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
- Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”
It bears note that the DOD’s recitation of these five AI ethics principles differs from the version drafted by the Defense Innovation Board. Notably, under “Equitable,” the Defense Innovation Board also included that the “DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons” (emphasis added). Likewise, under “Governable,” the Board recommended that “DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior” (emphasis added).
Additionally, the DOD has declined, at least at this time, to adopt the recommendations made by the Board regarding the use of AI:
1. Formalize these principles via official DOD channels.
2. Establish a DOD-wide AI Steering Committee.
3. Cultivate and grow the field of AI engineering.
4. Enhance DOD training and workforce programs.
5. Invest in research on novel security aspects of AI.
6. Invest in research to bolster reproducibility.
7. Define reliability benchmarks.
8. Strengthen AI test and evaluation techniques.
9. Develop a risk management methodology.
10. Ensure proper implementation of AI ethics principles.
11. Expand research into understanding how to implement AI ethics principles.
12. Convene an annual conference on AI safety, security, and robustness.
In January 2020, OMB and OSTP requested comments on a draft “Guidance for Regulation of Artificial Intelligence Applications” that would be issued to federal agencies as directed by EO 13859. OMB listed the 10 AI principles agencies must weigh in regulating AI in the private sector, some of which overlap with the DOD’s ethics principles:
- Public trust in AI
- Public participation
- Scientific integrity and information quality
- Risk assessment and management
- Benefits and costs
- Flexibility
- Fairness and non-discrimination
- Disclosure and transparency
- Safety and security
- Interagency coordination
OSTP explained how the ten AI principles should be used:
Consistent with law, agencies should take into consideration the following principles when formulating regulatory and non-regulatory approaches to the design, development, deployment, and operation of AI applications, both general and sector-specific. These principles, many of which are interrelated, reflect the goals and principles in Executive Order 13859. Agencies should calibrate approaches concerning these principles and consider case-specific factors to optimize net benefits. Given that many AI applications do not necessarily raise novel issues, these considerations also reflect longstanding Federal regulatory principles and practices that are relevant to promoting the innovative use of AI. Promoting innovation and growth of AI is a high priority of the United States government. Fostering innovation and growth through forbearing from new regulations may be appropriate. Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.
In November 2019, the National Security Commission on Artificial Intelligence (NSCAI) released its interim report and explained that “[b]etween now and the publication of our final report, the Commission will pursue answers to hard problems, develop concrete recommendations on “methods and means” to integrate AI into national security missions, and make itself available to Congress and the executive branch to inform evidence-based decisions about resources, policy, and strategy.” The Commission had released its initial report in July 2019, which laid out its work plan.
In July 2020, NSCAI published its Second Quarter Recommendations, a compilation of the policy proposals it made during the quarter. NSCAI said it is still on track to release its final recommendations in March 2021. The NSCAI asserted:
The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.
In August 2019, NIST published “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” as required by EO 13859. The EO directed the Secretary of Commerce, through NIST, to issue “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies” that must include:
(A) Federal priority needs for standardization of AI systems development and deployment;
(B) identification of standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles; and
(C) opportunities for and challenges to United States leadership in standardization related to AI technologies.
NIST’s AI plan meets those requirements in the broadest of strokes and will require much from the Administration and agencies to be realized, including further steps required by the EO.
Finally, all these Trump Administration efforts are playing out at the same time as global processes. In late May 2019, the Organisation for Economic Cooperation and Development (OECD) adopted the Recommendation of the Council on Artificial Intelligence, and non-OECD members Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania also pledged to adhere to it. Of course, OECD recommendations are not legally binding on any nation, but standards articulated by the OECD are highly respected and sometimes form the basis for nations’ approaches to an issue, as the 1980 OECD recommendations on privacy did. Moreover, the National Telecommunications and Information Administration (NTIA) signaled the Trump Administration’s endorsement of the OECD effort.

In February 2020, the European Commission (EC) released its latest policy pronouncement on artificial intelligence, “On Artificial Intelligence – A European approach to excellence and trust,” in which the Commission articulates its support for “a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology.” The EC stated that “[t]he purpose of this White Paper is to set out policy options on how to achieve these objectives…[but] does not address the development and use of AI for military purposes.”