American and Canadian Agencies Take Differing Approaches On Regulating AI

The outgoing Trump Administration tells agencies to lightly regulate AI; Canada’s privacy regulator calls for strong safeguards and limits on use of AI, including legislative changes.

The Office of Management and Budget (OMB) has issued guidance for federal agencies on how they are to regulate artificial intelligence (AI) developed and deployed outside the federal government. This guidance seeks to align policy across agencies in how they use their existing powers to regulate AI according to the Trump Administration’s policy goals. Notably, this memorandum is binding on all federal agencies (including those with national defense responsibilities) and even independent agencies such as the Federal Trade Commission (FTC) and Federal Communications Commission (FCC). OMB worked with other stakeholder agencies on this guidance as directed by Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence,” and issued a draft of the memorandum 11 months ago for comment.

In “Guidance for Regulation of Artificial Intelligence Applications,” OMB “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the Federal government.” OMB is directing agencies to take a light touch in regulating AI under their current statutory authorities, carefully weighing costs and benefits and keeping in mind the larger policy backdrop of ensuring United States (U.S.) dominance in AI in light of competition from the People’s Republic of China (PRC), the European Union, Japan, the United Kingdom, and others. OMB is requiring reports from agencies on how they will, and will not, use their authority to meet the articulated goals and requirements of this memorandum. However, given that the due date for these reports falls well into the next Administration, it is very likely the Biden OMB at least pauses this initiative and probably alters it to reflect new policy goals. It is possible that goals such as protecting privacy, combating algorithmic bias, and securing data will be made more prominent in U.S. AI regulation.

As a threshold matter, it bears noting that this memorandum uses a definition of AI that is narrower than how AI is popularly discussed. OMB explained that “[w]hile this Memorandum uses the definition of AI recently codified in statute, it focuses on “narrow” (also known as “weak”) AI, which goes beyond advanced conventional computing to learn and perform domain-specific or specialized tasks by extracting information from data sets, or other structured or unstructured sources of information.” Consequently, “[m]ore theoretical applications of “strong” or “general” AI—AI that may exhibit sentience or consciousness, can be applied to a wide variety of cross-domain activities and perform at the level of, or better than a human agent, or has the capacity to self-improve its general cognitive abilities similar to or beyond human capabilities—are beyond the scope of this Memorandum.”

The Trump OMB tells agencies to minimize regulation of AI and to take into account how any regulatory action may affect growth and innovation in the field before acting. OMB directs agencies to favor “narrowly tailored and evidence-based regulations that address specific and identifiable risks” that foster an environment where U.S. AI can flourish. Consequently, OMB bars “a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.” Of course, what constitutes “evidence-based regulation” and an “impossibly high standard” are in the eye of the beholder, so this memorandum could be read by the next OMB in ways the outgoing OMB does not agree with. Finally, OMB is pushing agencies to factor potential benefits into any risk calculation, presumably allowing for greater risk of bad outcomes if the potential reward seems high. This suggests a more hands-off approach to regulating AI.

OMB listed the 10 AI principles agencies must heed in regulating AI in the private sector:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination

OMB also tells agencies to look at existing federal or state regulation that may prove inconsistent with or duplicative of this federal policy, noting agencies “may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”

OMB encouraged agencies to use “non-regulatory approaches” in the event existing regulations are sufficient or the benefits of regulation do not justify the costs. OMB counseled “[i]n these cases, the agency may consider either not taking any action or, instead, identifying non-regulatory approaches that may be appropriate to address the risk posed by certain AI applications” and provided examples of “non-regulatory approaches:”

  • Sector-Specific Policy Guidance or Frameworks
  • Pilot Programs and Experiments
  • Voluntary Consensus Standards
  • Voluntary Frameworks

As noted, the EO under which OMB is acting requires “that implementing agencies with regulatory authorities review their authorities relevant to AI applications and submit plans to OMB on achieving consistency with this Memorandum.” OMB directs:

The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities. For these collections, agencies should describe any statutory restrictions on the collection or sharing of information (e.g., confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information). The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency’s regulatory authorities. OMB also requests agencies to list and describe any planned or considered regulatory actions on AI. Appendix B provides a template for agency plans.

Earlier this year, the White House’s Office of Science and Technology Policy (OSTP) released a draft of this OMB memorandum, “Guidance for Regulation of Artificial Intelligence Applications,” to be issued to federal agencies as directed by EO 13859. As noted, the memorandum is not aimed at how federal agencies themselves use and deploy AI; rather, it “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory oversight of AI applications developed and deployed outside of the Federal government.” In short, with the memorandum now issued, federal agencies must adhere to the ten principles laid out in the document in regulating AI as part of their existing and future jurisdiction over the private sector. Not surprisingly, the Administration favors a light touch approach intended to foster the growth of AI.

EO 13859 sets the AI policy of the government “to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.” The EO directed OMB and OSTP, along with other Administration offices, to craft this memorandum and circulate a draft for comment. OMB was to “issue a memorandum to the heads of all agencies that shall:

(i) inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy, and American values; and
(ii) consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.

A key regulator in a neighboring country also weighed in on the proper regulation of AI from the vantage of privacy. The Office of the Privacy Commissioner of Canada (OPC) “released key recommendations…[that] are the result of a public consultation launched earlier this year.” OPC explained that it “launched a public consultation on our proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA).” OPC’s “working assumption was that legislative changes to PIPEDA are required to help reap the benefits of AI while upholding individuals’ fundamental right to privacy.” It is to be expected that a privacy regulator will see matters differently than a Republican White House, and so it is here.

In an introductory paragraph, the OPC spelled out the problems and dangers created by AI:

uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals, including whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are suspected of suspicious or unlawful behaviour. Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination. AI systems can also be used to influence, micro-target, and “nudge” individuals’ behaviour without their knowledge. Such practices can lead to troubling effects for society as a whole, particularly when used to influence democratic processes.

The OPC is focused on the potential for AI to be used more effectively than current data processing to predict, uncover, subvert, and influence the behavior of people in ways not readily apparent. There is also concern for another aspect of AI and other data processing that has long troubled privacy and human rights advocates: the potential for discriminatory treatment.

OPC asserted “an appropriate law for AI would:

  • Allow personal information to be used for new purposes towards responsible AI innovation and for societal benefits;
  • Authorize these uses within a rights based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • Create provisions specific to automated decision-making to ensure transparency, accuracy and fairness; and
  • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures through which the regulator would ensure compliance with the law.

However, the OPC does not entirely oppose the use of AI and is proposing exceptions to the general requirement under Canadian federal law that organizations obtain meaningful consent before processing personal information. The OPC is “recommending a series of new exceptions to consent that would allow the benefits of AI to be better achieved, but within a rights based framework.” OPC stated “[t]he intent is to allow for responsible, socially beneficial innovation, while ensuring individual rights are respected…[and] [w]e recommend exceptions to consent for the use of personal information for research and statistical purposes, compatible purposes, and legitimate commercial interests purposes.” At the same time, the OPC is proposing a number of safeguards:

The proposed exceptions to consent must be accompanied by a number of safeguards to ensure their appropriate use. This includes a requirement to complete a privacy impact assessment (PIA), and a balancing test to ensure the protection of fundamental rights. The use of de-identified information would be required in all cases for the research and statistical purposes exception, and to the extent possible for the legitimate commercial interests exception.

Further, the OPC made the case that enshrining strong privacy rights in Canadian law would not obstruct the development of AI but would, in fact, accelerate it:

  • A rights-based regime would not stand in the way of responsible innovation. In fact, it would help support responsible innovation and foster trust in the marketplace, giving individuals the confidence to fully participate in the digital age. In our 2018-2019 Annual Report to Parliament, our Office outlined a blueprint for what a rights-based approach to protecting privacy should entail. This rights-based approach runs through all of the recommendations in this paper.
  • While we propose that the law should allow for uses of AI for a number of new purposes as outlined, we have seen examples of unfair, discriminatory, and biased practices being facilitated by AI which are far removed from what is socially beneficial. Given the risks associated with AI, a rights based framework would help to ensure that it is used in a manner that upholds rights. Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.
  • Another important measure related to this human rights-based approach would be for the definition of personal information in PIPEDA to be amended to clarify that it includes inferences drawn about an individual. This is important, particularly in the age of AI, where individuals’ personal information can be used by organizations to create profiles and make predictions intended to influence their behaviour. Capturing inferred information clearly within the law is key for protecting human rights because inferences can often be drawn about an individual without their knowledge, and can be used to make decisions about them.

The OPC also called for a framework under which people could review and contest automated decisions:

we recommend that individuals be provided with two explicit rights in relation to automated decision-making. Specifically, they should have a right to a meaningful explanation of, and a right to contest, automated decision-making under PIPEDA. These rights would be exercised by individuals upon request to an organization. Organizations should be required to inform individuals of these rights through enhanced transparency practices to ensure individual awareness of the specific use of automated decision-making, as well as of their associated rights. This could include requiring notice to be provided separate from other legal terms.

The OPC also counseled that PIPEDA’s enforcement mechanism and incentives be changed:

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. In addition to the measures detailed below, this should be underpinned by a record keeping requirement similar to that in Article 30 of the GDPR. This record keeping requirement would be necessary to facilitate the OPC’s ability to conduct proactive inspections under PIPEDA, and for individuals to exercise their rights under the Act.

The OPC called for the following to ensure “demonstrable accountability:”

  • Integrating privacy and human rights into the design of AI algorithms and models is a powerful way to prevent negative downstream impacts on individuals. It is also consistent with modern legislation, such as the GDPR and Bill 64. PIPEDA should require organizations to design for privacy and human rights by requiring organizations to implement “appropriate technical and organizational measures” that implement PIPEDA requirements prior to and during all phases of collection and processing.
  • In light of the new proposed rights to explanation and contestation, organizations should be required to log and trace the collection and use of personal information in order to adequately fulfill these rights for the complex processing involved in AI. Tracing supports demonstrable accountability as it provides documentation that the regulator could consult through the course of an inspection or investigation, to determine the personal information fed into the AI system, as well as broader compliance.
  • Demonstrable accountability must include a model of assured accountability pursuant to which the regulator has the ability to proactively inspect an organization’s privacy compliance. In today’s world where business models are often opaque and information flows are increasingly complex, individuals are unlikely to file a complaint when they are unaware of a practice that might cause them harm. This challenge will only become more pronounced as information flows gain complexity with the continued development of AI.
  • The significant risks posed to privacy and human rights by AI systems require a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant. To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Tetyana Kovyrina from Pexels
