Preview of Senate Democratic Chairs

It’s not clear who will end up where, but new Senate chairs will change the focus and agenda of committees and debates over the next two years.

With the victories of Senators-elect Raphael Warnock (D-GA) and Jon Ossoff (D-GA), control of the United States Senate will tip to the Democrats once Vice President-elect Kamala Harris (D) is sworn in and can break the 50-50 tie in the chamber in favor of the Democrats. With the shift in control, new chairs will take over committees key to setting the agenda in the Senate over the next two years. However, given the filibuster, and the fact that Senate Republicans will exert maximum leverage through its continued use, Democrats will be hamstrung and forced to work with Republicans on matters such as federal privacy legislation, artificial intelligence (AI), the Internet of Things (IoT), cybersecurity, data flows, surveillance, etc., just as Republicans have had to work with Democrats over the six years they controlled the chamber. Having said that, Democrats will be in a stronger position than they have been and will have the power to set the agenda in committee hearings, being empowered to call the lion’s share of witnesses and to control the floor agenda. What’s more, Democrats will be poised to confirm President-elect Joe Biden’s nominees at agencies like the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), the Department of Justice (DOJ), and others, giving the Biden Administration a free hand in many areas of technology policy.

All of that being said, this is not meant to be an exhaustive look at all the committees of jurisdiction and possible chairs. Rather, it seeks to survey likely chairs on selected committees and some of their priorities for the next two years. Subcommittee chairs will also be important, but until the cards get shuffled among the chairs, it will not be possible to see where they land at the subcommittee level.

When considering the possible Democratic chairs of committees, one must keep in mind it is often a matter of musical chairs with the most senior members getting first choice. And so, with Senator Patrick Leahy (D-VT) as the senior-most Democratic Senator, he may well choose to leave the Appropriations Committee and move back to assume the gavel of the Judiciary Committee. Leahy has long been a stakeholder on antitrust, data security, privacy, and surveillance legislation and would be in a position to influence what bills on those and other matters before the Senate look like. If Leahy does not move to the chair on Judiciary, he may still be entitled to chair a subcommittee and exert influence.

If Leahy stays put, then current Senate Minority Whip Dick Durbin (D-IL) would be poised to leapfrog Senator Dianne Feinstein (D-CA) to chair Judiciary after Feinstein was persuaded to step aside on account of her lackluster performance in a number of high-profile hearings in 2020. Durbin has also been active on privacy, data security, and surveillance issues. The Judiciary Committee will be central to a number of technology policies, including Foreign Intelligence Surveillance Act (FISA) reauthorization, privacy legislation, Section 230 reform, antitrust, and others. On the Republican side of the dais, Senator Lindsey Graham (R-SC) is leaving the top post because of term limit restrictions imposed by Republicans, and Senator Charles Grassley (R-IA) is set to replace him. How this changes the 47 USC 230 (Section 230) debate is not immediately clear. And yet, Grassley and three colleagues recently urged the Trump Administration in a letter to omit language in a trade agreement with the United Kingdom (UK) that mirrors the liability protection in Section 230. Senators Rob Portman (R-OH), Mark R. Warner (D-VA), Richard Blumenthal (D-CT), and Grassley argued to U.S. Trade Representative Ambassador Robert Lighthizer that a “safe harbor” like the one provided to technology companies for hosting or moderating third party content is outdated, not needed in a free trade agreement, contrary to the will of both the Congress and the UK Parliament, and likely to be changed legislatively in the near future. It is likely, however, that Grassley will fall in with other Republicans propagating the narrative that social media platforms are unfairly biased against conservatives, particularly in light of the recent purge of President Donald Trump for his many, repeated violations of platform policies.

The Senate Judiciary Committee will be central in any policy discussions of antitrust and competition in the technology realm. But it bears noting that the filibuster (and the very low chances Senate Democrats would “go nuclear” and remove all vestiges of the functional supermajority requirement to pass legislation) will give Republicans leverage to block some of the more ambitious reforms Democrats might like to enact (e.g., the House Judiciary Committee’s October 2020 final report that calls for nothing less than a complete remaking of United States (U.S.) antitrust policy and law; see here for more analysis.)

It seems Senator Sherrod Brown (D-OH) will be the next chair of the Senate Banking, Housing, and Urban Affairs Committee, which has jurisdiction over cybersecurity, data security, privacy, and other issues in the financial services sector, making it a player on any legislation designed to encompass the whole of the United States economy. Having said that, it may again be the case that sponsors of, say, privacy legislation decide to cut the Gordian knot of jurisdictional turf battles by cutting out certain committees. For example, many of the privacy bills had provisions making clear they would deem financial services entities in compliance with the Financial Services Modernization Act of 1999 (P.L. 106-102) (aka Gramm-Leach-Bliley) to be in compliance with the new privacy regime. I suppose these provisions may have been included on the basis of the very high privacy and data security standards Gramm-Leach-Bliley has brought about (e.g., the Equifax hack), or sponsors of federal privacy legislation made the strategic calculation to circumvent the Senate Banking Committee as much as they could. Nonetheless, the committee sought to insert itself into the policymaking process on privacy when Brown and outgoing Chair Mike Crapo (R-ID) requested “feedback” in February 2019 “from interested stakeholders on the collection, use and protection of sensitive information by financial regulators and private companies.” Additionally, Brown released what may be the most expansive privacy bill from the perspective of privacy and civil liberties advocates, the “Data Accountability and Transparency Act of 2020,” in June 2020 (see here for my analysis.) Therefore, Brown may continue to push for a role in federal privacy legislation with a gavel in his hands.

In a similar vein, Senator Patty Murray (D-WA) will likely take over the Senate Health, Education, Labor, and Pensions (HELP) Committee, which has jurisdiction over health information privacy and data security through the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act). Again, as with the Senate Banking Committee and Gramm-Leach-Bliley, most of the privacy bills exempt HIPAA-compliant entities. And yet, even if her committee is cut out of a direct role in privacy legislation, Murray will still likely exert influence through oversight of, and possible legislation changing, HIPAA regulations and the Department of Health and Human Services’ (HHS) enforcement and rewriting of these standards for most of the healthcare industry. For example, HHS is rushing a rewrite of the HIPAA regulations at the tail end of the Trump Administration, and Murray could be in a position to inform how the Biden Administration and Secretary of Health and Human Services-designate Xavier Becerra handle this rulemaking. Additionally, Murray may push the Office for Civil Rights (OCR), the arm of HHS that writes and enforces these regulations, to prioritize matters differently.

Senator Maria Cantwell (D-WA) appears to be the next chair of the Senate Commerce, Science, and Transportation Committee, which has arguably the largest technology portfolio in the Senate. It is the primary committee of jurisdiction for the FCC, FTC, National Telecommunications and Information Administration (NTIA), the National Institute of Standards and Technology (NIST), and the Department of Commerce. Cantwell may exert influence on which people are nominated to head and staff those agencies and others. Her committee is also the primary committee of jurisdiction for domestic and international privacy and data protection matters. And so, federal privacy legislation will likely be drafted by this committee, and legislative changes needed for the U.S. to enter into a new personal data sharing agreement with the European Union (EU) would also likely involve her and her committee.

Cantwell and likely next Ranking Member Roger Wicker (R-MS) agree on many elements of federal privacy law but were at odds last year on federal preemption and whether people could sue companies for privacy violations. Between them, they circulated three privacy bills. In September 2020, Wicker and three Republican colleagues introduced the “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) (see here for more analysis). Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (see here for analysis), in November 2019, shortly after Cantwell and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act” (COPRA) (S.2968) (see here for more analysis).

Cantwell could also take a leading role on Section 230, but her focus, of late, seems to be on how technology companies are wreaking havoc on traditional media. She released a report, which she mentioned during her opening statement at the 23 September hearing aimed at trying to revive data privacy legislation. She and her staff investigated the decline and financial troubles of local media outlets, which are facing a cumulative loss in advertising revenue of up to 70% since 2000. And since advertising revenue has long been the lifeblood of print journalism, this decline has devastated local media, with many outlets shutting their doors or radically cutting their staff. This trend has been exacerbated by consolidation in the industry, often in concert with private equity or hedge funds looking to wring the last dollars of value from bargain-basement-priced newspapers. Cantwell also claimed that the overwhelming online advertising dominance of Google and Facebook has further diminished advertising revenue and other possible sources of funding through a variety of means. She intimated that much of this conduct may be illegal under U.S. law, and the FTC may well be able to use its Section 5 powers against unfair and deceptive acts and its antitrust authority to take action (see here for more analysis and context.) In this vein, Cantwell will want her committee to play a role in any antitrust policy changes, likely knowing massive changes in U.S. law are not possible in a split Senate with entrenched party positions and discipline.

Senator Jack Reed (D-RI) will take over the Senate Armed Services Committee and its portfolio over national security technology policy, which includes the cybersecurity, data protection, and supply chains of national security agencies and their contractors, AI, offensive and defensive U.S. cyber operations, and other realms. Many of the changes Reed and his committee seek to make will come through the annual National Defense Authorization Act (NDAA) (see here and here for the many technology provisions in the FY 2021 NDAA.) Reed may also prod the Department of Defense (DOD) to implement or enforce the Cybersecurity Maturity Model Certification (CMMC) Framework differently than envisioned and designed by the Trump Administration. In December 2020, a new rule took effect designed to drive better cybersecurity among U.S. defense contractors. This rule brings together two different lines of effort to require the Defense Industrial Base (DIB) to employ better cybersecurity given the risks its members face in holding and using classified information, Federal Contract Information (FCI), and Controlled Unclassified Information (CUI). The Executive Branch has long wrestled with how best to push contractors to secure their systems, and Congress and the White House have opted to use federal contract requirements under which contractors must certify compliance. However, the most recent initiative, the CMMC Framework, will require contractors to be certified by third party assessors. And yet, it is not clear the DOD has wrestled with the often-misaligned incentives present in third party certification schemes.

Reed’s committee will undoubtedly delve deep into the recent SolarWinds hack and implement policy changes to avoid a recurrence. Doing so may lead the Senate Armed Services Committee back to reconsidering the Cyberspace Solarium Commission’s (CSC) March 2020 final report and follow-up white papers, especially the views embodied in “Building a Trusted ICT Supply Chain.”

Senator Mark Warner (D-VA) will likely take over the Senate Intelligence Committee. Warner has long been a stakeholder on a number of technology issues and would be able to exert influence on the national security components of such issues. He and his committee will almost certainly play a role in the Congressional oversight of and response to the SolarWinds hack. Likewise, his committee shares jurisdiction over FISA with the Senate Judiciary Committee and over national security technology policy with the Armed Services Committee.

Senator Amy Klobuchar (D-MN) would be the Senate Democratic point person on election security from her perch at the Senate Rules and Administration Committee, which may enable her to push more forcefully for the legislative changes she has long advocated. In May 2019, Klobuchar and other Senate Democrats introduced the “Election Security Act” (S. 1540), the Senate version of a stand-alone measure introduced in the House, which was carved out of the larger “For the People Act” (H.R. 1) that the House had passed.

In August 2018, the Senate Rules and Administration Committee postponed indefinitely a markup on a compromise bill to provide states additional assistance in securing elections from interference, the “Secure Elections Act” (S.2593). Reportedly, there was concern among state officials that a provision requiring audits of election results would be in effect an unfunded mandate, even though this provision was softened at the insistence of Senate Republican leadership. Moreover, a Trump White House spokesperson indicated in a statement that the Administration opposed the bill, which may have posed an additional obstacle to Committee action. However, even if the Senate had passed its bill, it was unlikely that the Republican-controlled House would have considered companion legislation (H.R. 6663).

Senator Gary Peters (D-MI) may be the next chair of the Senate Homeland Security and Governmental Affairs Committee, and if so, he will continue to face the rock on which many a bark of cybersecurity legislation has been dashed: Senator Ron Johnson (R-WI). So significant has Johnson’s opposition been to bipartisan cybersecurity legislation from the House that some House Republican stakeholders have said as much in media accounts without bothering to hide behind anonymity. And so, whatever Peters’ ambitions may be to shore up the cybersecurity of the federal government as his committee plays a role in investigating and responding to the Russian hack of SolarWinds and many federal agencies, he will be limited by whatever Johnson and other Republicans will allow to move through the committee and through the Senate. Of course, Peters’ purview would include the Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA) and its remit to police the cybersecurity practices of the federal government. Peters would also have in his portfolio the information technology (IT) practices of the federal government, some $90 billion in spending annually across all agencies.

Finally, whether it be Leahy or Durbin who ends up chairing the Senate Appropriations Committee, the post allows for immense influence over funding and programmatic changes in all federal programs through Congress’s power of the purse.

Further Reading, Other Developments, and Coming Events (11 January 2021)

Further Reading

  • “Why the Russian hack is so significant, and why it’s close to a worst-case scenario” By Kevin Collier — NBC News. This article quotes experts who paint a very ugly picture for the United States (U.S.) in trying to recover from the Russian Federation’s hack. Firstly, the Russians are very good at what they do and likely built multiple backdoors in systems they would want to ensure they have access to after using SolarWinds’ update system to gain initial entry. Secondly, broadly speaking, at present, U.S. agencies and companies have two very unpalatable options: spend months hunting through their systems for any such backdoors or other issues or rebuild their systems from scratch. The ramifications of this hack will continue to be felt well into the Biden Administration.
  • “The storming of Capitol Hill was organized on social media.” By Sheera Frenkel — The New York Times. As the repercussions of the riot and apparently attempted insurrection continue to be felt, one aspect that has received attention and will continue to receive attention is the role social media platforms played. Platforms used predominantly by right wing and extremist groups like Gab and Parler were used extensively to plan and execute the attack. This fact and the ongoing content moderation issues at larger platforms will surely inform the Section 230 and privacy legislation debates expected to occur this year and into the future.
  • “Comcast data cap blasted by lawmakers as it expands into 12 more states” By Jon Brodkin — Ars Technica. Comcast has extended to other states its 1.2TB cap on household broadband usage, and lawmakers in Massachusetts have written the company, claiming this will hurt low-income families working and schooling children at home. Comcast claims this affects only a small class of subscribers, so-called “super users.” In retrospect, such a move always seemed inevitable, as data is now the most valuable commodity.
  • “Finnish lawmakers’ emails hacked in suspected espionage incident” By Shannon Vavra — CyberScoop. Another legislature of a democratic nation has been hacked, and given the recent hacks of Norway’s Parliament and Germany’s Bundestag by the Russians, it may well turn out they were behind this hack that “obtain[ed] information either to benefit a foreign state or to harm Finland” according to Finland’s National Bureau of Investigation.
  • “Facebook Forced Its Employees To Stop Discussing Trump’s Coup Attempt” By Ryan Mac — BuzzFeed News. Reportedly, Facebook shut down internal dialogue about the misgivings voiced by employees about its response to the lies in President Donald Trump’s video and the platform’s role in creating the conditions that caused Trump supporters to storm the United States (U.S.) Capitol. Internally and externally, Facebook equivocated on whether it would go so far as Twitter in taking down Trump’s video and content.
  • “WhatsApp gives users an ultimatum: Share data with Facebook or stop using the app” By Dan Goodin — Ars Technica. Very likely in response to coming changes to the Apple iOS that will allow for greater control of privacy, Facebook is giving WhatsApp users a choice: accept new terms of service that allow personal data to be shared with and used by Facebook or have your account permanently deleted.
  • “Insecure wheels: Police turn to car data to destroy suspects’ alibis” By Olivia Solon — NBC News. Like any other computerized, connected device, cars are increasingly a source law enforcement (and likely intelligence agencies) are using to investigate crimes. If you sync your phone via USB or Bluetooth, most modern cars will access your phone and store all sorts of personal data that can later be accessed. But other systems in cars can tell investigators where the car was, how heavy it was (i.e., how many people were in it), when doors opened, etc. And there are no specific federal or state laws in the United States to mandate protection of these data.

Other Developments

  • The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Office of the Director of National Intelligence (ODNI), and the National Security Agency (NSA) issued a joint statement, finally naming the Russian Federation as the likely perpetrator of the massive SolarWinds hack. However, the agencies qualified the language, claiming:
    • This work indicates that an Advanced Persistent Threat (APT) actor, likely Russian in origin, is responsible for most or all of the recently discovered, ongoing cyber compromises of both government and non-governmental networks. At this time, we believe this was, and continues to be, an intelligence gathering effort.
      • Why the language is not more definitive is not clear. Perhaps the agencies are merely exercising caution about who is blamed for the attack. Perhaps the agencies do not want to anger a White House and President averse to reports of Russian hacking for fear it will be associated with the hacking during the 2016 election that aided the Trump Campaign.
      • However, it is noteworthy the agencies are stating their belief the hacking was related to “intelligence gathering,” suggesting the purpose of the incursions was not to destroy data or launch an attack. Presumably, such an assertion is meant to allay concerns that the Russian Federation intends to attack the United States (U.S.) as it did in Ukraine and Georgia in the last decade.
    • The Cyber Unified Coordination Group (UCG) convened per Presidential Policy Directive (PPD) 41 (which technically is the FBI, CISA, and the ODNI but not the NSA) asserted its belief that
      • of the approximately 18,000 affected public and private sector customers of SolarWinds’ Orion products, a much smaller number has been compromised by follow-on activity on their systems. We have so far identified fewer than 10 U.S. government agencies that fall into this category, and are working to identify the nongovernment entities who also may be impacted.
      • These findings are, of course, preliminary, and there may be incentives for the agencies to be less than forthcoming about what they know of the scope and impact of the hacking.
  • Federal Communications Commission (FCC) Chair Ajit Pai has said he will not proceed with a rulemaking to curtail 47 USC 230 (Section 230) in response to a petition the National Telecommunications and Information Administration (NTIA) filed at the direction of President Donald Trump. Pai remarked “I do not intend to move forward with the notice of proposed rule-making at the FCC,” explaining this was “in part, because given the results of the election, there’s simply not sufficient time to complete the administrative steps necessary in order to resolve the rule-making.” Pai cautioned Congress and the Biden Administration “to study and deliberate on [reforming Section 230] very seriously,” especially “the immunity provision.”
    • In October, Pai had announced the FCC would proceed with a notice and comment rulemaking based on the NTIA’s petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship” was issued in late May after Twitter factchecked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic.
  • A House subcommittee released its most recent assessment of federal cybersecurity and information technology (IT). The House Oversight Committee’s Government Operations Subcommittee released its 11th biannual scorecard under the “Federal Information Technology Acquisition Reform Act” (FITARA). The subcommittee stressed this “marks the first time in the Scorecard’s history that all 24 agencies included in the law have received A’s in a single category” and noted it is “the first time that a category will be retired.” Even though this assessment is labeled the FITARA Scorecard, it is actually a compilation of different metrics borne of other pieces of legislation and executive branch programs.
    • Additionally, 19 of the 24 agencies reviewed received A’s on the Data Center Optimization Initiative (DCOI).
    • However, four agencies received F’s on Agency Chief Information Officer (CIO) authority enhancements, measures aiming to fulfill one of the main purposes of FITARA: empowering agency CIOs as a means of better controlling and managing IT acquisition and usage. It has been an ongoing struggle to get agency compliance with the letter and spirit of federal law and directives to do just this.
    • Five agencies got F’s and two agencies got D’s for failing to hit the schedule for transitioning off of “the expiring Networx, Washington Interagency Telecommunications System (WITS) 3, and Regional Local Service Agreement (LSA) contracts” to the General Services Administration’s (GSA) $50 billion Enterprise Infrastructure Solutions (EIS) contract. The GSA explained this program in a recent letter:
      • After March 31, 2020, GSA will disconnect agencies, in phases, to meet the September 30, 2022 milestone for 100% completion of transition. The first phase will include agencies that have been “non-responsive” to transition outreach from GSA. Future phases will be based on each agency’s status at that time and the individual circumstances impacting that agency’s transition progress, such as protests or pending contract modifications. The Agency Transition Sponsor will receive a notification before any services are disconnected, and there will be an opportunity for appeal.
  • A bipartisan quartet of United States Senators urged the Trump Administration in a letter to omit language in a trade agreement with the United Kingdom (UK) that mirrors the liability protection in 47 U.S.C. 230 (Section 230). Senators Rob Portman (R-OH), Mark R. Warner (D-VA), Richard Blumenthal (D-CT), and Charles E. Grassley (R-IA) argued to U.S. Trade Representative Ambassador Robert Lighthizer that a “safe harbor” like the one provided to technology companies for hosting or moderating third party content is outdated, not needed in a free trade agreement, contrary to the will of both the Congress and UK Parliament, and likely to be changed legislatively in the near future. However, left unsaid in the letter, is the fact that Democrats and Republicans generally do not agree on how precisely to change Section 230. There may be consensus that change is needed, but what that change looks like is still a matter much in dispute.
    • Stakeholders in Congress were upset that the Trump Administration included language modeled on Section 230 in the United States-Mexico-Canada Agreement (USMCA), the modification of the North American Free Trade Agreement (NAFTA). For example, House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and then Ranking Member Greg Walden (R-OR) wrote Lighthizer, calling it “inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing” in Congress.
  • The Trump White House issued a new United States (U.S.) government strategy for advanced computing to replace the 2019 strategy. The “PIONEERING THE FUTURE ADVANCED COMPUTING ECOSYSTEM: A STRATEGIC PLAN” “envisions a future advanced computing ecosystem that provides the foundation for continuing American leadership in science and engineering, economic competitiveness, and national security.” The Administration asserted:
    • It develops a whole-of-nation approach based on input from government, academia, nonprofits, and industry sectors, and builds on the objectives and recommendations of the 2019 National Strategic Computing Initiative Update: Pioneering the Future of Computing. This strategic plan also identifies agency roles and responsibilities and describes essential operational and coordination structures necessary to support and implement its objectives. The plan outlines the following strategic objectives:
      • Utilize the future advanced computing ecosystem as a strategic resource spanning government, academia, nonprofits, and industry.
      • Establish an innovative, trusted, verified, usable, and sustainable software and data ecosystem.
      • Support foundational, applied, and translational research and development to drive the future of advanced computing and its applications.
      • Expand the diverse, capable, and flexible workforce that is critically needed to build and sustain the advanced computing ecosystem.
  • A federal court threw out a significant portion of a suit Apple brought against a security company, Corellium, that offers technology allowing security researchers to virtualize the iOS in order to undertake research. The United States District Court for the Southern District of Florida summarized the case:
    • On August 15, 2019, Apple filed this lawsuit alleging that Corellium infringed Apple’s copyrights in iOS and circumvented its security measures in violation of the federal Digital Millennium Copyright Act (“DMCA”). Corellium denies that it has violated the DMCA or Apple’s copyrights. Corellium further argues that even if it used Apple’s copyrighted work, such use constitutes “fair use” and, therefore, is legally permissible.
    • The court found “that Corellium’s use of iOS constitutes fair use” but did not rule similarly on the DMCA claim, thus allowing Apple to proceed with that portion of the suit.
  • The Trump Administration issued a plan on how cloud computing could be marshalled to help federally funded artificial intelligence (AI) research and development (R&D). A select committee made four key recommendations that “should accelerate the use of cloud resources for AI R&D: (1) launch and support pilot projects to identify and explore the advantages and challenges associated with the use of commercial clouds in conducting federally funded AI research; (2) improve education and training opportunities to help researchers better leverage cloud resources for AI R&D; (3) catalog best practices in identity management and single-sign-on strategies to enable more effective use of the variety of commercial cloud resources for AI R&D; and (4) establish and publish best practices for the seamless use of different cloud platforms for AI R&D. Each recommendation, if adopted, should accelerate the use of cloud resources for AI R&D.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:”
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

FY 2021 Omnibus and COVID Stimulus Become Law

The end-of-the-year funding package for FY 2021 is stuffed with technology policy changes.

At the tail end of the calendar year 2020, Congress and the White House finally agreed on FY 2021 appropriations and further COVID-19 relief funding and policies, much of which implicated or involved technology policy. As is often the practice, Congressional stakeholders used the opportunity of must-pass legislation as the vehicle for other legislation that perhaps could not get through a chamber of Congress or surmount the now customary filibuster in the Senate.

Congress cleared the “Consolidated Appropriations Act, 2021” (H.R.133) on 21 December 2020, but President Donald Trump equivocated on whether to sign the package, in part because it did not provide $2,000 in aid to every American, a new demand at odds with the deal his negotiators worked out with House Democrats and Senate Republicans. Given this disparity, it seems more likely Trump made an issue of the $2,000 assistance to draw attention away from a spate of controversial pardons issued to allies and friends. Nonetheless, Trump ultimately signed the package on 27 December.

As one of the few pieces of legislation to pass Congress every year, appropriations acts are often the means by which policy and programmatic changes are made at federal agencies through the legislative branch’s ability to condition the use of the funds it provides. This year’s package is different only in that it contains much more ride-along legislation than the average omnibus. In fact, there are hundreds, perhaps even more than 1,000, pages of non-appropriations legislation, some of which pertains to technology policy. Moreover, an additional supplemental bill attached to the FY 2021 omnibus also carries significant technology funding and programming.

First, we will review FY 2021 funding and policy for key U.S. agencies, then discuss COVID-19 related legislation, and then finally all the additional legislation Congress packed into the omnibus.

The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) would receive $2.025 billion, a bare $9 million increase above FY 2020 with significant reordering of how the agency may spend its funds:

  • The agreement includes a net increase of $224,178,000 above the budget request. This includes $226,256,000 above the request to maintain current services, and $54,516,000 in enhancements that are described in more detail below. Assumed in the current services level of funding are several rejections of proposed reductions to prior year initiatives and the inclusion of necessary annualizations to sustain them, such as: $35,606,000 for threat analysis and response; $5,507,000 for soft targets and crowded places security, including school safety and best practices; $6,852,000 for bombing prevention activities, including the train-the-trainer programs; and $67,371,000 to fully fund the Chemical Facility Anti-Terrorism Standards program. The agreement includes the following reductions below the budget request: $6,937,000 for personnel cost adjustments; $2,500,000 of proposed increases to the CyberSentry program; $11,354,000 of proposed increases for the Vulnerability Management program; $2,000,000 of proposed increases to the Cybersecurity Quality Service Management Office (QSMO); $6,500,000 of proposed increases for cybersecurity advisors; and $27,303,000 for the requested increase for protective security advisors. Of the total amount provided for this account, $22,793,000 is available until September 30, 2022, for the National Infrastructure Simulation Analysis Center.
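
The component figures in the statement reconcile exactly to the stated net increase; here is a quick arithmetic check, a sketch using only the figures from the agreement language quoted above:

```python
# Reconcile CISA's stated net increase above the budget request
# using the component figures from the joint explanatory statement.
current_services = 226_256_000   # above the request to maintain current services
enhancements = 54_516_000        # enhancements described in the statement

reductions = [
    6_937_000,   # personnel cost adjustments
    2_500_000,   # proposed CyberSentry increases
    11_354_000,  # proposed Vulnerability Management increases
    2_000_000,   # proposed Cyber QSMO increases
    6_500_000,   # proposed cybersecurity advisor increases
    27_303_000,  # requested protective security advisor increase
]

net_increase = current_services + enhancements - sum(reductions)
print(f"${net_increase:,}")  # → $224,178,000
```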

The FY 2021 omnibus requires of CISA the following:

  • Financial Transparency and Accountability.-The Cybersecurity and Infrastructure Security Agency (CISA) is directed to submit the fiscal year 2022 budget request at the same level of PPA detail provided in the table at the end of this report with no further adjustments to the PPA structure. Further, CISA shall brief the Committees not later than 45 days after the date of enactment of this Act and quarterly thereafter on: a spend plan; detailed hiring plans with a delineation of each mission critical occupation (MCO); procurement plans for all major investments to include projected spending and program schedules and milestones; and an execution strategy for each major initiative. The hiring plan shall include an update on CISA’s hiring strategy efforts and shall include the following for each MCO: the number of funded positions and FTE within each PPA; the projected and obligated funding; the number of actual onboard personnel as of the date of the plan; and the hiring and attrition projections for the fiscal year.
  • Cyber Defense Education and Training (CDET).-The agreement includes $29,457,000 for CISA’s CDET programs, an increase of $20,607,000 above the request that is described in further detail below. Efforts are underway to address the shortage of qualified national cybersecurity professionals in the current and future cybersecurity workforce. In order to move forward with a comprehensive plan for a cybersecurity workforce development effort, the agreement includes $10,000,000 above the request to enhance cybersecurity education and training and programs to address the national shortfall of cybersecurity professionals, including activities funded through the use of grants or cooperative agreements as needed in order to fully comply with congressional intent. CISA should consider building a higher education consortium of colleges and universities, led by at least one academic institution with an extensive history of education, research, policy, and outreach in computer science and engineering disciplines; existing designations as a land-grant institution with an extension role; a center of academic excellence in cyber security operations; a proven track record in hosting cyber corps programs; a record of distinction in research cybersecurity; and extensive experience in offering distance education programs and outreach with K-12 programs. The agreement also includes $4,300,000 above the request for the Cybersecurity Education and Training Assistance Program (CETAP), which was proposed for elimination, and $2,500,000 above the request to further expand and initiate cybersecurity education programs, including CETAP, which improve education delivery methods for K-12 students, teachers, counselors and post-secondary institutions and encourage students to pursue cybersecurity careers.
  • Further, the agreement includes $2,500,000 above the request to support CISA’s role with the National Institute of Standards and Technology, National Initiative for Cybersecurity Education Challenge project or for similar efforts to address shortages in the cybersecurity workforce through the development of content and curriculum for colleges, universities, and other higher education institutions.
  • Lastly, the agreement includes $800,000 above the request for a review of CISA’s program to build a national cybersecurity workforce. CISA is directed to enter into a contract for this review with the National Academy of Public Administration, or a similar non-profit organization, within 45 days of the date of enactment of this Act. The review shall assess: whether the partnership models under development by CISA are positioned to be effective and scalable to address current and anticipated needs for a highly capable cybersecurity workforce; whether other existing partnership models, including those used by other agencies and private industry, could usefully augment CISA’s strategy; and the extent to which CISA’s strategy has made progress on workforce development objectives, including excellence, scale, and diversity. A report with the findings of the review shall be provided to the Committees not later than 270 days after the date of enactment of this Act.
  • Cyber QSMO.-To help improve efforts to make strategic cybersecurity services available to federal agencies, the agreement provides $1,514,000 above the request to sustain and enhance prior year investments. As directed in the House report and within the funds provided, CISA is directed to work with the Management Directorate to conduct a crowd-sourced security testing program that uses technology platforms and ethical security researchers to test for vulnerabilities on departmental systems. In addition, not later than 90 days after the date of enactment of this Act, CISA is directed to brief the Committees on opportunities for state and local governments to leverage shared services provided through the Cyber QSMO or a similar capability and to explore the feasibility of executing a pilot program focused on this goal.
  • Cyber Threats to Critical Election Infrastructure.-The briefing required in House Report 116-458 regarding CISA’s efforts related to the 2020 elections shall be delivered not later than 60 days after the date of enactment of this Act. CISA is directed to continue working with SLTT stakeholders to implement election security measures.
  • Cybersecurity Workforce.-By not later than September 30, 2021, CISA shall provide a joint briefing, in conjunction with the Department of Commerce and other appropriate federal departments and agencies, on progress made to date on each recommendation put forth in Executive Order 13800 and the subsequent “Supporting the Growth and Sustainment of the Nation’s Cybersecurity Workforce” report.
  • Hunt and Incident Response Teams.-The agreement includes an increase of $3,000,000 above fiscal year 2020 funding levels to expand CISA’s threat hunting capabilities.
  • Joint Cyber Planning Office (JCPO).-The agreement provides an increase of $10,568,000 above the request to establish a JCPO to bring together federal and SLTT governments, industry, and international partners to strategically and operationally counter nation-state cyber threats. CISA is directed to brief the Committees not later than 60 days after the date of enactment of this Act on a plan for establishing the JCPO, including a budget and hiring plan; a description of how JCPO will complement and leverage other CISA capabilities; and a strategy for partnering with the aforementioned stakeholders.
  • Multi-State Information Sharing and Analysis Center (MS-ISAC).-The agreement provides $5,148,000 above the request for the MS-ISAC to continue enhancements to SLTT election security support and to further ransomware detection and response capabilities, including endpoint detection and response, threat intelligence platform integration, and malicious domain activity blocking.
  • Software Assurance Tools.-Not later than 90 days after the date of enactment of this Act, CISA, in conjunction with the Science and Technology Directorate, is directed to brief the Committees on their collaborative efforts to transition cyber-related research and development initiatives into operational tools that can be used to provide continuous software assurance. The briefing should include an explanation for any completed projects and activities that were not considered viable for practice or were considered operationally self-sufficient. Such briefing shall include software assurance projects, such as the Software Assurance Marketplace.
  • Updated Lifecycle Cost Estimates.-CISA is directed to provide a briefing, not later than 60 days after the date of enactment of this Act, regarding the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) program lifecycles. The briefing shall clearly describe the projected evolution of both programs by detailing the assumptions that have changed since the last approved program cost and schedule baseline, and by describing the plans to address such changes. In addition, the briefing shall include an analysis of alternatives for aligning vulnerability management, incident response, and NCPS capabilities. Finally, CISA is directed to provide a report not later than 120 days after the date of enactment of this Act with updated five-year program costs and schedules which is congruent with projected capability gaps across federal civilian systems and networks.
  • Vulnerability Management.-The agreement provides $9,452,000 above fiscal year 2020 levels to continue reducing the 12-month backlog in vulnerability assessments. The agreement also provides an increase of $8,000,000 above the request to address the increasing number of identified and reported vulnerabilities in the software and hardware that operates critical infrastructure. This investment will improve capabilities to identify, analyze, and share information about known vulnerabilities and common attack patterns, including through the National Vulnerability Database, and to expand the coordinated responsible disclosure of vulnerabilities.

There are a pair of provisions aimed at the People’s Republic of China (PRC) in Division B (i.e. the FY 2021 Commerce-Justice-Science Appropriations Act):

  • Section 514 prohibits funds for acquisition of certain information systems unless the acquiring department or agency has reviewed and assessed certain risks. Any acquisition of such an information system is contingent upon the development of a risk mitigation strategy and a determination that the acquisition is in the national interest. Each department or agency covered under section 514 shall submit a quarterly report to the Committees on Appropriations describing reviews and assessments of risk made pursuant to this section and any associated findings or determinations.
  • Section 526 prohibits the use of funds by National Aeronautics and Space Administration (NASA), Office of Science and Technology Policy (OSTP), or the National Space Council (NSC) to engage in bilateral activities with China or a Chinese-owned company or effectuate the hosting of official Chinese visitors at certain facilities unless the activities are authorized by subsequent legislation or NASA, OSTP, or NSC have made a certification…

The National Institute of Standards and Technology (NIST) is tasked with a number of duties, most of which relate to current or ongoing efforts in artificial intelligence (AI), cybersecurity, and the Internet of Things:

  • Artificial Intelligence (AI).-The agreement includes no less than $6,500,000 above the fiscal year 2020 level to continue NIST’s research efforts related to AI and adopts House language on Data Characterization Standards in AI. House language on Framework for Managing AI Risks is modified to direct NIST to establish a multi-stakeholder process for the development of an AI Risk Management Framework regarding the reliability, robustness, and trustworthiness of AI systems. Further, within 180 days of enactment of this Act, NIST shall establish the process by which it will engage with stakeholders throughout the multi-year framework development process.
  • Cybersecurity.-The agreement includes no less than the fiscal year 2020 enacted level for cybersecurity research, outreach, industry partnerships, and other activities at NIST, including the National Cybersecurity Center of Excellence (NCCoE) and the National Initiative for Cybersecurity Education (NICE). Within the funds provided, the agreement encourages NIST to establish additional NICE cooperative agreements with regional alliances and multi-stakeholder partnerships for cybersecurity workforce and education.
  • Cybersecurity of Genomic Data.-The agreement includes no less than $1,250,000 for NIST and NCCoE to initiate a use case, in collaboration with industry and academia, to research the cybersecurity of personally identifiable genomic data, with a particular focus on better securing deoxyribonucleic acid sequencing techniques, including clustered regularly interspaced short palindromic repeat (CRISPR) technologies, and genomic data storage architectures from cyber threats. NIST and NCCoE should look to partner with entities who have existing capability to research and develop state-of-the-art cybersecurity technologies for the unique needs of genomic and biomedical-based systems.
  • Industrial Internet of Things (IIoT).-The agreement includes no less than the fiscal year 2020 enacted amount for the continued development of an IIoT cybersecurity research initiative and to partner, as appropriate, with academic entities and industry to improve the sustainable security of IIoT devices in industrial settings.

NIST would receive a modest increase in funding, from $1.034 billion in the last fiscal year to $1.0345 billion in the next.

The National Telecommunications and Information Administration (NTIA) would be provided $45.5 million and “the agreement provides (1) up to $7,500,000 for broadband mapping in coordination with the Federal Communications Commission (FCC); (2) no less than the fiscal year 2020 enacted amount for Broadband Programs; (3) $308,000 for Public Safety Communications; and (4) no less than $3,000,000 above the fiscal year 2020 enacted level for Advanced Communications Research.” The agency’s funding for FY 2021 is higher than the last fiscal year at a bit more than $40 million but far less than the Trump Administration’s request of more than $70 million.

Regarding NTIA programmatic language, the bill provides:

  • Further, the agreement directs the additional funds for Advanced Communications Research be used to procure and maintain cutting-edge equipment for research and testing of the next generation of communications technologies, including 5G, as well as to hire staff as needed. The agreement further encourages NTIA to improve the deployment of 5G and spectrum sharing through academic partnerships to accelerate the development of low-cost sensors. For fiscal year 2021, NTIA is directed to follow prior year report language, included in Senate Report 116-127 and adopted in Public Law 116-93, on the following topics: Federal Spectrum Management, Spectrum Management for Science, and the Internet Corporation for Assigned Names and Numbers (ICANN).
  • Spectrum Management System.-The agreement encourages NTIA and the Department to consider alternative proposals to fully fund the needed upgrades to its spectrum management system, including options outside of direct appropriations, and directs NTIA to brief the Committees regarding possible alternative options no later than 90 days after enactment of this Act.
  • Next Generation Broadband in Rural Areas.-NTIA is encouraged to ensure that deployment of last-mile broadband infrastructure is targeted to areas that are currently unserved or underserved, and to utilize public-private partnerships and projects where Federal funding will not exceed 50 percent of a project’s total cost where practicable.
  • National Broadband Map Augmentation.-NTIA is directed to engage with rural and Tribal communities to further enhance the accuracy of the national broadband availability map. NTIA should include in its fiscal year 2022 budget request an update on rural- and Tribal-related broadband availability and access trends, challenges, and Federal actions to achieve equitable access to broadband services in currently underserved communities throughout the Nation. Furthermore, NTIA is encouraged, in coordination with the FCC, to develop and promulgate a standardized process for collecting data from State and local partners.
  • Domain Name Registration.-NTIA is directed, through its position within the Governmental Advisory Committee, to work with ICANN to expedite the establishment of a global access model that provides law enforcement, intellectual property rights holders, and third parties with timely access to accurate domain name registration information for legitimate purposes. NTIA is encouraged, as appropriate, to require registrars and registries based in the United States to collect and make public accurate domain name registration information.

The Federal Trade Commission (FTC) would receive $351 million, an increase of $20 million over FY 2020. The final bill includes this policy provision for the FTC to heed:

  • Resources for Data Privacy and Security. -The agreement urges the FTC to conduct a comprehensive internal assessment measuring the agency’s current efforts related to data privacy and security while separately identifying all resource-based needs of the FTC to improve in these areas. The agreement also urges the FTC to provide a report describing the assessment’s findings to the Committees within 180 days of enactment of this Act.

The Federal Communications Commission (FCC) would see a larger increase in funding for agency operations than the FTC, going from $339 million in FY 2020 to $374 million in FY 2021. However, $33 million of the increase is earmarked for implementing the “Broadband DATA Act” (P.L.116-130) along with the $65 million in COVID-19 supplemental funding for the same purpose. The FY 2021 omnibus directs the FCC on a range of policy issues:

  • Broadband Maps.-In addition to adopting the House report language on Broadband Maps, the agreement provides substantial dedicated resources for the FCC to implement the Broadband DATA Act. The FCC is directed to submit a report to the Committees on Appropriations within 90 days of enactment of this Act providing a detailed spending plan for these resources. In addition, the FCC, in coordination with the NTIA, shall outline the specific roles and responsibilities of each agency as it relates to the National Broadband Map and implementation of the Broadband DATA Act. The FCC is directed to report in writing to the Committees every 30 days on the date, amount, and purpose of any new obligation made for broadband mapping and any updates to the broadband mapping spending plan.
  • Lifeline Service.-In lieu of the House report language on Lifeline Service, the agreement notes recent action by the FCC to partially waive its rules updating the Lifeline program’s minimum service standard for mobile broadband usage in light of the large increase to the standard that would have gone into effect on Dec. 1, 2020, and the increased reliance by Americans on mobile broadband as a result of the pandemic. The FCC is urged to continue to balance the Lifeline program’s goals of accessibility and affordability.
  • 5G Fund and Rural America.-The agreement remains concerned about the feasible deployment of 5G in rural America. Rural locations will likely run into geographic barriers and infrastructure issues preventing the robust deployment of 5G technology, just as they have faced with 4G. The FCC’s proposed 5G Fund fails to provide adequate details or a targeted spend plan on creating seamless coverage in the most rural parts of the Nation. Given these concerns, the FCC is directed to report in writing on: (1) its current and future plans for prioritizing deployment of 4G coverage in rural areas, (2) its plans for 5G deployment in rural areas, and (3) its plan for improving the mapping and long-term tracking of coverage in rural areas.
  • 6 Gigahertz.-As the FCC has authorized unlicensed use of the 6 gigahertz band, the agreement expects the Commission to ensure its plan does not result in harmful interference to incumbent users or impact critical infrastructure communications systems. The agreement is particularly concerned about the potential effects on the reliability of the electric transmission and distribution system. The agreement expects the FCC to ensure any mitigation technologies are rigorously tested and found to be effective in order to protect the electric transmission system. The FCC is directed to provide a report to the Committees within 90 days of enactment of this Act on its progress in ensuring rigorous testing related to unlicensed use of the 6 gigahertz band.
  • Rural Broadband.-The agreement remains concerned that far too many Americans living in rural and economically disadvantaged areas lack access to broadband at speeds necessary to fully participate in the Internet age. The agreement encourages the agency to prioritize projects in underserved areas, where the infrastructure to be installed provides access at download and upload speeds comparable to those available to Americans in urban areas. The agreement encourages the FCC to avoid efforts that could duplicate existing networks and to support deployment of last-mile broadband infrastructure to underserved areas. Further, the agreement encourages the agency to prioritize projects financed through public-private partnerships.
  • Contraband Cell Phones. -The agreement notes continued concern regarding the exploitation of contraband cell phones in prisons and jails nationwide. The agreement urges the FCC to act on the March 24, 2017 Further Notice of Proposed Rulemaking regarding combating contraband wireless devices. The FCC should consider all legally permissible options, including the creation, or use, of “quiet or no service zones,” geolocation-based denial, and beacon technologies to geographically appropriate correctional facilities. In addition, the agreement encourages the FCC to adopt a rules-based approach to cellphone disabling that would require immediate disabling by a wireless carrier upon proper identification of a contraband device. The agreement recommends that the FCC move forward with its suggestion in the Fiscal Year 2019 report to this Committee, noting that “additional field testing of jamming technology will provide a better understanding of the challenges and costs associated with the proper deployment of jamming system.” The agreement urges the FCC to use available funds to coordinate rigorous Federal testing of jamming technology and coordinate with all relevant stakeholders to effectively address this urgent problem.
  • Next-Generation Broadband Networks for Rural America.-Deployment of broadband and telecommunications services in rural areas is imperative to support economic growth and public safety. However, due to geographical challenges facing mobile connectivity and fiber providers, connectivity in certain areas remains challenging. Next generation satellite-based technology is being developed to deliver direct satellite to cellular capability. The FCC is encouraged to address potential regulatory hurdles, to promote private sector development and implementation of innovative, next generation networks such as this, and to accelerate broadband and telecommunications access to all Americans.

$635 million is provided for a Department of Agriculture rural development pilot program, and the Secretary will need to explain how the Department will use authority provided in the last farm bill to expand broadband:

  • The agreement provides $635,000,000 to support the ReConnect pilot program to increase access to broadband connectivity in unserved rural communities and directs the Department to target grants and loans to areas of the country with the largest broadband coverage gaps. These projects should utilize technology that will maximize coverage of broadband with the most benefit to taxpayers and the rural communities served. The agreement notes stakeholder concerns that the ReConnect pilot does not effectively recognize the unique challenges and opportunities that different technologies, including satellite, provide to delivering broadband in noncontiguous States or mountainous terrain and is concerned that providing preference to 100 mbps symmetrical service unfairly disadvantages these communities by limiting the deployment of other technologies capable of providing service to these areas.
  • The Agriculture Improvement Act of 2018 (Public Law 115-334) included new authorities for rural broadband programs that garnered broad stakeholder support as well as bipartisan, bicameral agreement in Congress. Therefore, the Secretary is directed to provide a report on how the Department plans to utilize these authorities to deploy broadband connectivity to rural communities.

In Division M of the package, the “Coronavirus Response and Relief Supplemental Appropriations Act, 2021,” there are provisions related to broadband policy and funding. The bill created a $3.2 billion program to help low-income Americans with internet service and buying devices for telework or distance education. The “Emergency Broadband Benefit Program” is established at the FCC, “under which eligible households may receive a discount of up to $50, or up to $75 on Tribal lands, off the cost of internet service and a subsidy for low-cost devices such as computers and tablets” according to a House Appropriations Committee summary. This funding is far short of what House Democrats wanted. And yet, this program aims to help those on the wrong side of the digital divide during the pandemic.
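
The two benefit tiers can be sketched as a simple function (an illustration only; `monthly_discount` is a hypothetical helper, and the actual eligibility and reimbursement rules were left to the FCC to define):

```python
def monthly_discount(on_tribal_lands: bool) -> int:
    """Maximum monthly Emergency Broadband Benefit discount, in dollars."""
    # $75 on Tribal lands, $50 elsewhere, per the Division M summary.
    return 75 if on_tribal_lands else 50

print(monthly_discount(False), monthly_discount(True))  # → 50 75
```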

Moreover, this legislation also establishes two grant programs at the NTIA, designed to help provide broadband on tribal lands and in rural areas. $1 billion is provided for the former and $300 million for the latter, with the funds going to tribal, state, and local governments to obtain services from private sector providers. The $1 billion for tribal lands allows for greater flexibility in what the funds are ultimately spent on, with the $300 million for underserved rural areas being restricted to broadband deployment. Again, these funds are aimed at bridging the disparity in broadband service exposed and exacerbated during the pandemic.

Congress also provided funds for the FCC to reimburse smaller telecommunications providers for removing and replacing risky telecommunications equipment from the People’s Republic of China (PRC). Following the enactment of the “Secure and Trusted Communications Networks Act of 2019” (P.L.116-124), which codified and added to an FCC regulatory effort to address the risks posed by Huawei and ZTE equipment in United States (U.S.) telecommunications networks, there was pressure in Congress to provide the funds necessary to help carriers meet the requirements of the program. The FY 2021 omnibus appropriates $1.9 billion for this program. In another, largely unrelated tranche of funding, the package also provides the aforementioned $65 million for the FCC to implement the “Broadband DATA Act.”

Division Q contains text similar to the “Cybersecurity and Financial System Resilience Act of 2019” (H.R.4458) that would require “the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, and National Credit Union Administration to annually report on efforts to strengthen cybersecurity by the agencies, financial institutions they regulate, and third-party service providers.”

Division U contains two bills pertaining to technology policy:

  • Title I. The AI in Government Act of 2020. This title codifies the AI Center of Excellence within the General Services Administration to advise and promote the efforts of the federal government in developing innovative uses of artificial intelligence (AI) and competency in the use of AI in the federal government. The section also requires that the Office of Personnel Management identify key skills and competencies needed for federal positions related to AI and establish an occupational series for positions related to AI.
  • Title IX. The DOTGOV Act. This title transfers the authority to manage the .gov internet domain from the General Services Administration to the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. The .gov internet domain shall be available to any Federal, State, local, or territorial government entity, or other publicly controlled entity, subject to registration requirements established by the Director of CISA and approved by the Director of the Office of Management and Budget.

Division W is the FY 2021 Intelligence Authorization Act with the following salient provisions:

  • Section 323. Report on signals intelligence priorities and requirements. Section 323 requires the Director of National Intelligence (DNI) to submit a report detailing signals intelligence priorities and requirements subject to Presidential Policy Directive-28 (PPD-28) that stipulates “why, whether, when, and how the United States conducts signals intelligence activities.” PPD-28 reformed how the National Security Agency (NSA) and other Intelligence Community (IC) agencies conducted signals intelligence, specifically collection of cellphone and internet data, after former NSA contractor Edward Snowden exposed the scope of the agency’s programs.
  • Section 501. Requirements and authorities to improve education in science, technology, engineering, arts, and mathematics. Section 501 ensures that the Director of the Central Intelligence Agency (CIA) has the legal authorities required to improve the skills in science, technology, engineering, arts, and mathematics (known as STEAM) necessary to meet long-term national security needs.
  • Section 502. Seedling investment in next-generation microelectronics in support of artificial intelligence. Section 502 requires the DNI, acting through the Director of the Intelligence Advanced Research Projects Activity, to award contracts or grants, or enter into other transactions, to encourage microelectronics research.
  • Section 601. Report on attempts by foreign adversaries to build telecommunications and cybersecurity equipment and services for, or to provide them to, certain U.S. allies. Section 601 requires the CIA, NSA, and DIA to submit a joint report that describes the United States’ intelligence sharing and military posture in Five Eyes countries that currently have or intend to use adversary telecommunications or cybersecurity equipment, especially as provided by China or Russia, with a description of potential vulnerabilities of that information and an assessment of mitigation options.
  • Section 602. Report on foreign use of cyber intrusion and surveillance technology. Section 602 requires the DNI to submit a report on the threats posed by foreign governments and foreign entities using and appropriating commercially available cyber intrusion and other surveillance technology.
  • Section 603. Reports on recommendations of the Cyberspace Solarium Commission. Section 603 requires the ODNI and representatives of other agencies to report to Congress their assessment of the recommendations submitted by the Cyberspace Solarium Commission pursuant to Section 1652(j) of the John S. McCain National Defense Authorization Act (NDAA) for Fiscal Year 2019, and to describe actions that each agency expects to take to implement these recommendations.
  • Section 604. Assessment of critical technology trends relating to artificial intelligence, microchips, and semiconductors and related matters. Section 604 requires the DNI to complete an assessment of export controls related to artificial intelligence (AI), microchips, advanced manufacturing equipment, and other AI-enabled technologies, including the identification of opportunities for further cooperation with international partners.
  • Section 605. Combating Chinese influence operations in the United States and strengthening civil liberties protections. Section 605 provides additional requirements to annual reports on Influence Operations and Campaigns in the United States by the Chinese Communist Party (CCP) by mandating an identification of influence operations by the CCP against the science and technology sector in the United States. Section 605 also requires the FBI to create a plan to increase public awareness of influence activities by the CCP. Finally, section 605 requires the FBI, in consultation with the Assistant Attorney General for the Civil Rights and the Chief Privacy and Civil Liberties Officer of the Department of Justice, to develop recommendations to strengthen relationships with communities targeted by the CCP and to build trust with such communities through local and regional grassroots outreach.
  • Section 606. Annual report on corrupt activities of senior officials of the CCP. Section 606 requires the CIA, in coordination with the Department of Treasury’s Office of Intelligence and Analysis and the FBI, to submit to designated congressional committees annually through 2025 a report that describes and assesses the wealth and corruption of senior officials of the CCP, as well as targeted financial measures, including potential targets for sanctions designation. Section 606 further expresses the Sense of Congress that the United States should undertake every effort and pursue every opportunity to expose the corruption and illicit practices of senior officials of the CCP, including President Xi Jinping.
  • Section 607. Report on corrupt activities of Russian and other Eastern European oligarchs. Section 607 requires the CIA, in coordination with the Department of the Treasury’s Office of Intelligence and Analysis and the FBI, to submit to designated congressional committees and the Under Secretary of State for Public Diplomacy, a report that describes the corruption and corrupt or illegal activities among Russian and other Eastern European oligarchs who support the Russian government and Russian President Vladimir Putin, and the impact of those activities on the economy and citizens of Russia. Section 607 further requires the CIA, in coordination with the Department of Treasury’s Office of Intelligence and Analysis, to describe potential sanctions that could be imposed for such activities.
  • Section 608. Report on biosecurity risk and disinformation by the CCP and the PRC. Section 608 requires the DNI to submit to the designated congressional committees a report identifying whether and how CCP officials and the Government of the People’s Republic of China may have sought to suppress or exploit for national advantage information regarding the novel coronavirus pandemic, including specific related assessments. Section 608 further provides that the report shall be submitted in unclassified form, but may have a classified annex.
  • Section 612. Research partnership on activities of People’s Republic of China. Section 612 requires the Director of the NGA to seek to enter into a partnership with an academic or non-profit research institution to carry out joint unclassified geospatial intelligence analyses of the activities of the People’s Republic of China that pose national security risks to the United States, and to make publicly available unclassified products relating to such analyses.

Division Z would tweak a data center energy efficiency and energy savings program overseen by the Secretary of Energy and the Administrator of the Environmental Protection Agency that could impact the Office of Management and Budget’s (OMB) government-wide program. Specifically, “Section 1003 requires the development of a metric for data center energy efficiency, and requires the Secretary of Energy, Administrator of the Environmental Protection Agency (EPA), and Director of the Office of Management and Budget (OMB) to maintain a data center energy practitioner program and open data initiative for federally owned and operated data center energy usage.” There is also language that would require the U.S. government to buy and use more energy-efficient information technology (IT): “each Federal agency shall coordinate with the Director [of OMB], the Secretary, and the Administrator of the Environmental Protection Agency to develop an implementation strategy (including best-practices and measurement and verification techniques) for the maintenance, purchase, and use by the Federal agency of energy-efficient and energy-saving information technologies at or for facilities owned and operated by the Federal agency, taking into consideration the performance goals.”

Division FF contains telecommunications provisions:

  • Section 902. Don’t Break Up the T-Band Act of 2020. Section 902 repeals the requirement for the FCC to reallocate and auction the 470 to 512 megahertz band, commonly referred to as the T-band. In certain urban areas, the T-band is utilized by public-safety entities. It also directs the FCC to implement rules to clarify acceptable expenditures on which 9-1-1 fees can be spent, and creates a strike force to consider how the Federal Government can end 9-1-1 fee diversion.
  • Section 903. Advancing Critical Connectivity Expands Service, Small Business Resources, Opportunities, Access, and Data Based on Assessed Need and Demand (ACCESS BROADBAND) Act. Section 903 establishes the Office of Internet Connectivity and Growth (Office) at the NTIA. This Office would be tasked with performing certain responsibilities related to broadband access, adoption, and deployment, such as performing public outreach to promote access and adoption of high-speed broadband service, and streamlining and standardizing the process for applying for Federal broadband support. The Office would also track Federal broadband support funds, and coordinate Federal broadband support programs within the Executive Branch and with the FCC to ensure unserved Americans have access to connectivity and to prevent duplication of broadband deployment programs.
  • Section 904. Broadband Interagency Coordination Act. Section 904 requires the Federal Communications Commission (FCC), the National Telecommunications and Information Administration (NTIA), and the Department of Agriculture to enter into an interagency agreement to coordinate the distribution of federal funds for broadband programs, to prevent duplication of support and ensure stewardship of taxpayer dollars. The agreement must cover, among other things, the exchange of information about project areas funded under the programs and the confidentiality of such information. The FCC is required to publish and collect public comments about the agreement, including regarding its efficacy and suggested modifications.
  • Section 905. Beat CHINA for 5G Act of 2020. Section 905 directs the President, acting through the Assistant Secretary of Commerce for Communications and Information, to withdraw or modify federal spectrum assignments in the 3450 to 3550 megahertz band, and directs the FCC to begin a system of competitive bidding to permit non-Federal, flexible-use services in a portion or all of such band no later than December 31, 2021.

Section 905 would countermand the White House’s efforts to auction off an ideal part of spectrum for 5G (see here for analysis of the August 2020 announcement). A number of congressional and Trump Administration stakeholders were alarmed by what they saw as a push to bestow a windfall on a private sector company in the rollout of 5G.

Title XIV of Division FF would allow the FTC to seek civil fines of more than $43,000 per violation for the duration of the public health emergency arising from the pandemic “for unfair and deceptive practices associated with the treatment, cure, prevention, mitigation, or diagnosis of COVID–19 or a government benefit related to COVID-19.”

Finally, Division FF is the vehicle for the “American COMPETES Act” that:

directs the Department of Commerce and the FTC to conduct studies and submit reports on technologies including artificial intelligence, the Internet of Things, quantum computing, blockchain, advanced materials, unmanned delivery services, and 3-D printing. The studies include requirements to survey each industry and report recommendations to help grow the economy and safely implement the technology.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by forcal35 from Pixabay

Further Reading, Other Developments, and Coming Events (14 December)

Further Reading

  • “Russian Hackers Broke Into Federal Agencies, U.S. Officials Suspect” By David Sanger — The New York Times; “Russian government hackers are behind a broad espionage campaign that has compromised U.S. agencies, including Treasury and Commerce” By Ellen Nakashima and Craig Timberg — The Washington Post; “Suspected Russian hackers spied on U.S. Treasury emails – sources” By Chris Bing — Reuters. Apparently, the Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, has exploited a vulnerability in SolarWinds’ update system used by many United States (U.S.) government systems, Fortune 500 companies, and the U.S.’ ten largest telecommunications companies. Reportedly, APT29 (aka Cozy Bear) has had free rein in the email systems of the Departments of the Treasury and Commerce, among other possible victims. The hackers may have also accessed a range of other entities around the world using the same SolarWinds system. Moreover, these penetrations may be related to the recently announced theft of hacking tools a private firm, FireEye, used to test clients’ systems.
  • “Hackers steal Pfizer/BioNTech COVID-19 vaccine data in Europe, companies say” By Jack Stubbs — Reuters. The European Union’s (EU) agency that oversees and approves medications has been hacked, and documents related to one of the new COVID-19 vaccines may have been stolen. The European Medicines Agency (EMA) was apparently penetrated, and materials related to Pfizer and BioNTech’s vaccine were exfiltrated. The scope of the theft is not yet known, but this is the latest in many attempts to hack into the entities conducting research on the virus and potential vaccines.
  • “The AI Girlfriend Seducing China’s Lonely Men” By Zhang Wanqing — Sixth Tone. A chat bot powered by artificial intelligence that some men in the People’s Republic of China (PRC) are using extensively raises all sorts of ethical and privacy issues. Lonely people have turned to this AI technology and have confided their deepest feelings, which are stored by the company. It seems only a matter of time until these data are mined for commercial value or hacked. Also, the chatbot has run afoul of the PRC’s censorship policies. Finally, is this a preview of the world to come, much like the 2013 film, Her, in which humans have relationships with AI beings?
  • “YouTube will now remove videos disputing Joe Biden’s election victory” By Makena Kelly — The Verge. The Google subsidiary announced that because the safe harbor deadline has been reached and a sufficient number of states have certified President-elect Joe Biden’s victory, the platform will begin taking down misleading election videos. This change in policy may have come about, in part, because of pressure from Democrats in Congress about what they see as Google’s lackluster efforts to find and remove lies, misinformation, and disinformation about the 2020 election.
  • “Lots of people are gunning for Google. Meet the man who might have the best shot.” By Emily Birnbaum — Protocol. Colorado Attorney General Phil Weiser may be uniquely qualified to lead state attorneys general on a second antitrust and anti-competition action against Google given his background as a law professor steeped in antitrust and his service in the Department of Justice and White House during the Obama Administration.

Other Developments

  • Cybersecurity firm FireEye revealed it was “attacked by a highly sophisticated threat actor, one whose discipline, operational security, and techniques lead us to believe it was a state-sponsored attack” according to CEO Kevin Mandia. This hacking may be related to the vast penetration of United States (U.S.) government systems revealed over the weekend. Mandia stated FireEye has “found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security…[that] mimic the behavior of many cyber threat actors and enable FireEye to provide essential diagnostic security services to our customers.” Mandia claimed none of these tools were zero-day exploits. FireEye is “proactively releasing methods and means to detect the use of our stolen Red Team tools…[and] out of an abundance of caution, we have developed more than 300 countermeasures for our customers, and the community at large, to use in order to minimize the potential impact of the theft of these tools.”
    • Mandia added:
      • Consistent with a nation-state cyber-espionage effort, the attacker primarily sought information related to certain government customers. While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems. If we discover that customer information was taken, we will contact them directly.
      • Based on my 25 years in cyber security and responding to incidents, I’ve concluded we are witnessing an attack by a nation with top-tier offensive capabilities. This attack is different from the tens of thousands of incidents we have responded to throughout the years. The attackers tailored their world-class capabilities specifically to target and attack FireEye. They are highly trained in operational security and executed with discipline and focus. They operated clandestinely, using methods that counter security tools and forensic examination. They used a novel combination of techniques not witnessed by us or our partners in the past.
      • We are actively investigating in coordination with the Federal Bureau of Investigation and other key partners, including Microsoft. Their initial analysis supports our conclusion that this was the work of a highly sophisticated state-sponsored attacker utilizing novel techniques.    
  • The United States’ (U.S.) Department of Justice filed suit against Facebook for “tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the permanent labor certification (PERM) process.” The DOJ is asking for an injunction to stop Facebook from engaging in the alleged conduct, civil penalties, and damages for workers harmed by this conduct.
    • The DOJ contended:
      • The department’s lawsuit alleges that beginning no later than Jan. 1, 2018 and lasting until at least Sept. 18, 2019, Facebook employed tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the PERM process. Rather than conducting a genuine search for qualified and available U.S. workers for permanent positions sought by these temporary visa holders, Facebook reserved the positions for temporary visa holders because of their immigration status, according to the complaint. The complaint also alleges that Facebook sought to channel jobs to temporary visa holders at the expense of U.S. workers by failing to advertise those vacancies on its careers website, requiring applicants to apply by physical mail only, and refusing to consider any U.S. workers who applied for those positions. In contrast, Facebook’s usual hiring process relies on recruitment methods designed to encourage applications by advertising positions on its careers website, accepting electronic applications, and not pre-selecting candidates to be hired based on a candidate’s immigration status, according to the lawsuit.
      • In its investigation, the department determined that Facebook’s ineffective recruitment methods dissuaded U.S. workers from applying to its PERM positions. The department concluded that, during the relevant period, Facebook received zero or one U.S. worker applicants for 99.7 percent of its PERM positions, while comparable positions at Facebook that were advertised on its careers website during a similar time period typically attracted 100 or more applicants each. These U.S. workers were denied an opportunity to be considered for the jobs Facebook sought to channel to temporary visa holders, according to the lawsuit. 
      • Not only do Facebook’s alleged practices discriminate against U.S. workers, they have adverse consequences on temporary visa holders by creating an employment relationship that is not on equal terms. An employer that engages in the practices alleged in the lawsuit against Facebook can expect more temporary visa holders to apply for positions and increased retention post-hire. Such temporary visa holders often have limited job mobility and thus are likely to remain with their company until they can adjust status, which for some can be decades.
      • The United States’ complaint seeks civil penalties, back pay on behalf of U.S. workers denied employment at Facebook due to the alleged discrimination in favor of temporary visa holders, and other relief to ensure Facebook stops the alleged violations in the future. According to the lawsuit, and based on the department’s nearly two-year investigation, Facebook’s discrimination against U.S. workers was intentional, widespread, and in violation of a provision of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b(a)(1), that the Department of Justice’s Civil Rights Division enforces. 
  • A trio of consumer protection regulators took the lead in reaching an agreement with Apple to add “a new section to each app’s product page in its App Store, containing key information about the data the app collects and an accessible summary of the most important information from the privacy policy.” The United Kingdom’s Competition and Markets Authority (CMA), the Netherlands Authority for Consumers and Markets, and the Norwegian Consumer Authority led the effort, which is part of “ongoing work from the International Consumer Protection and Enforcement Network (ICPEN), involving 27 of its consumer authority members across the world.” The three agencies explained:
    • Consumer protection authorities, including the CMA, became concerned that people were not being given clear information on how their personal data would be used before choosing an app, including on whether the app developer would share their personal data with a third party. Without this information, consumers are unable to compare and choose apps based on how they use personal data.
  • Australia’s Council of Financial Regulators (CFR) has released a Cyber Operational Resilience Intelligence-led Exercises (CORIE) framework “to test and demonstrate the cyber maturity and resilience of institutions within the Australian financial services industry.”

Coming Events

  • On 15 December, the Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “The Role of Private Agreements and Existing Technology in Curbing Online Piracy” with these witnesses:
    • Panel I
      • Ms. Ruth Vitale, Chief Executive Officer, CreativeFuture
      • Mr. Probir Mehta, Head of Global Intellectual Property and Trade Policy, Facebook, Inc.
      • Mr. Mitch Glazier, Chairman and CEO, Recording Industry Association of America
      • Mr. Joshua Lamel, Executive Director, Re:Create
    • Panel II
      • Ms. Katherine Oyama, Global Director of Business Public Policy, YouTube
      • Mr. Keith Kupferschmid, Chief Executive Officer, Copyright Alliance
      • Mr. Noah Becker, President and Co-Founder, AdRev
      • Mr. Dean S. Marks, Executive Director and Legal Counsel, Coalition for Online Accountability
  • The Senate Armed Services Committee’s Cybersecurity Subcommittee will hold a closed briefing on Department of Defense Cyber Operations on 15 December with these witnesses:
    • Mr. Thomas C. Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy, Office of the Under Secretary of Defense for Policy
    • Mr. Jeffrey R. Jones, Vice Director, Command, Control, Communications and Computers/Cyber, Joint Staff, J-6
    • Ms. Katherine E. Arrington, Chief Information Security Officer for the Assistant Secretary of Defense for Acquisition, Office of the Under Secretary of Defense for Acquisition and Sustainment
    • Rear Admiral Jeffrey Czerewko, United States Navy, Deputy Director, Global Operations, J39, J3, Joint Staff
  • The Senate Banking, Housing, and Urban Affairs Committee’s Economic Policy Subcommittee will conduct a hearing titled “US-China: Winning the Economic Competition, Part II” on 16 December with these witnesses:
    • The Honorable Will Hurd, Member, United States House of Representatives;
    • Derek Scissors, Resident Scholar, American Enterprise Institute;
    • Melanie M. Hart, Ph.D., Senior Fellow and Director for China Policy, Center for American Progress; and
    • Roy Houseman, Legislative Director, United Steelworkers (USW).
  • On 17 December the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force will convene for a virtual event, “Partnership in Action: Driving Supply Chain Security.”


Photo by stein egil liland from Pexels

Further Reading, Other Developments, and Coming Events (8 December)

Further Reading

  • “Facebook failed to put fact-check labels on 60% of the most viral posts containing Georgia election misinformation that its own fact-checkers had debunked, a new report says” By Tyler Sonnemaker — Business Insider. Despite its vows to improve its handling of untrue and false content, the platform is not consistently taking down such material related to the runoffs for the Georgia Senate seats. The group behind this finding argues it is because Facebook does not want to. What is left unsaid is that engagement drives revenue, and so Facebook’s incentive is not to police all violations. Rather, it is to take down just enough to be able to say it is doing something.
  • “Federal Labor Agency Says Google Wrongly Fired 2 Employees” By Kate Conger and Noam Scheiber — The New York Times. The National Labor Relations Board (NLRB) has reportedly sided with two employees Google fired for activities that are traditionally considered labor organizing. The two engineers had been dismissed for allegedly violating the company’s data security practices when they researched the company’s retention of a union-busting firm and sought to alert others about organizing. Even though Google is vowing to fight the action, which has not been finalized, it may well settle given the view of Big Tech in Washington these days. This action could also foretell how a Biden Administration NLRB may look at the labor practices of these companies.
  • “U.S. states plan to sue Facebook next week: sources” By Diane Bartz — Reuters. We could see state and federal antitrust suits against Facebook this week. One investigation led by New York Attorney General Tish James could include 40 states, although the grounds for alleged violations have not been leaked at this point. It may be Facebook’s acquisition of potential rivals Instagram and WhatsApp that have allowed it to dominate the social messaging market. The Federal Trade Commission (FTC) may also file suit, and, again, the grounds are unknown. The European Commission (EC) is also investigating Facebook for possible violations of European Union (EU) antitrust law over the company’s use of the personal data it holds and uses and over its operation of its online marketplace.
  • “The Children of Pornhub” By Nicholas Kristof — The New York Times. This column comprehensively traces the reprehensible recent history of Mindgeek, a Canadian conglomerate that owns Pornhub, where one can find reams of child and non-consensual pornography. Why Ottawa has not cracked down on this firm is a mystery. The passage and implementation of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164) that narrowed the liability shield under 47 USC 230 has forced the company to remove content, a significant change from its indifference before the statutory change in law. Kristof suggests some easy, common sense changes Mindgeek could implement to combat the presence of this illegal material, but it seems like the company will do enough to say it is acting without seriously reforming its platform. Why would it? There is too much money to be made. Additionally, those fighting against this sort of material have been pressuring payment platforms to stop doing business with Mindgeek. PayPal has forsworn any interaction, and due to pressure Visa and Mastercard are “reviewing” their relationship with Mindgeek and Pornhub. In a statement to a different news outlet, Pornhub claimed it is “unequivocally committed to combating child sexual abuse material (CSAM), and has instituted a comprehensive, industry-leading trust and safety policy to identify and eradicate illegal material from our community.” The company further claimed “[a]ny assertion that we allow CSAM is irresponsible and flagrantly untrue….[w]e have zero tolerance for CSAM.”
  • “Amazon and Apple Are Powering a Shift Away From Intel’s Chips” By Don Clark — The New York Times. Two tech giants have chosen new faster, cheaper chips, signaling a possible industry shift away from Intel, the firm that has been a significant player for decades. Intel will not go quietly, of course, and a key variable is whether must-have software and applications are rewritten to accommodate the new chips from a British firm, Arm.

Other Developments

  • The Government Accountability Office (GAO) and the National Academy of Medicine (NAM) have released a joint report on artificial intelligence in healthcare, consisting of GAO’s Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care and NAM’s Special Publication: Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. GAO’s report “discusses three topics: (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges to the development and adoption of these tools, and (3) policy options to maximize benefits and mitigate challenges to the use of AI tools to augment patient care.” NAM’s “paper aims to provide an analysis of: 1) current technologies and future applications of AI in HSOHC, 2) the logistical steps and challenges involved in integrating AI-HSOHC applications into existing provider workflows, and 3) the ethical and legal considerations of such AI tools, followed by a brief proposal of potential key initiatives to guide the development and adoption of AI in health settings outside the hospital and clinic (HSOHC).”
    • The GAO “identified five categories of clinical applications where AI tools have shown promise to augment patient care: predicting health trajectories, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management.” The GAO “also identified three categories of administrative applications where AI tools have shown promise to reduce provider burden and increase the efficiency of patient care: recording digital clinical notes, optimizing operational processes, and automating laborious tasks.” The GAO stated:
      • This technology assessment also identifies challenges that hinder the adoption and impact of AI tools to augment patient care, according to stakeholders, experts, and the literature. Difficulties accessing sufficient high-quality data may hamper innovation in this space. Further, some available data may be biased, which can reduce the effectiveness and accuracy of the tools for some people. Addressing bias can be difficult because the electronic health data do not currently represent the general population. It can also be challenging to scale tools up to multiple locations and integrate them into new settings because of differences in institutions and the patient populations they serve. The limited transparency of AI tools used in health care can make it difficult for providers, regulators, and others to determine whether an AI tool is safe and effective. A greater dispersion of data across providers and institutions can make securing patient data difficult. Finally, one expert described how existing case law does not specifically address AI tools, which can make providers and patients reticent to adopt them. Some of these challenges are similar to those identified previously by GAO in its first publication in this series, such as the lack of high-quality, structured data, and others are more specific to patient care, such as liability concerns.
    • The GAO “described six policy options:”
      • Collaboration. Policymakers could encourage interdisciplinary collaboration between developers and health care providers. This could result in AI tools that are easier to implement and use within an existing workflow.
      • Data Access. Policymakers could develop or expand high-quality data access mechanisms. This could help developers address bias concerns by ensuring data are representative, transparent, and equitable.
      • Best Practices. Policymakers could encourage relevant stakeholders and experts to establish best practices (such as standards) for development, implementation, and use of AI technologies. This could help with deployment and scalability of AI tools by providing guidance on data, interoperability, bias, and formatting issues.
      • Interdisciplinary Education. Policymakers could create opportunities for more workers to develop interdisciplinary skills. This could allow providers to use AI tools more effectively, and could be accomplished through a variety of methods, including changing medical curricula or grants.
      • Oversight Clarity. Policymakers could collaborate with relevant stakeholders to clarify appropriate oversight mechanisms. Predictable oversight could help ensure that AI tools remain safe and effective after deployment and throughout their lifecycle.
      • Status Quo. Policymakers could allow current efforts to proceed without intervention.
    • NAM claimed
      • Numerous AI-powered health applications designed for personal use have been shown to improve patient outcomes, building predictions based on large volumes of granular, real-time, and individualized behavioral and medical data. For instance, some forms of telehealth, a technology that has been critical during the COVID-19 pandemic, benefit considerably from AI software focused on natural language processing, which enables efficient triaging of patients based on urgency and type of illness. Beyond patient-provider communication, AI algorithms relevant to diabetic and cardiac care have demonstrated remarkable efficacy in helping patients manage their blood glucose levels in their day-to-day lives and in detecting cases of atrial fibrillation. AI tools that monitor and synthesize longitudinal patient behaviors are also particularly useful in psychiatric care, where the exact timing of interventions is often critical. For example, smartphone-embedded sensors that track location and proximity of individuals can alert clinicians of possible substance use, prompting immediate intervention. On the population health level, these individual indicators of activity and health can be combined with environmental- and system-level data to generate predictive insight into local and global health trends. The most salient example of this may be the earliest warnings of the COVID-19 outbreak, issued in December 2019 by two private AI technology firms.
      • Successful implementation and widespread adoption of AI applications in HSOHC requires careful consideration of several key issues related to personal data, algorithm development, and health care insurance and payment. Chief among them are data interoperability, standardization, privacy, ameliorating systemic biases in algorithms, reimbursement of AI-assisted services, quality improvement, and integration of AI tools into provider workflows. Overcoming these challenges and optimizing the impact of AI tools on clinical outcomes will involve engaging diverse stakeholders, deliberately designing AI tools and interfaces, rigorously evaluating clinical and economic utility, and diffusing and scaling algorithms across different health settings. In addition to these potential logistical and technical hurdles, it is imperative to consider the legal and ethical issues surrounding AI, particularly as it relates to the fair and humanistic deployment of AI applications in HSOHC. Important legal considerations include the appropriate designation of accountability and liability of medical errors resulting from AI-assisted decisions for ensuring the safety of patients and consumers. Key ethical challenges include upholding the privacy of patients and their data—particularly with regard to non-HIPAA covered entities involved in the development of algorithms—building unbiased AI algorithms based on high-quality data from representative populations, and ensuring equitable access to AI technologies across diverse communities.
  • The National Institute of Standards and Technology (NIST) published a “new study of face recognition technology created after the onset of the COVID-19 pandemic [that] shows that some software developers have made demonstrable progress at recognizing masked faces.” In Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face Recognition Accuracy with Face Masks Using Post-COVID-19 Algorithms (NISTIR 8331), NIST stated the “report augments its predecessor with results for more recent algorithms provided to NIST after mid-March 2020.” NIST said that “[w]hile we do not have information on whether or not a particular algorithm was designed with face coverings in mind, the results show evidence that a number of developers have adapted their algorithms to support face recognition on subjects potentially wearing face masks.” NIST stated that
    • The following results represent observations on algorithms provided to NIST both before and after the COVID-19 pandemic to date. We do not have information on whether or not a particular algorithm was designed with face coverings in mind. The results documented capture a snapshot of algorithms submitted to the FRVT 1:1 in face recognition on subjects potentially wearing face masks.
      • False rejection performance: All algorithms submitted after the pandemic continue to give increased false non-match rates (FNMR) when the probes are masked. While a few pre-pandemic algorithms still remain within the most accurate on masked photos, some developers have submitted algorithms after the pandemic showing significantly improved accuracy and are now among the most accurate in our test.
      • Evolution of algorithms on face masks: We observe that a number of algorithms submitted since mid-March 2020 show notable reductions in error rates with face masks over their pre-pandemic predecessors. When comparing error rates for unmasked versus masked faces, the median FNMR across algorithms submitted since mid-March 2020 has been reduced by around 25% from the median pre-pandemic results. The figure below presents examples of developer evolution on both masked and unmasked datasets. For some developers, false rejection rates in their algorithms submitted since mid-March 2020 decreased by as much as a factor of 10 over their pre-pandemic algorithms, which is evidence that some providers are adapting their algorithms to handle facemasks. However, in the best cases, when comparing results for unmasked images to masked images, false rejection rates have increased from 0.3%-0.5% (unmasked) to 2.4%-5% (masked).
      • False acceptance performance: As most systems are configured with a fixed threshold, it is necessary to report both false negative and false positive rates for each group at that threshold. When comparing a masked probe to an unmasked enrollment photo, in most cases, false match rates (FMR) are reduced by masks. The effect is generally modest with reductions in FMR usually being smaller than a factor of two. This property is valuable in that masked probes do not impart adverse false match security consequences for verification.
      • Mask-agnostic face recognition: All 1:1 verification algorithms submitted to the FRVT test since the start of the pandemic are evaluated on both masked and unmasked datasets. The test is designed this way to mimic operational reality: some images will have masks, some will not (especially enrollment samples from a database or ID card). And to the extent that the use of protective masks will exist for some time, our test will continue to evaluate algorithmic capability on verifying all combinations of masked and unmasked faces.
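NIST's fixed-threshold framing above, reporting the false non-match rate (FNMR) on genuine comparisons and the false match rate (FMR) on impostor comparisons at a single operating point, can be illustrated with a small sketch. The similarity scores and the 0.5 threshold below are invented for illustration; FRVT's actual datasets, score scales, and thresholds differ.

```python
# Hypothetical similarity scores; FRVT's real data and thresholds differ.
def fnmr(genuine_scores, threshold):
    """False non-match rate: fraction of genuine pairs scoring below the threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def fmr(impostor_scores, threshold):
    """False match rate: fraction of impostor pairs scoring at or above the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Unmasked probes: genuine comparisons mostly score high.
genuine_unmasked = [0.92, 0.88, 0.95, 0.90, 0.35, 0.91, 0.89, 0.93, 0.87, 0.94]
# Masked probes: occlusion depresses genuine scores more often.
genuine_masked = [0.80, 0.55, 0.85, 0.40, 0.30, 0.82, 0.78, 0.45, 0.81, 0.83]
impostors = [0.10, 0.22, 0.05, 0.31, 0.18, 0.27, 0.12, 0.09, 0.25, 0.15]

t = 0.5  # fixed operating threshold, as in deployed verification systems
print(fnmr(genuine_unmasked, t))  # 0.1 -> one genuine pair falsely rejected
print(fnmr(genuine_masked, t))    # 0.3 -> masks raise false rejections
print(fmr(impostors, t))          # 0.0 -> masks do not hurt false-match security here
```

This mirrors NIST's observation that lower masked-probe scores push FNMR up while, if anything, pulling FMR down at the same threshold.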
  • The government in London has issued a progress report on its current cybersecurity strategy that has another year to run. The Paymaster General assessed how well the United Kingdom (UK) has implemented the National Cyber Security Strategy 2016 to 2021 and pointed to goals yet to be achieved. This assessment comes in the shadow of the pending exit of the UK from the European Union (EU) and Prime Minister Boris Johnson’s plans to increase the UK’s role in select defense issues, including cyber operations. The Paymaster General stated:
    • The global landscape has changed significantly since the publication of the National Cyber Security Strategy Progress Report in May 2019. We have seen unprecedented levels of disruption to our way of life that few would have predicted. The COVID-19 pandemic has increased our reliance on digital technologies – for our personal communications with friends and family and our ability to work remotely, as well as for businesses and government to continue to operate effectively, including in support of the national response.
    • These new ways of living and working highlight the importance of cyber security, which is also underlined by wider trends. An ever greater reliance on digital networks and systems, more rapid advances in new technologies, a wider range of threats, and increasing international competition on underlying technologies and standards in cyberspace, emphasise the need for good cyber security practices for individuals, businesses and government.
    • Although the scale and international nature of these changes present challenges, there are also opportunities. With the UK’s departure from the European Union in January 2020, we can define and strengthen Britain’s place in the world as a global leader in cyber security, as an independent, sovereign nation.
    • The sustained, strategic investment and whole of society approach delivered so far through the National Cyber Security Strategy has ensured we are well placed to respond to this changing environment and seize new opportunities.
    • The Paymaster General asserted:
      • [The] report has highlighted growing risks, some accelerated by the COVID-19 pandemic, and longer-term trends that will shape the environment over the next decade:
      • Ever greater reliance on digital networks and systems as daily life moves online, bringing huge benefits but also creating new systemic and individual risks.
      • Rapid technological change and greater global competition, challenging our ability to shape the technologies that will underpin our future security and prosperity.
      • A wider range of adversaries as criminals gain easier access to commoditised attack capabilities and cyber techniques form a growing part of states’ toolsets.
      • Competing visions for the future of the internet and the risk of fragmentation, making consensus on norms and ethics in cyberspace harder to achieve.
      • In February 2020 the Prime Minister announced the Integrated Review of Security, Defence, Development and Foreign Policy. This will define the government’s ambition for the UK’s role in the world and the long-term strategic aims of our national security and foreign policy. It will set out the way in which the UK will be a problem-solving and burden-sharing nation, and a strong direction for recovery from COVID-19, at home and overseas.
      • This will help to shape our national approach and priorities on cyber security beyond 2021. Cyber security is a key element of our international, defence and security posture, as well as a driving force for our economic prosperity.
  • The University of Toronto’s Citizen Lab published a report on an Israeli surveillance firm that uses “[o]ne of the widest-used—but least appreciated” means of surveilling people (i.e., “leveraging of weaknesses in the global mobile telecommunications infrastructure to monitor and intercept phone calls and traffic”). Citizen Lab explained that an affiliate of the NSO Group, “Circles is known for selling systems to exploit Signaling System 7 (SS7) vulnerabilities, and claims to sell this technology exclusively to nation-states.” Citizen Lab noted that “[u]nlike NSO Group’s Pegasus spyware, the SS7 mechanism by which Circles’ product reportedly operates does not have an obvious signature on a target’s phone, such as the telltale targeting SMS bearing a malicious link that is sometimes present on a phone targeted with Pegasus.” Citizen Lab found that
    • Circles is a surveillance firm that reportedly exploits weaknesses in the global mobile phone system to snoop on calls, texts, and the location of phones around the globe. Circles is affiliated with NSO Group, which develops the oft-abused Pegasus spyware.
    • Circles, whose products work without hacking the phone itself, says they sell only to nation-states. According to leaked documents, Circles customers can purchase a system that they connect to their local telecommunications companies’ infrastructure, or can use a separate system called the “Circles Cloud,” which interconnects with telecommunications companies around the world.
    • According to the U.S. Department of Homeland Security, all U.S. wireless networks are vulnerable to the types of weaknesses reportedly exploited by Circles. A majority of networks around the globe are similarly vulnerable.
    • Using Internet scanning, we found a unique signature associated with the hostnames of Check Point firewalls used in Circles deployments. This scanning enabled us to identify Circles deployments in at least 25 countries.
    • We determine that the governments of the following countries are likely Circles customers: Australia, Belgium, Botswana, Chile, Denmark, Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala, Honduras, Indonesia, Israel, Kenya, Malaysia, Mexico, Morocco, Nigeria, Peru, Serbia, Thailand, the United Arab Emirates (UAE), Vietnam, Zambia, and Zimbabwe.
    • Some of the specific government branches we identify with varying degrees of confidence as being Circles customers have a history of leveraging digital technology for human rights abuses. In a few specific cases, we were able to attribute the deployment to a particular customer, such as the Security Operations Command (ISOC) of the Royal Thai Army, which has allegedly tortured detainees.
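Citizen Lab's attribution technique, scanning the internet and matching a distinctive hostname signature associated with the Check Point firewalls used in Circles deployments, can be sketched roughly as below. The hostname pattern, domain, and scan records here are invented placeholders, not the actual signature described in the report.

```python
import re

# Hypothetical scan records: (ip, hostname extracted from a firewall's TLS
# certificate). A real scan would parse certificates from banner-grab data.
scan_results = [
    ("198.51.100.7", "fw1.example-telecom.net"),
    ("203.0.113.21", "gateway.client-alpha.tracksystem.example"),
    ("192.0.2.44", "mail.university.edu"),
    ("203.0.113.99", "node2.client-beta.tracksystem.example"),
]

# Invented signature: deployments share a common parent domain, with the
# customer encoded in a subdomain label.
SIGNATURE = re.compile(r"^(?P<host>[\w-]+)\.(?P<customer>[\w-]+)\.tracksystem\.example$")

def find_deployments(results):
    """Return (ip, customer) pairs whose hostnames match the signature."""
    hits = []
    for ip, hostname in results:
        m = SIGNATURE.match(hostname)
        if m:
            hits.append((ip, m.group("customer")))
    return hits

print(find_deployments(scan_results))
# [('203.0.113.21', 'client-alpha'), ('203.0.113.99', 'client-beta')]
```

The point of the sketch is the general method: a stable, self-identifying naming convention in exposed infrastructure lets researchers both count deployments and, where the convention encodes a customer name, attribute them.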
  • Senators Ron Wyden (D-OR), Elizabeth Warren (D-MA), Edward J. Markey (D-MA), and Brian Schatz (D-HI) “announced that the Department of Homeland Security (DHS) will launch an inspector general investigation into Customs and Border Protection’s (CBP) warrantless tracking of phones in the United States following an inquiry from the senators earlier this year” per their press release.
    • The Senators added:
      • As revealed by public contracts, CBP has paid a government contractor named Venntel nearly half a million dollars for access to a commercial database containing location data mined from applications on millions of Americans’ mobile phones. CBP officials also confirmed the agency’s warrantless tracking of phones in the United States using Venntel’s product in a September 16, 2020 call with Senate staff.
      • In 2018, the Supreme Court held in Carpenter v. United States that the collection of significant quantities of historical location data from Americans’ cell phones is a search under the Fourth Amendment and therefore requires a warrant.
      • In September 2020, Wyden and Warren successfully pressed for an inspector general investigation into the Internal Revenue Service’s use of Venntel’s commercial location tracking service without a court order.
    • In a letter, the DHS Office of the Inspector General (OIG) explained:
      • We have reviewed your request and plan to initiate an audit that we believe will address your concerns. The objective of our audit is to determine if the Department of Homeland Security (DHS) and it [sic] components have developed, updated, and adhered to policies related to cell-phone surveillance devices. In addition, you may be interested in our audit to review DHS’ use and protection of open source intelligence. Open source intelligence, while different from cell phone surveillance, includes the Department’s use of information provided by the public via cellular devices, such as social media status updates, geo-tagged photos, and specific location check-ins.
    • In an October letter, these Senators plus Senator Sherrod Brown (D-OH) argued:
      • CBP is not above the law and it should not be able to buy its way around the Fourth Amendment. Accordingly, we urge you to investigate CBP’s warrantless use of commercial databases containing Americans’ information, including but not limited to Venntel’s location database. We urge you to examine what legal analysis, if any, CBP’s lawyers performed before the agency started to use this surveillance tool. We also request that you determine how CBP was able to begin operational use of Venntel’s location database without the Department of Homeland Security Privacy Office first publishing a Privacy Impact Assessment.
  • The American Civil Liberties Union (ACLU) has filed a lawsuit in a federal court in New York City, seeking an order to compel the United States (U.S.) Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE) “to release records about their purchases of cell phone location data for immigration enforcement and other purposes.” The ACLU made these information requests after numerous media accounts showing that these and other U.S. agencies were buying location data and other sensitive information in ways intended to evade the bar in the Fourth Amendment against unreasonable searches.
    • In its press release, the ACLU asserted:
      • In February, The Wall Street Journal reported that this sensitive location data isn’t just for sale to commercial entities, but is also being purchased by U.S. government agencies, including by U.S. Immigrations and Customs Enforcement to locate and arrest immigrants. The Journal identified one company, Venntel, that was selling access to a massive database to the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and ICE. Subsequent reporting has identified other companies selling access to similar databases to DHS and other agencies, including the U.S. military.
      • These practices raise serious concerns that federal immigration authorities are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. There’s even more reason for alarm when those agencies evade requests for information — including from U.S. senators — about such practices. That’s why today we asked a federal court to intervene and order DHS, CBP, and ICE to release information about their purchase and use of precise cell phone location information. Transparency is the first step to accountability.
    • The ACLU explained in the suit:
      • Multiple news sources have confirmed these agencies’ purchase of access to databases containing precise location information for millions of people—information gathered by applications (apps) running on their smartphones. The agencies’ purchases raise serious concerns that they are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. Yet, more than nine months after the ACLU submitted its FOIA request (“the Request”), these agencies have produced no responsive records. The information sought is of immense public significance, not only to shine a light on the government’s use of powerful location-tracking data in the immigration context, but also to assess whether the government’s purchase of this sensitive data complies with constitutional and legal limitations and is subject to appropriate oversight and control.
  • Facebook’s new Oversight Board announced “the first cases it will be deliberating and the opening of the public comment process” and “the appointment of five new trustees.” The cases were almost all referred by Facebook users, and the new board is asking for comments on the right way to manage what may be objectionable content. The Oversight Board explained it is “prioritizing cases that have the potential to affect lots of users around the world, are of critical importance to public discourse or raise important questions about Facebook’s policies.”
    • The new trustees are:
      • Kristina Arriaga is a globally recognized advocate for freedom of expression, with a focus on freedom of religion and belief. Kristina is president of the advisory firm Intrinsic.
      • Cherine Chalaby is an expert on internet governance, international finance and technology, with extensive board experience. As Chairman of ICANN, he led development of the organization’s five-year strategic plan for 2021 to 2025.
      • Wanda Felton has over 30 years of experience in the financial services industry, including serving as Vice Chair of the Board and First Vice President of the Export-Import Bank of the United States.
      • Kate O’Regan is a former judge of the Constitutional Court of South Africa and commissioner of the Khayelitsha Commission. She is the inaugural director of the Bonavero Institute of Human Rights at the University of Oxford.
      • Robert Post is an American legal scholar and Professor of Law at Yale Law School, where he formerly served as Dean. He is a leading scholar of the First Amendment and freedom of speech.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Final NDAA Agreement, Part II

There are AI, 5G, and supply chain provisions in the national security policy bill the Armed Services Committee have agreed upon.

So, it appears I failed to include all the technology goodies to be found in the final FY 2021 National Defense Authorization Act (NDAA). And so, I will cover the provisions I missed yesterday in the conference report to accompany the “William M. “Mac” Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395). For example, there are artificial intelligence (AI), 5G, and supply chain provisions.

Notably, the final bill includes the House Science, Space, and Technology Committee’s “National Artificial Intelligence Initiative Act of 2020” (H.R.6216). In the Joint Explanatory Statement, the conferees asserted:

The conferees believe that artificial intelligence systems have the potential to transform every sector of the United States economy, boosting productivity, enhancing scientific research, and increasing U.S. competitiveness and that the United States government should use this Initiative to enable the benefits of trustworthy artificial intelligence while preventing the creation and use of artificial intelligence systems that behave in ways that cause harm. The conferees further believe that such harmful artificial intelligence systems may include high-risk systems that lack sufficient robustness to prevent adversarial attacks; high-risk systems that harm the privacy or security of users or the general public; artificial general intelligence systems that become self-aware or uncontrollable; and artificial intelligence systems that unlawfully discriminate against protected classes of persons, including on the basis of sex, race, age, disability, color, creed, national origin, or religion. Finally, the conferees believe that the United States must take a whole of government approach to leadership in trustworthy artificial intelligence, including through coordination between the Department of Defense, the Intelligence Community, and the civilian agencies.

H.R.6216 directs the President to establish the National Artificial Intelligence Initiative that would:

  • Ensure the U.S. continues to lead in AI research and development (R&D)
  • Lead efforts throughout the world to develop and use “trustworthy AI systems” in both the public and private sectors
  • Prepare to assist U.S. workers for the coming integration and use of AI throughout the U.S., and
  • Coordinate AI R&D development and demonstration activities across the federal government, including national security agencies.

The President would have a variety of means at his or her discretion in effectuating those goals, including existing authority to ask Congress for funding and to use Executive Office agencies to manage the authority and funding Congress provides.

Big picture, H.R. 6216 would require better coordination of federal AI initiatives, research, and funding, and more involvement in the development of voluntary, consensus-based standards for AI. Much of this would happen through the standing up of a new “National Artificial Intelligence Initiative Office” by the Office of Science and Technology Policy (OSTP) in the White House. This new entity would be the locus of AI activities and programs in the United States’ (U.S.) government with the ultimate goal of ensuring the nation is the world’s foremost developer and user of the new technology.

Moreover, OSTP would “acting through the National Science and Technology Council…establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative.” This body would “provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities, development of voluntary consensus standards and guidelines for research, development, testing, and adoption of ethically developed, safe, and trustworthy artificial intelligence systems, and education and training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative.” The committee would need to “develop a strategic plan for AI” within two years and update it every three years thereafter. Moreover, the committee would need to “propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget (OMB) that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative.” However, OMB would be under no obligation to take notice of this proposal save for pressure from AI stakeholders in Congress or AI champions in any given Administration. The Secretary of Commerce would create a ‘‘National Artificial Intelligence Advisory Committee” to advise the President and National Artificial Intelligence Initiative Office on a range of AI policy matters. In the bill as added to the House’s FY 2021 NDAA, it was to have been the Secretary of Energy.

Federal agencies would be permitted to award funds to new Artificial Intelligence Research Institutes to pioneer research in any number of AI fields or considerations. The bill does not authorize any set amount of money for this program and instead leaves any funding decisions to the Appropriations Committees. The National Institute of Standards and Technology (NIST) must “support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems,” among other duties. NIST “shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for the trustworthiness of artificial intelligence systems.” NIST would also “develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies.”

The National Science Foundation (NSF) would need to “fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible non-profit organizations (or consortia thereof).” The Department of Energy must “carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department.” This department would also be tasked with advancing “expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations.”

According to a fact sheet issued by the House Science, Space, and Technology Committee, “[t]he legislation will:”

  • Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
  • Create an advisory committee to better inform the Coordination Committee’s strategic plan, track the state of the science around artificial intelligence, and ensure the Initiative is meeting its goals.
  • Create a network of AI institutes, coordinated through the National Science Foundation, that any Federal department or agency could fund to create partnerships between academia and the public and private sectors to accelerate AI research focused on an economic sector, social sector, or on a cross-cutting AI challenge.
  • Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST) and require NIST to create a framework for managing risks associated with AI systems and best practices for sharing data to advance trustworthy AI systems.
  • Support research at the National Science Foundation (NSF) across a wide variety of AI related research areas to both improve AI systems and use those systems to advance other areas of science. This section requires NSF to include an obligation for an ethics statement for all research proposals to ensure researchers are considering, and as appropriate, mitigating potential societal risks in carrying out their research.
  • Support education and workforce development in AI and related fields, including through scholarships and traineeships at NSF.
  • Support AI research and development efforts at the Department of Energy (DOE), utilize DOE computing infrastructure for AI challenges, promote technology transfer, data sharing, and coordination with other Federal agencies, and require an ethics statement for DOE funded research as required at NSF.
  • Require studies to better understand workforce impacts and opportunities created by AI, and identify the computing resources necessary to ensure the United States remains competitive in AI.

A provision would expand the scope of the biannual reports the DOD must submit to Congress on the Joint Artificial Intelligence Center (JAIC) to include the Pentagon’s efforts to develop or contribute to efforts to institute AI standards and more detailed information on uniformed DOD members who serve at the JAIC. Other language would revamp how the Under Secretary of Defense for Research and Engineering shall manage efforts and procurements between the DOD and the private sector on AI and other technology with cutting edge national security applications. The new emphasis of the program would be to buy mature AI to support DOD missions, allowing DOD components to directly use AI and machine learning to address operational problems, speeding up the development, testing, and deployment of AI technology and capabilities, and overseeing and managing any friction between DOD agencies and components over AI development and use. This section also spells out which DOD officials should be involved with this program and how the JAIC fits into the picture. This language and other provisions suggest the DOD may have trouble in coordinating AI activities and managing infighting, at least in the eyes of the Armed Services Committees.

Moreover, the JAIC would be given a new Board of Advisors to advise the Secretary of Defense and JAIC Director on a range of AI issues. However, as the Secretary shall appoint the members of the board, all of whom must be from outside the Pentagon, this organ would seem to be a means of the Office of the Secretary asserting greater control over the JAIC.

And yet, the Secretary is also directed to delegate acquisition authority to the JAIC, permitting it to operate with the same independence as a DOD agency. The JAIC Director will need to appoint an acquisition executive to manage acquisition and policy inside and outside the DOD. $75 million per year would be authorized for these activities, and the Secretary must draft and submit an implementation plan to Congress and conduct a demonstration before proceeding.

The DOD must identify five use cases in which AI-enabled systems have improved the Department’s handling of management functions in implementing the National Defense Strategy and then create prototypes and technology pilots that utilize commercially available AI capabilities to bolster those use cases.

Within six months of enactment, the DOD must determine whether it currently has the resources, capability, and know-how to ensure that any AI bought has been ethically and responsibly developed. Additionally, the DOD must assess how it can instill ethical AI standards in acquisitions and supply chains.

The Secretary is provided the authority to convene a steering committee on emerging technology and national security threats composed of senior DOD officials to decide how the Department can best adapt to and buy new technology to ensure U.S. military superiority. This body would also investigate the new technology used by adversaries and how to address and counter any threats. For this steering committee, emerging technology is defined as:

Technology determined to be in an emerging phase of development by the Secretary, including quantum information science and technology, data analytics, artificial intelligence, autonomous technology, advanced materials, software, high performance computing, robotics, directed energy, hypersonics, biotechnology, medical technologies, and such other technology as may be identified by the Secretary.

Not surprisingly, the FY 2021 NDAA has provisions on 5G. Most notably, the Secretary of Defense must assess and mitigate any risks presented by “at-risk” 5G or 6G systems in other nations before a major weapons system or a battalion, squadron, or naval combatant can be based there. The Secretary must take into account any steps the nation is taking to address risk, those steps the U.S. is taking, any agreements in place to mitigate risks, and other steps. This provision names Huawei and ZTE as “at-risk vendors.” This language may be another means by which the U.S. can persuade other nations not to buy and install technology from these People’s Republic of China (PRC) companies.

The Under Secretary of Defense for Research and Engineering and a cross-functional team would need to develop a plan to transition the DOD to 5G throughout the Department and its components. Each military department inside the DOD would get to manage its own 5G acquisition with the caveat that the Secretary would need to establish a telecommunications security program to address 5G security risks in the DOD. The Secretary would also be tasked with conducting a demonstration project to “evaluate the maturity, performance, and cost of covered technologies to provide additional options for providers of fifth-generation wireless network services” for Open RAN (aka oRAN) and “one or more massive multiple-input, multiple-output radio arrays, provided by one or more companies based in the United States, that have the potential to compete favorably with radios produced by foreign companies in terms of cost, performance, and efficiency.”

The service departments would need to submit reports to the Secretary on how they are assessing, mitigating, and reporting to the DOD on the following risks to acquisition programs:

  • Technical risks in engineering, software, manufacturing, and testing.
  • Integration and interoperability risks, including complications related to systems working across multiple domains while using machine learning and artificial intelligence capabilities to continuously change and optimize system performance.
  • Operations and sustainment risks, including as mitigated by appropriate sustainment planning earlier in the lifecycle of a program, access to technical data, and intellectual property rights.
  • Workforce and training risks, including consideration of the role of contractors as part of the total workforce.
  • Supply chain risks, including cybersecurity, foreign control and ownership of key elements of supply chains, and the consequences that a fragile and weakening defense industrial base, combined with barriers to industrial cooperation with allies and partners, pose for delivering systems and technologies in a trusted and assured manner.

Moreover, “[t]he Under Secretary of Defense for Acquisition and Sustainment, in coordination with the Chief Information Officer of the Department of Defense, shall develop requirements for appropriate software security criteria to be included in solicitations for commercial and developmental solutions and the evaluation of bids submitted in response to such solicitations, including a delineation of what processes were or will be used for a secure software development life cycle.”

The Armed Services Committees are directing the Secretary to follow up a report submitted to the President per Executive Order 13806 on strengthening Defense Industrial Base (DIB) manufacturing and supply chain resiliency. The DOD must submit “additional recommendations regarding United States industrial policies….[that] shall consist of specific executive actions, programmatic changes, regulatory changes, and legislative proposals and changes, as appropriate.”

The DOD would also need to submit an annex to an annual report to Congress on “strategic and critical materials, including the gaps and vulnerabilities in supply chains of such materials.”

There is language that would change how the DOD manages the production of microelectronics and related supply chain risk. The Pentagon would also need to investigate how to commercialize its intellectual property for microelectronic R&D. The Department of Commerce would need to “assess the capabilities of the United States industrial base to support the national defense in light of the global nature of the supply chain and significant interdependencies between the United States industrial base and the industrial bases of foreign countries with respect to the manufacture, design, and end use of microelectronics.”

There is a revision of the Secretary of Energy’s authority over supply chain risk administered by the National Nuclear Security Administration (NNSA) that would provide for a “special exclusion action” that would bar the procurement of risky technology for up to two years.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (7 December)

Further Reading

  • “Facebook steps up campaign to ban false information about coronavirus vaccines” By Elizabeth Dwoskin — The Washington Post. In its latest step to find and remove lies, misinformation, and disinformation, the social media giant is now committing to removing and blocking untrue material about COVID-19 vaccines, especially from the anti-vaccine community. Will the next step be to take on anti-vaccination proponents generally?
  • “Comcast’s 1.2 TB data cap seems like a ton of data—until you factor in remote work” By Rob Pegoraro — Fast Company. Despite many people and children working and learning from home, Comcast is reimposing a 1.2 terabyte limit on data for homes. Sounds like quite a lot until you factor in video meetings, streaming, etc. So far, other providers have not set a cap.
  • “Google’s star AI ethics researcher, one of a few Black women in the field, says she was fired for a critical email” By Drew Harwell and Nitasha Tiku — The Washington Post. Timnit Gebru, a top-flight artificial intelligence (AI) computer scientist, was fired after questioning Google’s review of a paper, likely critical of the company’s AI projects, that she wanted to present at an AI conference. Google claims she resigned, but Gebru says she was fired. She has long been an advocate for women and minorities in tech and AI, and her ouster will likely only increase scrutiny of and questions about Google’s commitment to diversity and an ethical approach to the development and deployment of AI. It will also probably deepen employee disenchantment with the company, following protests over Google’s involvement with the United States Department of Defense’s Project Maven and its hiring of former United States Department of Homeland Security chief of staff Miles Taylor, who was involved with the policies that resulted in caging children and separating families at the southern border of the United States.
  • “Humans Can Help Clean Up Facebook and Twitter” By Greg Bensinger — The New York Times. In this opinion piece, the argument is made that if social media platforms redeployed their human monitors to the accounts that violate terms of service most frequently (e.g., President Donald Trump) and more aggressively labeled and removed untrue or inflammatory content, they would have a greater impact on lies, misinformation, and disinformation.
  • “Showdown looms over digital services tax” By Ashley Gold — Axios. Because the Organization for Economic Cooperation and Development (OECD) has not reached a deal on digital services taxes, a number of United States (U.S.) allies could move forward with taxes on U.S. multinationals like Amazon, Google, and Apple. The Trump Administration has variously taken an adversarial position, threatening to retaliate against countries like France, which has enacted a tax that has not been collected during the OECD negotiations. The U.S. also withdrew from talks. It is probable the Biden Administration will be more willing to work in a multilateral fashion and may strike a deal on an issue that is not going away, as the United Kingdom, Italy, and Canada also have plans for a digital tax.
  • “Trump’s threat to veto defense bill over social-media protections is heading to a showdown with Congress” By Karoun Demirjian and Tony Romm — The Washington Post. I suppose I should mention the President’s demand that the FY 2021 National Defense Authorization Act (NDAA) contain a repeal of 47 U.S.C. 230 (Section 230 of the Communications Act), which came at the eleventh hour and fifty-ninth minute of negotiations on a final version of the bill. Via Twitter, Donald Trump threatened to veto the bill, which has been passed annually for decades. Republicans were not having it, however, even if they agreed with Trump’s desire to remove liability protection for technology companies. And yet, if Trump continues to insist on a repeal, Republicans may find themselves in a bind, and the bill could conceivably get pulled until President-elect Joe Biden is sworn in. On the other hand, Trump’s veto threats over renaming military bases currently bearing the names of Confederate figures have not been renewed even though the final version of the bill contains language instituting a process to do just that.

Other Developments

  • The Senate Judiciary Committee held over its most recent bill to narrow 47 U.S.C. 230 (Section 230 of the Communications Act), which provides liability protection for technology companies for third-party material posted on their platforms and any decisions to edit, alter, or remove such content. The committee opted to hold the “Online Content Policy Modernization Act” (S.4632), which may mean the bill’s chances of making it to the Senate floor are low. What’s more, even if the Senate passes Section 230 legislation, it is not clear there will be sufficient agreement with Democrats in the House to get a final bill to the President before the end of this Congress. On 1 October, the committee also decided to hold over the bill to try to reconcile the fifteen amendments submitted for consideration. The Committee could soon meet again to formally mark up and report out this legislation.
    • At the earlier hearing, Chair Lindsey Graham (R-SC) submitted an amendment revising the bill’s reforms to Section 230 that incorporate some of the below amendments but includes new language. For example, the bill includes a definition of “good faith,” a term not currently defined in Section 230. This term would be construed as a platform taking down or restricting content only according to its publicly available terms of service, not as a pretext, and equally to all similarly situated content. Moreover, good faith would require alerting the user and giving him or her an opportunity to respond subject to certain exceptions. The amendment also makes clear that certain existing means of suing are still available to users (e.g. suing claiming a breach of contract.)
    • Senator Mike Lee (R-UT) offered a host of amendments:
      • EHF20913 would remove “user[s]” from the reduced liability shield that online platforms would receive under the bill. Consequently, users would still not be legally liable for the content posted by another user.
      • EHF20914 would revise the language regarding the type of content platforms could take down with legal protection to make clear it would not just be “unlawful” content but rather content “in violation of a duly enacted law of the United States,” possibly meaning federal laws and not state laws. Or, more likely, the intent would be to foreclose the possibility that a platform would say it is acting in concert with a foreign law and still assert immunity.
      • EHF20920 would add language making clear that taking down material that violates terms of service or use according to an objectively reasonable belief would be shielded from liability.
      • OLL20928 would expand legal protection to platforms for removing or restricting spam.
      • OLL20929 would bar the Federal Communications Commission (FCC) from a rulemaking on Section 230.
      • OLL20930 adds language making clear if part of the revised Section 230 is found unconstitutional, the rest of the law would still be applicable.
      • OLL20938 revises the definition of an “information content provider,” the term of art in Section 230 that identifies a platform, to expand when platforms may be responsible for the creation or development of information and consequently liable for a lawsuit.
    • Senator Josh Hawley (R-MO) offered an amendment that would create a new right of action for people to sue large platforms for taking down their content if not done in “good faith.” The amendment limits this right only to “edge providers,” which are platforms with more than 30 million users in the U.S., 300 million users worldwide, and revenues of more than $1.5 billion. This would likely exclude all platforms except for Twitter, Facebook, Instagram, TikTok, Snapchat, and a select group of a few others.
    • Senator John Kennedy (R-LA) offered an amendment that removes all Section 230 legal immunity from platforms that collect personal data and then use an “automated function” to deliver targeted or tailored content to a user unless the user “knowingly and intentionally elect[s]” to receive such content.
  • The Massachusetts Institute of Technology’s (MIT) Work of the Future Task Force issued its final report and drew the following conclusions:
    • Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.
    • Momentous impacts of technological change are unfolding gradually.
    • Rising labor productivity has not translated into broad increases in incomes because labor market institutions and policies have fallen into disrepair.
    • Improving the quality of jobs requires innovation in labor market institutions.
    • Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.
    • Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.
    • The Task Force stated:
      • In the two-and-a-half years since the Task Force set to work, autonomous vehicles, robotics, and AI have advanced remarkably. But the world has not been turned on its head by automation, nor has the labor market. Despite massive private investment, technology deadlines have been pushed back, part of a normal evolution as breathless promises turn into pilot trials, business plans, and early deployments — the diligent, if prosaic, work of making real technologies work in real settings to meet the demands of hard-nosed customers and managers.
      • Yet, if our research did not confirm the dystopian vision of robots ushering workers off of factory floors or artificial intelligence rendering superfluous human expertise and judgment, it did uncover something equally pernicious: Amidst a technological ecosystem delivering rising productivity, and an economy generating plenty of jobs (at least until the COVID-19 crisis), we found a labor market in which the fruits are so unequally distributed, so skewed towards the top, that the majority of workers have tasted only a tiny morsel of a vast harvest.
      • As this report documents, the labor market impacts of technologies like AI and robotics are taking years to unfold. But we have no time to spare in preparing for them. If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market. This report, and the MIT Work of the Future Task Force, suggest a better alternative: building a future for work that harvests the dividends of rapidly advancing automation and ever-more powerful computers to deliver opportunity and economic security for workers. To channel the rising productivity stemming from technological innovations into broadly shared gains, we must foster institutional innovations that complement technological change.
  • The European Data Protection Supervisor (EDPS) Wojciech Wiewiorówski published his “preliminary opinion” on the European Commission’s (EC) Communication on “A European strategy for data” and the creation of a common space in the area of health, namely the European Health Data Space (EHDS). The EDPS lauded the goal of the EHDS: “the prevention, detection and cure of diseases, as well as for evidence-based decisions in order to enhance effectiveness, accessibility and sustainability of the healthcare systems.” However, Wiewiorówski articulated his concern that the EC needs to think through the applicability of the General Data Protection Regulation (GDPR), among other European Union (EU) laws, before it can legally move forward. The EDPS stated:
    • The EDPS calls for the establishment of a thought-through legal basis for the processing operations under the EHDS in line with Article 6(1) GDPR and also recalls that such processing must comply with Article 9 GDPR for the processing of special categories of data.
    • Moreover, the EDPS highlights that due to the sensitivity of the data to be processed within the EHDS, the boundaries of what constitutes a lawful processing and a compatible further processing of the data must be crystal-clear for all the stakeholders involved. Therefore, the transparency and the public availability of the information relating to the processing on the EHDS will be key to enhance public trust in the EHDS.
    • The EDPS also calls on the Commission to clarify the roles and responsibilities of the parties involved and to clearly identify the precise categories of data to be made available to the EHDS. Additionally, he calls on the Member States to establish mechanisms to assess the validity and quality of the sources of the data.
    • The EDPS underlines the importance of vesting the EHDS with a comprehensive security infrastructure, including both organisational and state-of-the-art technical security measures to protect the data fed into the EHDS. In this context, he recalls that Data Protection Impact Assessments may be a very useful tool to determine the risks of the processing operations and the mitigation measures that should be adopted.
    • The EDPS recommends paying special attention to the ethical use of data within the EHDS framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation.
    • The EDPS is convinced that the success of the EHDS will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights. The governance mechanism should regulate, at least, the entities that will be allowed to make data available to the EHDS, the EHDS users, the Member States’ national contact points/ permit authorities, and the role of DPAs within this context.
    • The EDPS is interested in policy initiatives to achieve ‘digital sovereignty’ and has a preference for data being processed by entities sharing European values, including privacy and data protection. Moreover, the EDPS calls on the Commission to ensure that the stakeholders taking part in the EHDS, and in particular, the controllers, do not transfer personal data unless data subjects whose personal data are transferred to a third country are afforded a level of protection essentially equivalent to that guaranteed within the European Union.
    • The EDPS calls on Member States to guarantee the effective implementation of the right to data portability specifically in the EHDS, together with the development of the necessary technical requirements. In this regard, he considers that a gap analysis might be required regarding the need to integrate the GDPR safeguards with other regulatory safeguards, provided e.g. by competition law or ethical guidelines.
  • The Office of Management and Budget (OMB) extended a guidance memorandum directing agencies to consolidate data centers after Congress pushed back the sunset date for the program. OMB extended OMB Memorandum M-19-19, Update to Data Center Optimization Initiative (DCOI) through 30 September 2022, which applies “to the 24 Federal agencies covered by the Chief Financial Officers (CFO) Act of 1990, which includes the Department of Defense.” The DCOI was codified in the “Federal Information Technology Acquisition Reform” (FITARA) (P.L. 113-291) and extended in 2018 until October 1, 2020. And this sunset was pushed back another two years in the FY 2020 National Defense Authorization Act (NDAA) (P.L. 116-92).
    • In March 2020, the Government Accountability Office (GAO) issued another of its periodic assessments of the DCOI, started in 2012 by the Obama Administration to shrink the federal government’s footprint of data centers, increase efficiency and security, save money, and reduce energy usage.
    • The GAO found that 23 of the 24 agencies participating in the DCOI met or planned to meet their FY 2019 goals to close 286 of the 2,727 data centers considered part of the DCOI. This latter figure deserves some discussion, for the Trump Administration changed the definition of what is a data center to exclude smaller ones (so-called non-tiered data centers). GAO asserted that “recent OMB DCOI policy changes will reduce the number of data centers covered by the policy and both OMB and agencies may lose important visibility over the security risks posed by these facilities.” Nonetheless, these agencies are projecting savings of $241.5 million when all the 286 data centers planned for closure in FY 2019 actually close. It bears note that the GAO admitted in a footnote it “did not independently validate agencies’ reported cost savings figures,” so these numbers may not be reliable.
    • In terms of how to improve the DCOI, the GAO stated that “[i]n addition to reiterating our prior open recommendations to the agencies in our review regarding their need to meet DCOI’s closure and savings goals and optimization metrics, we are making a total of eight new recommendations—four to OMB and four to three of the 24 agencies. Specifically:
      • The Director of the Office of Management and Budget should (1) require that agencies explicitly document annual data center closure goals in their DCOI strategic plans and (2) track those goals on the IT Dashboard. (Recommendation 1)
      • The Director of the Office of Management and Budget should require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI. (Recommendation 2)
      • The Director of the Office of Management and Budget should document OMB’s decisions on whether to approve individual data centers when designated by agencies as either a mission critical facility or as a facility not subject to DCOI. (Recommendation 3)
      • The Director of the Office of Management and Budget should take action to address the key performance measurement characteristics missing from the DCOI optimization metrics, as identified in this report. (Recommendation 4)
  • Australia’s Inspector-General of Intelligence and Security (IGIS) released its first report on how well the nation’s security services did in observing the law with respect to COVID app data. The IGIS “is satisfied that the relevant agencies have policies and procedures in place and are taking reasonable steps to avoid intentional collection of COVID app data.” The IGIS revealed that “[i]ncidental collection in the course of the lawful collection of other data has occurred (and is permitted by the Privacy Act); however, there is no evidence that any agency within IGIS jurisdiction has decrypted, accessed or used any COVID app data.” The IGIS is also “satisfied that the intelligence agencies within IGIS jurisdiction which have the capability to incidentally collect at least some types of COVID app data:
    • Are aware of their responsibilities under Part VIIIA of the Privacy Act and are taking active steps to minimise the risk that they may collect COVID app data.
    • Have appropriate policies and procedures in place to respond to any incidental collection of COVID app data that they become aware of.
    • Are taking steps to ensure any COVID app data is not accessed, used or disclosed.
    • Are taking steps to ensure any COVID app data is deleted as soon as practicable.
    • Have not decrypted any COVID app data.
    • Are applying the usual security measures in place in intelligence agencies such that a ‘spill’ of any data, including COVID app data, is unlikely.
  • New Zealand’s Government Communications Security Bureau’s National Cyber Security Centre (NCSC) has released its annual Cyber Threat Report that found that “nationally significant organisations continue to be frequently targeted by malicious cyber actors of all types…[and] state-sponsored and non-state actors targeted public and private sector organisations to steal information, generate revenue, or disrupt networks and services.” The NCSC added:
    • Malicious cyber actors have shown their willingness to target New Zealand organisations in all sectors using a range of increasingly advanced tools and techniques. Newly disclosed vulnerabilities in products and services, alongside the adoption of new services and working arrangements, are rapidly exploited by state-sponsored actors and cyber criminals alike. A common theme this year, which emerged prior to the COVID-19 pandemic, was the exploitation of known vulnerabilities in internet-facing applications, including corporate security products, remote desktop services and virtual private network applications.
  • The former Director of the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) wrote an opinion piece disputing President Donald Trump’s claims that the 2020 Presidential Election was fraudulent. Christopher Krebs asserted:
    • While I no longer regularly speak to election officials, my understanding is that in the 2020 results no significant discrepancies attributed to manipulation have been discovered in the post-election canvassing, audit and recount processes.
    • This point cannot be emphasized enough: The secretaries of state in Georgia, Michigan, Arizona, Nevada and Pennsylvania, as well as officials in Wisconsin, all worked overtime to ensure there was a paper trail that could be audited or recounted by hand, independent of any allegedly hacked software or hardware.
    • That’s why Americans’ confidence in the security of the 2020 election is entirely justified. Paper ballots and post-election checks ensured the accuracy of the count. Consider Georgia: The state conducted a full hand recount of the presidential election, a first of its kind, and the outcome of the manual count was consistent with the computer-based count. Clearly, the Georgia count was not manipulated, resoundingly debunking claims by the president and his allies about the involvement of CIA supercomputers, malicious software programs or corporate rigging aided by long-gone foreign dictators.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Daniel Schludi on Unsplash

Further Reading, Other Developments, and Coming Events (4 December)

Further Reading

  • “How Misinformation ‘Superspreaders’ Seed False Election Theories” By Sheera Frenkel — The New York Times. A significant percentage of lies, misinformation, and disinformation about the legitimacy of the election has been disseminated by a small number of right-wing figures, whose posts are then repeated, reposted, and retweeted. The Times draws on research into how much engagement people like President Donald Trump and Dan Bongino get on Facebook after posting untrue claims about the election, which shows that such trends and rumors do not start spontaneously.
  • “Facebook Said It Would Ban Holocaust Deniers. Instead, Its Algorithm Provided a Network for Them” By Aaron Sankin — The Markup. This news organization still found Holocaust denial material promoted by Facebook’s algorithm even though the platform recently said it was taking down such material. This result may point to the difficulty of policing objectionable material that uses coded language and/or the social media platform’s lack of sufficient resources to weed out this sort of content.
  • “What Facebook Fed the Baby Boomers” By Charlie Warzel — The New York Times. A dispiriting trip inside two people’s Facebook feeds. This article makes the very good point that comments are not moderated, and these tend to be significant sources of vitriol and disinformation.
  • “How to ‘disappear’ on Happiness Avenue in Beijing” By Vincent Ni and Yitsing Wang — BBC. By next year, the People’s Republic of China (PRC) may have as many as 560 million security cameras, and one artist ran an experiment of sorts to see whether a group of people could walk down a major street in the capital without their faces being captured by any of its many cameras.
  • “Patients of a Vermont Hospital Are Left ‘in the Dark’ After a Cyberattack” By Ellen Barry and Nicole Perlroth — The New York Times. A Russian hacking outfit may have struck back after the Department of Defense’s (DOD) Cyber Command and Microsoft disrupted its operations. A number of hospitals were hacked, and care was significantly disrupted. This dynamic may lend itself to arguments that the United States (U.S.) would be wise to curtail its offensive operations.
  • “EU seeks anti-China alliance on tech with Biden” By Jakob Hanke Vela and David M. Herszenhorn — Politico. The European Union (EU) is hoping the United States (U.S.) will be more amenable to working together on future technology policy, especially against the People’s Republic of China (PRC), which has made a concerted effort to drive the adoption of standards that favor its companies (e.g., the PRC pushed for and obtained 5G standards that will favor Huawei). Diplomatically speaking, this is considered low-hanging fruit, and a Biden Administration will undoubtedly be more multilateral than the Trump Administration.
  • “Can We Make Our Robots Less Biased Than We Are?” By David Berreby — The New York Times. The bias present in facial recognition technology and artificial intelligence is making its way into robotics, raising the question of how to change this. Many African American and other minority scientists are calling for the inclusion of people of color in designing such systems as a countermeasure to the usual bias toward white men.

Other Developments

  • Senator Gary Peters (D-MI), the top Democrat on the Senate Homeland Security and Governmental Affairs Committee, wrote President Donald Trump and “slammed the Trump Administration for their lack of action against foreign adversaries, including Russia, China, and North Korea, that have sponsored cyber-attacks against American hospitals and research institutions in an effort to steal information related to development of Coronavirus vaccines.” Peters used language that was unusually strong, as Members of Congress typically tone down the rhetoric and deploy coded language to signal their level of displeasure about administration action or inaction. Peters could well feel strongly about what he perceives to be Trump Administration indifference to the cyber threats facing institutions researching and developing COVID-19 vaccines, but this is also an issue on which he may be trying to split Republicans, placing them in the difficult position of lining up behind a president disinclined to prioritize some cyber issues or breaking ranks with him.
    • Peters stated:
      • I urge you, again, to send a strong message to any foreign government attempting to hack into our medical institutions that this behavior is unacceptable. The Administration should use the tools at its disposal, including the threat of sanctions, to deter future attacks against research institutions. In the event that any foreign government directly threatens the lives of Americans through attacks on medical facilities, other Department of Defense capabilities should be considered to make it clear that there will be consequences for these actions.
  • A United States federal court has ruled against Trump Administration appointee Michael Pack and the United States Agency for Global Media (USAGM) over their attempts to interfere illegally with the independence of government-funded news organizations such as the Voice of America (VOA). The District Court for the District of Columbia enjoined Pack and the USAGM from taking a range of actions that VOA and USAGM officials claim are contrary to the First Amendment and the organization’s mission.
  • The Federal Trade Commission (FTC) is asking a United States federal court to compel former Trump White House advisor Steve Bannon to appear for questioning per a Civil Investigative Demand (CID) as part of its ongoing probe of Cambridge Analytica’s role in misusing the personal data of Facebook users in the 2016 Presidential Election. The FTC noted it “issued the CID to determine, among other things, whether Bannon may be held individually liable for the deceptive conduct of Cambridge Analytica, LLC—the subject of an administrative law enforcement action brought by the Commission.” An interview had been scheduled for September, but the day before it was to take place, Bannon’s lawyers informed the FTC he would not be attending.
    • In 2019, the FTC settled with former Cambridge Analytica CEO Alexander Nix and app developer Aleksandr Kogan in “administrative orders restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC did not, however, settle with the company itself. The agency alleged “that Cambridge Analytica, Nix, and Kogan deceived consumers by falsely claiming they did not collect any personally identifiable information from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.” Facebook settled with the FTC for a record $5 billion for its role in the Cambridge Analytica scandal and for how it violated its 2012 consent order with the agency.
  • Apple responded to a group of human rights and civil liberties organizations about its plans to deploy technology in its operating system that gives users greater control of their privacy. Apple confirmed that its App Tracking Transparency (ATT) would be made part of iOS early next year and would present users of Apple products with a prompt warning them about how their information may be used by the app developer. ATT would stop app developers from tracking users when they use other apps on a device. Companies like Facebook have objected, claiming that the change is a direct shot at them and their revenue. Apple does not reap a significant revenue stream from collecting, combining, and processing user data, whereas Facebook does. Facebook also tracks users across devices and across apps on a device through a variety of means.
    • Apple stated:
      • We delayed the release of ATT to early next year to give developers the time they indicated they needed to properly update their systems and data practices, but we remain fully committed to ATT and to our expansive approach to privacy protections. We developed ATT for a single reason: because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.
      • ATT doesn’t ban the reasonable collection of user data for app functionality or even for advertising. Just as with the other data-access permissions we have added over many software releases, developers will be able to explain why they want to track users both before the ATT prompt is shown and in the prompt itself. At that point, users will have the freedom to make their own choice about whether to proceed. This privacy innovation empowers consumers — not Apple — by simply making it clear what their options are, and giving them the information and power to choose.
    • As mentioned, a number of groups wrote Apple in October “to express our disappointment that Apple is delaying the full implementation of iOS 14’s anti-tracking features until early 2021.” They argued:
      • These features will constitute a vital policy improvement with the potential to strengthen respect for privacy across the industry. Apple should implement these features as expeditiously as possible.
      • We were heartened by Apple’s announcement that starting with the iOS 14 update, all app developers will be required to provide information that will help users understand the privacy implications of an app before they install it, within the App Store interface.
      • We were also pleased that iOS 14 users would be required to affirmatively opt in to app tracking, on an app-by-app basis. Along with these changes, we urge Apple to verify the accuracy of app policies, and to publish transparency reports showing the number of apps that are rejected and/or removed from the App Store due to inadequate or inaccurate policies.
  • The United States (U.S.) Government Accountability Office (GAO) sent its assessment of the privacy notices and practices of U.S. banks and credit unions to the chair of the Senate committee that oversees the issue. Senate Banking, Housing, and Urban Affairs Committee Chair Mike Crapo (R-ID) had asked the GAO “to examine the types of personal information that financial institutions collect, use, and share; how they make consumers aware of their information-sharing practices; and federal regulatory oversight of these activities.” The GAO found that a ten-year-old model privacy disclosure form used across these industries may comply with the prevailing federal requirements but no longer captures the breadth and scope of how people’s personal information is collected, processed, and used. The GAO called on the Consumer Financial Protection Bureau (CFPB) to update the form. The GAO explained:
    • Banks and credit unions collect, use, and share consumers’ personal information—such as income level and credit card transactions—to conduct everyday business and market products and services. They share this information with a variety of third parties, such as service providers and retailers.
    • The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to provide consumers with a privacy notice describing their information-sharing practices. Many banks and credit unions elect to use a model form—issued by regulators in 2009—which provides a safe harbor for complying with the law (see figure). GAO found the form gives a limited view of what information is collected and with whom it is shared. Consumer and privacy groups GAO interviewed cited similar limitations. The model form was issued over 10 years ago. The proliferation of data-sharing since then suggests a reassessment of the form is warranted. Federal guidance states that notices about information collection and usage are central to providing privacy protections and transparency.
    • Since Congress transferred authority to the CFPB for implementing GLBA privacy provisions, the agency has not reassessed if the form meets consumer expectations for disclosures of information-sharing. CFPB officials said they had not considered a reevaluation because they had not heard concerns from industry or consumer groups about privacy notices. Improvements to the model form could help ensure that consumers are better informed about all the ways banks and credit unions collect and share personal information.
    • The increasing amounts of and changing ways in which industry collects and shares consumer personal information—including from online activities—highlights the importance of clearly disclosing practices for collection, sharing, and use. However, our work shows that banks and credit unions generally used the model form, which was created more than 10 years ago, to make disclosures required under GLBA. As a result, the disclosures often provided a limited view of how banks and credit unions collect, use, and share personal information.
    • We recognize that the model form is required to be succinct, comprehensible to consumers, and allow for comparability across institutions. But, as information practices continue to change or expand, consumer insights into those practices may become even more limited. Improvements and updates to the model privacy form could help ensure that consumers are better informed about all the ways that banks and credit unions collect, use, and share personal information. For instance, in online versions of privacy notices, there may be opportunities for readers to access additional details—such as through hyperlinks—in a manner consistent with statutory requirements.
  • The Australian Competition & Consumer Commission (ACCC) is asking for feedback on Google’s proposed $2.1 billion acquisition of Fitbit. In a rather pointed statement, the chair of the ACCC, Rod Sims, made clear “[o]ur decision to begin consultation should not be interpreted as a signal that the ACCC will ultimately accept the undertaking and approve the transaction.” The buyout is also under scrutiny in the European Union (EU) and may be affected by the suit the United States Department of Justice (DOJ) and some states have brought against the company for anti-competitive behavior. The ACCC released a Statement of Issues in June about the proposed deal.
    • The ACCC explained “[t]he proposed undertaking would require Google to:
      • not use certain user data collected through Fitbit and Google wearables for Google’s advertising purposes for 10 years, with an option for the ACCC to extend this obligation by up to a further 10 years;
      • maintain access for third parties, such as health and fitness apps, to certain user data collected through Fitbit and Google wearable devices for 10 years; and
      • maintain levels of interoperability between third party wearables and Android smartphones for 10 years.
    • In August, the EU “opened an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The European Commission (EC) expressed its concerns “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC stated “[a]t this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The EC explained that it “will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed…[and] will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • Amnesty International (AI) sent EC Executive Vice-President Margrethe Vestager a letter, arguing “[t]he merger risks further extending the dominance of Google and its surveillance-based business model, the nature and scale of which already represent a systemic threat to human rights.” AI asserted “[t]he deal is particularly troubling given the sensitive nature of the health data that Fitbit holds that would be acquired by Google.” AI argued “[t]he Commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”
  • Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro have cooperated on a report that looks “into current and predicted criminal uses of artificial intelligence (AI).”
    • The organizations argued “AI could be used to support:
      • convincing social engineering attacks at scale;
      • document-scraping malware to make attacks more efficient;
      • evasion of image recognition and voice biometrics;
      • ransomware attacks, through intelligent targeting and evasion;
      • data pollution, by identifying blind spots in detection rules.
    • The organizations concluded:
      • Based on available insights, research, and a structured open-source analysis, this report covered the present state of malicious uses and abuses of AI, including AI malware, AI-supported password guessing, and AI-aided encryption and social engineering attacks. It also described concrete future scenarios ranging from automated content generation and parsing, AI-aided reconnaissance, smart and connected technologies such as drones and autonomous cars, to AI-enabled stock market manipulation, as well as methods for AI-based detection and defense systems.
      • Using one of the most visible malicious uses of AI — the phenomenon of so-called deepfakes — the report further detailed a case study on the use of AI techniques to manipulate or generate visual and audio content that would be difficult for humans or even technological solutions to immediately distinguish from authentic ones.
      • As speculated on in this paper, criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new, innovative criminal business models — all the while reducing their chances of being caught. Consequently, as “AI-as-a-Service” becomes more widespread, it will also lower the barrier to entry by reducing the skills and technical expertise required to facilitate attacks. In short, this further exacerbates the potential for AI to be abused by criminals and for it to become a driver of future crimes.
      • Although the attacks detailed here are mostly theoretical, crafted as proofs of concept at this stage, and although the use of AI to improve the effectiveness of malware is still in its infancy, it is plausible that malware developers are already using AI in more obfuscated ways without being detected by researchers and analysts. For instance, malware developers could already be relying on AI-based methods to bypass spam filters, escape the detection features of antivirus software, and frustrate the analysis of malware. In fact, DeepLocker, a tool recently introduced by IBM and discussed in this paper, already demonstrates these attack abilities that would be difficult for a defender to stop.
      • To add, AI could also enhance traditional hacking techniques by introducing new ways of performing attacks that would be difficult for humans to predict. These could include fully automated penetration testing, improved password-guessing methods, tools to break CAPTCHA security systems, or improved social engineering attacks. With respect to open-source tools providing such functionalities, the paper discussed some that have already been introduced, such as DeepHack, DeepExploit, and XEvil.
      • The widespread use of AI assistants, meanwhile, also creates opportunities for criminals who could exploit the presence of these assistants in households. For instance, criminals could break into a smart home by hijacking an automation system through exposed audio devices.



American and Canadian Agencies Take Differing Approaches On Regulating AI

The outgoing Trump Administration tells agencies to lightly regulate AI; Canada’s privacy regulator calls for strong safeguards and limits on use of AI, including legislative changes.

The Office of Management and Budget (OMB) has issued guidance for federal agencies on how they are to regulate artificial intelligence (AI) developed and deployed outside the federal government. This guidance seeks to align policy across agencies in how they use their existing power to regulate AI according to the Trump Administration’s policy goals. Notably, this memorandum is binding on all federal agencies (including national defense) and even independent agencies such as the Federal Trade Commission (FTC) and Federal Communications Commission (FCC). OMB worked with other stakeholder agencies on this guidance per Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence,” and issued a draft of the memorandum 11 months ago for comment.

In “Guidance for Regulation of Artificial Intelligence Applications,” OMB “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the Federal government.” OMB is directing agencies to take a light touch in regulating AI under their current statutory authorities, being careful to consider costs and benefits and keeping in mind the larger policy backdrop of ensuring United States (U.S.) dominance in AI in light of competition from the People’s Republic of China (PRC), the European Union, Japan, the United Kingdom, and others. OMB is requiring reports from agencies on how they will and will not use their authority to meet the articulated goals and requirements of this memorandum. However, given that the due date for these reports falls well into the next Administration, it is very likely the Biden OMB will at least pause this initiative and probably alter it to fit new policy. It is possible that policy goals to protect privacy, combat algorithmic bias, and protect data will be made more prominent in U.S. AI regulation.

As a threshold matter, it bears note that this memorandum uses a statutory definition of AI that is narrower than how AI is popularly discussed. OMB explained that “[w]hile this Memorandum uses the definition of AI recently codified in statute, it focuses on “narrow” (also known as “weak”) AI, which goes beyond advanced conventional computing to learn and perform domain-specific or specialized tasks by extracting information from data sets, or other structured or unstructured sources of information.” Consequently, “[m]ore theoretical applications of “strong” or “general” AI—AI that may exhibit sentience or consciousness, can be applied to a wide variety of cross-domain activities and perform at the level of, or better than a human agent, or has the capacity to self-improve its general cognitive abilities similar to or beyond human capabilities—are beyond the scope of this Memorandum.”

The Trump OMB tells agencies to minimize regulation of AI and to take into account how any regulatory action may affect growth and innovation in the field before implementing it. OMB directs agencies to favor “narrowly tailored and evidence-based regulations that address specific and identifiable risks” that foster an environment where U.S. AI can flourish. Consequently, OMB bars “a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.” Of course, what constitutes “evidence-based regulation” and an “impossibly high standard” are in the eye of the beholder, so this memorandum could be read by the next OMB in ways the outgoing OMB would not agree with. Finally, OMB is pushing agencies to factor potential benefits into any risk calculation, presumably allowing for greater risk of bad outcomes if the potential reward seems high. This suggests a more hands-off approach to regulating AI.

OMB listed the 10 AI principles agencies must weigh in regulating AI in the private sector:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination

OMB also tells agencies to look at existing federal or state regulation that may prove inconsistent with or duplicative of this federal policy and “may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”

OMB encouraged agencies to use “non-regulatory approaches” in the event existing regulations are sufficient or the benefits of regulation do not justify the costs. OMB counseled “[i]n these cases, the agency may consider either not taking any action or, instead, identifying non-regulatory approaches that may be appropriate to address the risk posed by certain AI applications” and provided examples of “non-regulatory approaches:”

  • Sector-Specific Policy Guidance or Frameworks
  • Pilot Programs and Experiments
  • Voluntary Consensus Standards
  • Voluntary Frameworks

As noted, the EO under which OMB is acting requires “that implementing agencies with regulatory authorities review their authorities relevant to AI applications and submit plans to OMB on achieving consistency with this Memorandum.” OMB directs:

The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities. For these collections, agencies should describe any statutory restrictions on the collection or sharing of information (e.g., confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information). The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency’s regulatory authorities. OMB also requests agencies to list and describe any planned or considered regulatory actions on AI. Appendix B provides a template for agency plans.

Earlier this year, the White House’s Office of Science and Technology Policy (OSTP) released a draft of this OMB memorandum, “Guidance for Regulation of Artificial Intelligence Applications,” to be issued to federal agencies as directed by Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence.” However, this memorandum is not aimed at how federal agencies use and deploy artificial intelligence (AI); rather, it “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory oversight of AI applications developed and deployed outside of the Federal government.” In short, if this draft is issued by OMB as written, federal agencies would need to adhere to the ten principles laid out in the document in regulating AI as part of their existing and future jurisdiction over the private sector. Not surprisingly, the Administration favors a light-touch approach that should foster the growth of AI.

EO 13859 sets the AI policy of the government “to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.” The EO directed OMB and OSTP, along with other Administration offices, to craft this draft memorandum for comment. OMB was to “issue a memorandum to the heads of all agencies that shall:

(i) inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy, and American values; and
(ii) consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.

A key regulator in a neighboring country also weighed in on the proper regulation of AI from the vantage of privacy. The Office of the Privacy Commissioner of Canada (OPC) “released key recommendations…[that] are the result of a public consultation launched earlier this year.” OPC explained that it “launched a public consultation on our proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA).” OPC’s “working assumption was that legislative changes to PIPEDA are required to help reap the benefits of AI while upholding individuals’ fundamental right to privacy.” It is to be expected that a privacy regulator will see matters differently than a Republican White House, and so it is here.

In an introductory paragraph, the OPC spelled out the problems and dangers created by AI:

uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals, including whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are suspected of suspicious or unlawful behaviour. Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination. AI systems can also be used to influence, micro-target, and “nudge” individuals’ behaviour without their knowledge. Such practices can lead to troubling effects for society as a whole, particularly when used to influence democratic processes.

The OPC is focused on the potential for AI to be used in a more effective fashion than current data processing to predict, uncover, subvert, and influence the behavior of people in ways not readily apparent. There is also concern for another aspect of AI and other data processing that has long troubled privacy and human rights advocates: the potential for discriminatory treatment.

OPC asserted “an appropriate law for AI would:

  • Allow personal information to be used for new purposes towards responsible AI innovation and for societal benefits;
  • Authorize these uses within a rights based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • Create provisions specific to automated decision-making to ensure transparency, accuracy and fairness; and
  • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures through which the regulator would ensure compliance with the law.

However, the OPC does not entirely oppose the use of AI and is proposing exceptions to the general requirement under Canadian federal law that meaningful consent is required before data processing. The OPC is “recommending a series of new exceptions to consent that would allow the benefits of AI to be better achieved, but within a rights based framework.” OPC stated “[t]he intent is to allow for responsible, socially beneficial innovation, while ensuring individual rights are respected…[and] [w]e recommend exceptions to consent for the use of personal information for research and statistical purposes, compatible purposes, and legitimate commercial interests purposes.” However, the OPC is proposing a number of safeguards:

The proposed exceptions to consent must be accompanied by a number of safeguards to ensure their appropriate use. This includes a requirement to complete a privacy impact assessment (PIA), and a balancing test to ensure the protection of fundamental rights. The use of de-identified information would be required in all cases for the research and statistical purposes exception, and to the extent possible for the legitimate commercial interests exception.

Further, the OPC made the case that enshrining strong privacy rights in Canadian law would not obstruct the development of AI but would, in fact, speed its development:

  • A rights-based regime would not stand in the way of responsible innovation. In fact, it would help support responsible innovation and foster trust in the marketplace, giving individuals the confidence to fully participate in the digital age. In our 2018-2019 Annual Report to Parliament, our Office outlined a blueprint for what a rights-based approach to protecting privacy should entail. This rights-based approach runs through all of the recommendations in this paper.
  • While we propose that the law should allow for uses of AI for a number of new purposes as outlined, we have seen examples of unfair, discriminatory, and biased practices being facilitated by AI which are far removed from what is socially beneficial. Given the risks associated with AI, a rights based framework would help to ensure that it is used in a manner that upholds rights. Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.
  • Another important measure related to this human rights-based approach would be for the definition of personal information in PIPEDA to be amended to clarify that it includes inferences drawn about an individual. This is important, particularly in the age of AI, where individuals’ personal information can be used by organizations to create profiles and make predictions intended to influence their behaviour. Capturing inferred information clearly within the law is key for protecting human rights because inferences can often be drawn about an individual without their knowledge, and can be used to make decisions about them.

The OPC also called for a framework under which people could review and contest automated decisions:

we recommend that individuals be provided with two explicit rights in relation to automated decision-making. Specifically, they should have a right to a meaningful explanation of, and a right to contest, automated decision-making under PIPEDA. These rights would be exercised by individuals upon request to an organization. Organizations should be required to inform individuals of these rights through enhanced transparency practices to ensure individual awareness of the specific use of automated decision-making, as well as of their associated rights. This could include requiring notice to be provided separate from other legal terms.

The OPC also counseled that PIPEDA’s enforcement mechanism and incentives be changed:

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. In addition to the measures detailed below, this should be underpinned by a record keeping requirement similar to that in Article 30 of the GDPR. This record keeping requirement would be necessary to facilitate the OPC’s ability to conduct proactive inspections under PIPEDA, and for individuals to exercise their rights under the Act.

The OPC called for the following to ensure “demonstrable accountability:”

  • Integrating privacy and human rights into the design of AI algorithms and models is a powerful way to prevent negative downstream impacts on individuals. It is also consistent with modern legislation, such as the GDPR and Bill 64. PIPEDA should require organizations to design for privacy and human rights by requiring organizations to implement “appropriate technical and organizational measures” that implement PIPEDA requirements prior to and during all phases of collection and processing.
  • In light of the new proposed rights to explanation and contestation, organizations should be required to log and trace the collection and use of personal information in order to adequately fulfill these rights for the complex processing involved in AI. Tracing supports demonstrable accountability as it provides documentation that the regulator could consult through the course of an inspection or investigation, to determine the personal information fed into the AI system, as well as broader compliance.
  • Demonstrable accountability must include a model of assured accountability pursuant to which the regulator has the ability to proactively inspect an organization’s privacy compliance. In today’s world where business models are often opaque and information flows are increasingly complex, individuals are unlikely to file a complaint when they are unaware of a practice that might cause them harm. This challenge will only become more pronounced as information flows gain complexity with the continued development of AI.
  • The significant risks posed to privacy and human rights by AI systems require a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant. To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Tetyana Kovyrina from Pexels

Further Reading, Other Developments, and Coming Events (5 November)

Further Reading

  • “Confusion and conflict stir online as Trump claims victory, questions states’ efforts to count ballots” By Craig Timberg, Tony Romm, Isaac Stanley-Becker and Drew Harwell — Washington Post. When the post-mortem on the 2020 Election is written, it is likely to be the case that foreign disinformation was not the primary threat. Rather, it may be domestic interference, given the misinformation, disinformation, and lies circulating online despite the best efforts of social media platforms to label, take down, and block such material. However, if this article is accurate, much of it is coming from the right wing, including the President.
  • “Polls close on Election Day with no apparent cyber interference” By Kevin Collier and Ken Dilanian — NBC News. Despite crowing from officials like the United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs and U.S. Cyber Command head General Paul Nakasone, it is not altogether clear that U.S. efforts, especially publicized offensive operations, are the reason there were no significant cyber attacks on Election Day. However, officials are cautioning the country is not out of the woods, as vote counting is ongoing and opportunities for interference and mischief remain.
  • “Russian hackers targeted California, Indiana Democratic parties” By Raphael Satter, Christopher Bing, and Joel Schectman — Reuters. Apparently, Microsoft helped foil Russian efforts to hack two state Democratic parties and think tanks, some of which are allied with the Democratic party. However, it appears none of the attempts, which occurred earlier this year, were successful. The article suggests but does not claim that increased cyber awareness and defenses foiled most of the attempts by the hacking group Fancy Bear.
  • “LexisNexis to Pay $5 Million Class Action Settlement for Selling DMV Data” By Joseph Cox — Vice. Data broker LexisNexis is settling a suit alleging it violated the Drivers’ Privacy Protection Act (DPPA) by obtaining Department of Motor Vehicles (DMV) records on people for a purpose not authorized under the law. Vice has written a number of articles on the practice of DMVs selling people’s data, which has caught the attention of at least two Democratic Members of Congress who have said they will introduce legislation to tighten the circumstances under which these data may be shared or sold.
  • “Spy agency ducks questions about ‘back doors’ in tech products” By Joseph Menn — Reuters. Senator Ron Wyden (D-OR) is demanding that the National Security Agency (NSA) reveal the guidelines put in place after former NSA contractor Edward Snowden revealed the agency’s practice of getting backdoors in United States (U.S.) technology it could use in the future. This practice allowed the NSA to sidestep warrant requirements, but it also may have weakened technology that was later exploited by other governments, as the People’s Republic of China (PRC) allegedly did to Juniper in 2015. After Snowden divulged the NSA’s practice, reforms were supposedly put in place but never shared with Congress.

Other Developments

  • Australia’s Joint Committee on Intelligence and Security issued a new report into Australia’s mandatory data retention regime that makes 22 recommendations to “increase transparency around the use of the mandatory data retention and increase the threshold for when data can be accessed…[and] reduce the currently very broad access to telecommunications data under the Telecommunications Act.” The committee stated “[t]he report’s 22 recommendations include:
    • access to data kept under the mandatory data retention regime will only be available under specific circumstances
    • the Department of Home Affairs develop guidelines for data collection including an ability for enforcement agencies and Home Affairs to produce reports to oversight agencies or Parliament when requested
    • the repeal of section 280(1)(b) of the Telecommunications Act which allows for access where ‘disclosure or use is required or authorised by or under law.’ It is the broad language in this subsection that has allowed the access that concerned the committee
    • The committee explained:
      • The Parliamentary Joint Committee on Intelligence and Security (the Committee) is required by Part 5-1A of the Telecommunications (Interception and Access) Act 1979 (TIA Act) to undertake a review of the mandatory data retention regime (MDRR).
      • The mandatory data retention regime is a legislative framework which requires carriers, carriage service providers and internet service providers to retain a defined set of telecommunications data for two years, ensuring that such data remains available for law enforcement and national security investigations.
  • Senators Ron Wyden (D-OR) and Sherrod Brown (D-OH) wrote a letter “to trade associations urging them to take immediate action to ensure their members are not complicit in China’s state-directed human rights abuses, including by relocating production from the Xinjiang Uyghur Autonomous Region.” They stated:
    • We write to express our concerns over reports that the industries and companies that the U.S. Chamber of Commerce represents have supply chains that have been implicated in the state-sanctioned forced labor of Uyghurs and other Muslim groups in the Xinjiang Uyghur Autonomous Region of China (XUAR) and in sites where Uyghurs have been relocated.  The decision to operate or contract with production facilities overseas must be accompanied by high standards of supply chain accountability and transparency to ensure that no company’s products are made with forced labor.  We urge your members to take immediate action to ensure goods manufactured for them are not complicit in the China’s state-directed human rights abuses, including by relocating production from the XUAR.  In addition, we ask your members to take critical, comprehensive steps to achieve the supply chain integrity and transparency American consumers and workers deserve.  It is past time for American multinational companies to be part of the solution, not part of the problem, on efforts to eradicate forced labor and end human rights abuses against workers in China. 
  • The Federal Trade Commission (FTC) finalized a settlement alleging violations of the now struck down European Union-United States Privacy Shield. In its press release, the agency explained it had “alleged that NTT Global Data Centers Americas, Inc. (NTT), formerly known as RagingWire Data Centers, Inc., claimed in its online privacy policy and marketing materials that the company participated in the Privacy Shield framework and complied with the program’s requirements.” The FTC noted “the company’s certification lapsed in January 2018 and it failed to comply with certain Privacy Shield requirements while it was a participant in the framework.” The FTC stated:
    • Under the settlement, the company, among other things, is prohibited not just from misrepresenting its compliance with or participation in the Privacy Shield framework, but also any other privacy or data security program sponsored by the government or any self-regulatory or standard-setting organization. The company also must continue to apply the Privacy Shield requirements or equivalent protections to personal information it collected while participating in the framework or return or delete the information.
    • Although the European Court of Justice invalidated the Privacy Shield framework in July 2020, that decision does not affect the validity of the FTC’s decision and order relating to NTT’s misrepresentations about its participation in and compliance with the framework. The framework allowed participants to transfer data legally from the European Union to the United States.
  • The Commission nationale de l’informatique et des libertés (CNIL) issued a press release, explaining that France’s “Council of State acknowledges the existence of a risk of data transfer from the Health Data Hub to the United States and requests additional safeguards.” CNIL stated it “will advise the public authorities on appropriate measures and will ensure, for research authorization related to the health crisis, that there is a real need to use the platform.” This announcement follows from the Court of Justice of the European Union (CJEU) striking down the adequacy decision underpinning the European Union-United States Privacy Shield (aka Schrems II). CNIL summarized the “essentials:”
    • Fearing that some data might be transferred to the United States, some claimants lodged an appeal with the Council of State requesting the suspension of the “Health Data Hub”, the new platform designed to ultimately host all the health data of people who receive medical care in France.
    • The Court considers that a risk cannot be excluded with regard to the transfer of health data hosted on the Health Data Hub platform to the US intelligence.
    • Because of the usefulness of the Health Data Hub in managing the health crisis, it refuses to suspend the operation of the platform.
    • However, it requires the Health Data Hub to strengthen its contract with Microsoft on a number of points and to seek additional safeguards to better protect the data it hosts.
    • It is the responsibility of the CNIL to ensure, for authorization of research projects on the Health Data Hub in the context of the health crisis, that the use of the platform is technically necessary, and to advise public authorities on the appropriate safeguards.
    • These measures will have to be taken while awaiting a lasting solution that will eliminate any risk of access to personal data by the American authorities, as announced by the French Secretary of State for the Digital Agenda.
  • The United Kingdom’s (UK) National Cyber Security Centre (NCSC) has published its annual review that “looks back at some of the key developments and highlights from the NCSC’s work between 1 September 2019 and 31 August 2020.” In the foreword, new NCSC Chief Executive Officer Lindy Cameron provided an overview:
    • Expertise from across the NCSC has been surged to assist the UK’s response to the pandemic. More than 200 of the 723 incidents the NCSC handled this year related to coronavirus and we have deployed experts to support the health sector, including NHS Trusts, through cyber incidents they have faced. We scanned more than one million NHS IP addresses for vulnerabilities and our cyber expertise underpinned the creation of the UK’s coronavirus tracing app.
    • An innovative approach to removing online threats was created through the ‘Suspicious Email Reporting Service’ – leading to more than 2.3 million reports of malicious emails being flagged by the British public. Many of the 22,000 malicious URLs taken down as a result related to coronavirus scams, such as pretending to sell PPE equipment to hide a cyber attack. The NCSC has often been described as world-leading, and that has been evident over the last 12 months. Our innovative ‘Exercise in a Box’ tool, which supports businesses and individuals to test their cyber defences against realistic scenarios, was used in 125 countries in the last year.
    • Recognising the change in working cultures due to the pandemic, our team even devised a specific exercise on remote working, which has helped organisations to understand where current working practices may be presenting alternative cyber risks. Proving that cyber really is a team sport, none of this would be possible without strong partnerships internationally and domestically. We worked closely with law enforcement – particularly the National Crime Agency – and across government, industry, academia and, of course, the UK public.
    • The NCSC is also looking firmly ahead to the future of cyber security, as our teams work to understand both the risks and opportunities to the UK presented by emerging technologies. A prominent area of work this year was the NCSC’s reviews of high-risk vendors such as Huawei – and in particular the swift and thorough review of US sanctions against Huawei. The NCSC gave advice on the impact these changes would have in the UK, publishing a summary of the advice given to government as well as timely guidance for operators and the public.
  • Australia’s Department of Industry, Science, Energy and Resources has put out for comment a discussion paper titled “An AI Action Plan for all Australians” to “shape Australia’s vision for artificial intelligence (AI).” The department said it “is now consulting on the development of a whole-of-government AI Action Plan…[that] will help us maximise the benefits of AI for all Australians and manage the potential challenges.” The agency said “[t]he plan will help to:
    • ensure the development and use of AI in Australia is responsible
    • coordinate government policy and national capability under a clear, common vision for AI in Australia
    • explore the actions needed for our AI future
    • The department explained:
      • Building on Australia’s AI Ethics Framework, the Australian Government is developing an AI Action Plan. It is a key component of the government’s vision to be a leading digital economy by 2030. It builds on almost $800 million invested in the 2020-21 Budget to enable businesses to take advantage of digital technologies to grow their businesses and create jobs. It is an opportunity to leverage AI as part of the Australian Government’s economic recovery plan. We must work together to ensure all Australians can benefit from advances in AI.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.
