FY 2021 Omnibus and COVID Stimulus Become Law

The end-of-the-year funding package for FY 2021 is stuffed with technology policy changes.

At the tail end of the calendar year 2020, Congress and the White House finally agreed on FY 2021 appropriations and further COVID-19 relief funding and policies, much of which implicated or involved technology policy. As is often the practice, Congressional stakeholders used the opportunity of must-pass legislation as the vehicle for other legislation that perhaps could not get through a chamber of Congress or surmount the now customary filibuster in the Senate.

Congress cleared the “Consolidated Appropriations Act, 2021” (H.R.133) on 21 December 2020, but President Donald Trump equivocated on whether to sign the package, in part because it did not provide for $2,000 in aid to every American, a new demand at odds with the deal his negotiators worked out with House Democrats and Senate Republicans. Given this disparity, it seems more likely Trump made an issue of the $2,000 assistance to draw attention away from a spate of controversial pardons issued to Trump allies and friends. Nonetheless, Trump ultimately signed the package on 27 December.

As one of the few pieces of legislation Congress passes every year, appropriations acts are often the means by which policy and programmatic changes are made at federal agencies through the legislative branch’s ability to condition the use of the funds it provides. This year’s package is different only in that it contains much more ride-along legislation than the average omnibus. In fact, there are hundreds, perhaps even more than 1,000 pages of non-appropriations legislation, some of which pertains to technology policy. Moreover, an additional supplemental bill attached to the FY 2021 omnibus also carries significant technology funding and programming.

First, we will review FY 2021 funding and policy for key U.S. agencies, then discuss COVID-19 related legislation, and then finally all the additional legislation Congress packed into the omnibus.

The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) would receive $2.025 billion, a bare $9 million increase above FY 2020 with significant reordering of how the agency may spend its funds:

  • The agreement includes a net increase of $224,178,000 above the budget request. This includes $226,256,000 above the request to maintain current services, and $54,516,000 in enhancements that are described in more detail below. Assumed in the current services level of funding are several rejections of proposed reductions to prior year initiatives and the inclusion of necessary annualizations to sustain them, such as: $35,606,000 for threat analysis and response; $5,507,000 for soft targets and crowded places security, including school safety and best practices; $6,852,000 for bombing prevention activities, including the train-the-trainer programs; and $67,371,000 to fully fund the Chemical Facility Anti-Terrorism Standards program. The agreement includes the following reductions below the budget request: $6,937,000 for personnel cost adjustments; $2,500,000 of proposed increases to the CyberSentry program; $11,354,000 of proposed increases for the Vulnerability Management program; $2,000,000 of proposed increases to the Cybersecurity Quality Service Management Office (QSMO); $6,500,000 of proposed increases for cybersecurity advisors; and $27,303,000 for the requested increase for protective security advisors. Of the total amount provided for this account, $22,793,000 is available until September 30, 2022, for the National Infrastructure Simulation and Analysis Center.

The FY 2021 omnibus requires of CISA the following:

  • Financial Transparency and Accountability.-The Cybersecurity and Infrastructure Security Agency (CISA) is directed to submit the fiscal year 2022 budget request at the same level of PPA detail provided in the table at the end of this report with no further adjustments to the PPA structure. Further, CISA shall brief the Committees not later than 45 days after the date of enactment of this Act and quarterly thereafter on: a spend plan; detailed hiring plans with a delineation of each mission critical occupation (MCO); procurement plans for all major investments to include projected spending and program schedules and milestones; and an execution strategy for each major initiative. The hiring plan shall include an update on CISA’s hiring strategy efforts and shall include the following for each MCO: the number of funded positions and FTE within each PPA; the projected and obligated funding; the number of actual onboard personnel as of the date of the plan; and the hiring and attrition projections for the fiscal year.
  • Cyber Defense Education and Training (CDET).-The agreement includes $29,457,000 for CISA’s CDET programs, an increase of $20,607,000 above the request that is described in further detail below. Efforts are underway to address the shortage of qualified national cybersecurity professionals in the current and future cybersecurity workforce. In order to move forward with a comprehensive plan for a cybersecurity workforce development effort, the agreement includes $10,000,000 above the request to enhance cybersecurity education and training and programs to address the national shortfall of cybersecurity professionals, including activities funded through the use of grants or cooperative agreements as needed in order to fully comply with congressional intent. CISA should consider building a higher education consortium of colleges and universities, led by at least one academic institution with an extensive history of education, research, policy, and outreach in computer science and engineering disciplines; existing designations as a land-grant institution with an extension role; a center of academic excellence in cyber security operations; a proven track record in hosting cyber corps programs; a record of distinction in research cybersecurity; and extensive experience in offering distance education programs and outreach with K-12 programs. The agreement also includes $4,300,000 above the request for the Cybersecurity Education and Training Assistance Program (CETAP), which was proposed for elimination, and $2,500,000 above the request to further expand and initiate cybersecurity education programs, including CETAP, which improve education delivery methods for K-12 students, teachers, counselors and post-secondary institutions and encourage students to pursue cybersecurity careers.
  • Further, the agreement includes $2,500,000 above the request to support CISA’s role with the National Institute of Standards and Technology, National Initiative for Cybersecurity Education Challenge project or for similar efforts to address shortages in the cybersecurity workforce through the development of content and curriculum for colleges, universities, and other higher education institutions.
  • Lastly, the agreement includes $800,000 above the request for a review of CISA’s program to build a national cybersecurity workforce. CISA is directed to enter into a contract for this review with the National Academy of Public Administration, or a similar non-profit organization, within 45 days of the date of enactment of this Act. The review shall assess: whether the partnership models under development by CISA are positioned to be effective and scalable to address current and anticipated needs for a highly capable cybersecurity workforce; whether other existing partnership models, including those used by other agencies and private industry, could usefully augment CISA’s strategy; and the extent to which CISA’s strategy has made progress on workforce development objectives, including excellence, scale, and diversity. A report with the findings of the review shall be provided to the Committees not later than 270 days after the date of enactment of this Act.
  • Cyber QSMO.-To help improve efforts to make strategic cybersecurity services available to federal agencies, the agreement provides $1,514,000 above the request to sustain and enhance prior year investments. As directed in the House report and within the funds provided, CISA is directed to work with the Management Directorate to conduct a crowd-sourced security testing program that uses technology platforms and ethical security researchers to test for vulnerabilities on departmental systems. In addition, not later than 90 days after the date of enactment of this Act, CISA is directed to brief the Committees on opportunities for state and local governments to leverage shared services provided through the Cyber QSMO or a similar capability and to explore the feasibility of executing a pilot program focused on this goal.
  • Cyber Threats to Critical Election Infrastructure.-The briefing required in House Report 116–458 regarding CISA’s efforts related to the 2020 elections shall be delivered not later than 60 days after the date of enactment of this Act. CISA is directed to continue working with SLTT stakeholders to implement election security measures.
  • Cybersecurity Workforce.-By not later than September 30, 2021, CISA shall provide a joint briefing, in conjunction with the Department of Commerce and other appropriate federal departments and agencies, on progress made to date on each recommendation put forth in Executive Order 13800 and the subsequent “Supporting the Growth and Sustainment of the Nation’s Cybersecurity Workforce” report.
  • Hunt and Incident Response Teams.-The agreement includes an increase of $3,000,000 above fiscal year 2020 funding levels to expand CISA’s threat hunting capabilities.
  • Joint Cyber Planning Office (JCPO).-The agreement provides an increase of $10,568,000 above the request to establish a JCPO to bring together federal and SLTT governments, industry, and international partners to strategically and operationally counter nation-state cyber threats. CISA is directed to brief the Committees not later than 60 days after the date of enactment of this Act on a plan for establishing the JCPO, including a budget and hiring plan; a description of how JCPO will complement and leverage other CISA capabilities; and a strategy for partnering with the aforementioned stakeholders.
  • Multi-State Information Sharing and Analysis Center (MS-ISAC).-The agreement provides $5,148,000 above the request for the MS-ISAC to continue enhancements to SLTT election security support, and furthers ransomware detection and response capabilities, including endpoint detection and response, threat intelligence platform integration, and malicious domain activity blocking.
  • Software Assurance Tools.-Not later than 90 days after the date of enactment of this Act, CISA, in conjunction with the Science and Technology Directorate, is directed to brief the Committees on their collaborative efforts to transition cyber-related research and development initiatives into operational tools that can be used to provide continuous software assurance. The briefing should include an explanation for any completed projects and activities that were not considered viable for practice or were considered operationally self-sufficient. Such briefing shall include software assurance projects, such as the Software Assurance Marketplace.
  • Updated Lifecycle Cost Estimates.-CISA is directed to provide a briefing, not later than 60 days after the date of enactment of this Act, regarding the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) program lifecycles. The briefing shall clearly describe the projected evolution of both programs by detailing the assumptions that have changed since the last approved program cost and schedule baseline, and by describing the plans to address such changes. In addition, the briefing shall include an analysis of alternatives for aligning vulnerability management, incident response, and NCPS capabilities. Finally, CISA is directed to provide a report not later than 120 days after the date of enactment of this Act with updated five-year program costs and schedules which is congruent with projected capability gaps across federal civilian systems and networks.
  • Vulnerability Management.-The agreement provides $9,452,000 above fiscal year 2020 levels to continue reducing the 12-month backlog in vulnerability assessments. The agreement also provides an increase of $8,000,000 above the request to address the increasing number of identified and reported vulnerabilities in the software and hardware that operates critical infrastructure. This investment will improve capabilities to identify, analyze, and share information about known vulnerabilities and common attack patterns, including through the National Vulnerability Database, and to expand the coordinated responsible disclosure of vulnerabilities.

There are a pair of provisions aimed at the People’s Republic of China (PRC) in Division B (i.e. the FY 2021 Commerce-Justice-Science Appropriations Act):

  • Section 514 prohibits funds for acquisition of certain information systems unless the acquiring department or agency has reviewed and assessed certain risks. Any acquisition of such an information system is contingent upon the development of a risk mitigation strategy and a determination that the acquisition is in the national interest. Each department or agency covered under section 514 shall submit a quarterly report to the Committees on Appropriations describing reviews and assessments of risk made pursuant to this section and any associated findings or determinations.
  • Section 526 prohibits the use of funds by National Aeronautics and Space Administration (NASA), Office of Science and Technology Policy (OSTP), or the National Space Council (NSC) to engage in bilateral activities with China or a Chinese-owned company or effectuate the hosting of official Chinese visitors at certain facilities unless the activities are authorized by subsequent legislation or NASA, OSTP, or NSC have made a certification…

The National Institute of Standards and Technology (NIST) is tasked with a number of duties, most of which relate to current or ongoing efforts in artificial intelligence (AI), cybersecurity, and the Internet of Things:

  • Artificial Intelligence (AI).-The agreement includes no less than $6,500,000 above the fiscal year 2020 level to continue NIST’s research efforts related to AI and adopts House language on Data Characterization Standards in AI. House language on Framework for Managing AI Risks is modified to direct NIST to establish a multi-stakeholder process for the development of an AI Risk Management Framework regarding the reliability, robustness, and trustworthiness of AI systems. Further, within 180 days of enactment of this Act, NIST shall establish the process by which it will engage with stakeholders throughout the multi-year framework development process.
  • Cybersecurity.-The agreement includes no less than the fiscal year 2020 enacted level for cybersecurity research, outreach, industry partnerships, and other activities at NIST, including the National Cybersecurity Center of Excellence (NCCoE) and the National Initiative for Cybersecurity Education (NICE). Within the funds provided, the agreement encourages NIST to establish additional NICE cooperative agreements with regional alliances and multi-stakeholder partnerships for cybersecurity workforce and education.
  • Cybersecurity of Genomic Data.-The agreement includes no less than $1,250,000 for NIST and NCCoE to initiate a use case, in collaboration with industry and academia, to research the cybersecurity of personally identifiable genomic data, with a particular focus on better securing deoxyribonucleic acid sequencing techniques, including clustered regularly interspaced short palindromic repeat (CRISPR) technologies, and genomic data storage architectures from cyber threats. NIST and NCCoE should look to partner with entities who have existing capability to research and develop state-of-the-art cybersecurity technologies for the unique needs of genomic and biomedical-based systems.
  • Industrial Internet of Things (IIoT).-The agreement includes no less than the fiscal year 2020 enacted amount for the continued development of an IIoT cybersecurity research initiative and to partner, as appropriate, with academic entities and industry to improve the sustainable security of IIoT devices in industrial settings.

NIST would receive a modest funding increase, from $1.034 billion in FY 2020 to $1.0345 billion in FY 2021.

The National Telecommunications and Information Administration (NTIA) would be provided $45.5 million and “the agreement provides (1) up to $7,500,000 for broadband mapping in coordination with the Federal Communications Commission (FCC); (2) no less than the fiscal year 2020 enacted amount for Broadband Programs; (3) $308,000 for Public Safety Communications; and (4) no less than $3,000,000 above the fiscal year 2020 enacted level for Advanced Communications Research.” The agency’s FY 2021 funding is higher than last fiscal year’s total of a bit more than $40 million but far less than the Trump Administration’s request of more than $70 million.

Regarding NTIA programmatic language, the bill provides:

  • Further, the agreement directs the additional funds for Advanced Communications Research be used to procure and maintain cutting-edge equipment for research and testing of the next generation of communications technologies, including 5G, as well as to hire staff as needed. The agreement further encourages NTIA to improve the deployment of 5G and spectrum sharing through academic partnerships to accelerate the development of low-cost sensors. For fiscal year 2021, NTIA is directed to follow prior year report language, included in Senate Report 116-127 and adopted in Public Law 116-93, on the following topics: Federal Spectrum Management, Spectrum Management for Science, and the Internet Corporation for Assigned Names and Numbers (ICANN).
  • Spectrum Management System.-The agreement encourages NTIA and the Department to consider alternative proposals to fully fund the needed upgrades to its spectrum management system, including options outside of direct appropriations, and is directed to brief the Committees regarding possible alternative options no later than 90 days after enactment of this Act.
  • Next Generation Broadband in Rural Areas.-NTIA is encouraged to ensure that deployment of last-mile broadband infrastructure is targeted to areas that are currently unserved or underserved, and to utilize public-private partnerships and projects where Federal funding will not exceed 50 percent of a project’s total cost where practicable.
  • National Broadband Map Augmentation.-NTIA is directed to engage with rural and Tribal communities to further enhance the accuracy of the national broadband availability map. NTIA should include in its fiscal year 2022 budget request an update on rural-and Tribal-related broadband availability and access trends, challenges, and Federal actions to achieve equitable access to broadband services in currently underserved communities throughout the Nation. Furthermore, NTIA is encouraged, in coordination with the FCC, to develop and promulgate a standardized process for collecting data from State and local partners.
  • Domain Name Registration.-NTIA is directed, through its position within the Governmental Advisory Committee, to work with ICANN to expedite the establishment of a global access model that provides law enforcement, intellectual property rights holders, and third parties with timely access to accurate domain name registration information for legitimate purposes. NTIA is encouraged, as appropriate, to require registrars and registries based in the United States to collect and make public accurate domain name registration information.

The Federal Trade Commission (FTC) would receive $351 million, an increase of $20 million over FY 2020. The final bill includes this policy provision for the FTC to heed:

  • Resources for Data Privacy and Security. -The agreement urges the FTC to conduct a comprehensive internal assessment measuring the agency’s current efforts related to data privacy and security while separately identifying all resource-based needs of the FTC to improve in these areas. The agreement also urges the FTC to provide a report describing the assessment’s findings to the Committees within 180 days of enactment of this Act.

The Federal Communications Commission (FCC) would see a larger increase in funding for agency operations than the FTC, going from $339 million in FY 2020 to $374 million in FY 2021. However, $33 million of the increase is earmarked for implementing the “Broadband DATA Act” (P.L.116-130) along with the $65 million in COVID-19 supplemental funding for the same purpose. The FY 2021 omnibus directs the FCC on a range of policy issues:

  • Broadband Maps.-In addition to adopting the House report language on Broadband Maps, the agreement provides substantial dedicated resources for the FCC to implement the Broadband DATA Act. The FCC is directed to submit a report to the Committees on Appropriations within 90 days of enactment of this Act providing a detailed spending plan for these resources. In addition, the FCC, in coordination with the NTIA, shall outline the specific roles and responsibilities of each agency as it relates to the National Broadband Map and implementation of the Broadband DATA Act. The FCC is directed to report in writing to the Committees every 30 days on the date, amount, and purpose of any new obligation made for broadband mapping and any updates to the broadband mapping spending plan.
  • Lifeline Service.-In lieu of the House report language on Lifeline Service, the agreement notes recent action by the FCC to partially waive its rules updating the Lifeline program’s minimum service standard for mobile broadband usage in light of the large increase to the standard that would have gone into effect on Dec. 1, 2020, and the increased reliance by Americans on mobile broadband as a result of the pandemic. The FCC is urged to continue to balance the Lifeline program’s goals of accessibility and affordability.
  • 5G Fund and Rural America.-The agreement remains concerned about the feasible deployment of 5G in rural America. Rural locations will likely run into geographic barriers and infrastructure issues preventing the robust deployment of 5G technology, just as they have faced with 4G. The FCC’s proposed 5G Fund fails to provide adequate details or a targeted spend plan on creating seamless coverage in the most rural parts of the Nation. Given these concerns, the FCC is directed to report in writing on: (1) its current and future plans for prioritizing deployment of 4G coverage in rural areas, (2) its plans for 5G deployment in rural areas, and (3) its plan for improving the mapping and long-term tracking of coverage in rural areas.
  • 6 Gigahertz.-As the FCC has authorized unlicensed use of the 6 gigahertz band, the agreement expects the Commission to ensure its plan does not result in harmful interference to incumbent users or impact critical infrastructure communications systems. The agreement is particularly concerned about the potential effects on the reliability of the electric transmission and distribution system. The agreement expects the FCC to ensure any mitigation technologies are rigorously tested and found to be effective in order to protect the electric transmission system. The FCC is directed to provide a report to the Committees within 90 days of enactment of this Act on its progress in ensuring rigorous testing related to unlicensed use of the 6 gigahertz band.
  • Rural Broadband.-The agreement remains concerned that far too many Americans living in rural and economically disadvantaged areas lack access to broadband at speeds necessary to fully participate in the Internet age. The agreement encourages the agency to prioritize projects in underserved areas, where the infrastructure to be installed provides access at download and upload speeds comparable to those available to Americans in urban areas. The agreement encourages the FCC to avoid efforts that could duplicate existing networks and to support deployment of last-mile broadband infrastructure to underserved areas. Further, the agreement encourages the agency to prioritize projects financed through public-private partnerships.
  • Contraband Cell Phones. -The agreement notes continued concern regarding the exploitation of contraband cell phones in prisons and jails nationwide. The agreement urges the FCC to act on the March 24, 2017 Further Notice of Proposed Rulemaking regarding combating contraband wireless devices. The FCC should consider all legally permissible options, including the creation, or use, of “quiet or no service zones,” geolocation-based denial, and beacon technologies to geographically appropriate correctional facilities. In addition, the agreement encourages the FCC to adopt a rules-based approach to cellphone disabling that would require immediate disabling by a wireless carrier upon proper identification of a contraband device. The agreement recommends that the FCC move forward with its suggestion in the Fiscal Year 2019 report to this Committee, noting that “additional field testing of jamming technology will provide a better understanding of the challenges and costs associated with the proper deployment of jamming system.” The agreement urges the FCC to use available funds to coordinate rigorous Federal testing of jamming technology and coordinate with all relevant stakeholders to effectively address this urgent problem.
  • Next-Generation Broadband Networks for Rural America.-Deployment of broadband and telecommunications services in rural areas is imperative to support economic growth and public safety. However, due to geographical challenges facing mobile connectivity and fiber providers, connectivity in certain areas remains challenging. Next generation satellite-based technology is being developed to deliver direct satellite to cellular capability. The FCC is encouraged to address potential regulatory hurdles, to promote private sector development and implementation of innovative, next generation networks such as this, and to accelerate broadband and telecommunications access to all Americans.

$635 million is provided for a Department of Agriculture rural development pilot program, and the Secretary will need to explain how he or she will use authority provided in the last farm bill to expand broadband:

  • The agreement provides $635,000,000 to support the ReConnect pilot program to increase access to broadband connectivity in unserved rural communities and directs the Department to target grants and loans to areas of the country with the largest broadband coverage gaps. These projects should utilize technology that will maximize coverage of broadband with the most benefit to taxpayers and the rural communities served. The agreement notes stakeholder concerns that the ReConnect pilot does not effectively recognize the unique challenges and opportunities that different technologies, including satellite, provide to delivering broadband in noncontiguous States or mountainous terrain and is concerned that providing preference to 100 Mbps symmetrical service unfairly disadvantages these communities by limiting the deployment of other technologies capable of providing service to these areas.
  • The Agriculture Improvement Act of 2018 (Public Law 115-334) included new authorities for rural broadband programs that garnered broad stakeholder support as well as bipartisan, bicameral agreement in Congress. Therefore, the Secretary is directed to provide a report on how the Department plans to utilize these authorities to deploy broadband connectivity to rural communities.

In Division M of the package, the “Coronavirus Response and Relief Supplemental Appropriations Act, 2021,” there are provisions related to broadband policy and funding. The bill created a $3.2 billion program to help low-income Americans with internet service and buying devices for telework or distance education. The “Emergency Broadband Benefit Program” is established at the FCC, “under which eligible households may receive a discount of up to $50, or up to $75 on Tribal lands, off the cost of internet service and a subsidy for low-cost devices such as computers and tablets” according to a House Appropriations Committee summary. This funding is far short of what House Democrats wanted. And yet, this program aims to help those on the wrong side of the digital divide during the pandemic.

Moreover, this legislation also establishes two grant programs at the NTIA designed to help provide broadband on tribal lands and in rural areas. $1 billion is provided for the former and $300 million for the latter, with the funds going to tribal, state, and local governments to obtain services from private sector providers. The $1 billion for tribal lands allows for greater flexibility in how the funds are ultimately spent, while the $300 million for underserved rural areas is restricted to broadband deployment. Again, these funds are aimed at bridging the disparity in broadband service exposed and exacerbated during the pandemic.

Congress also provided funds for the FCC to reimburse smaller telecommunications providers for removing and replacing risky telecommunications equipment from the People’s Republic of China (PRC). Following the enactment of the “Secure and Trusted Communications Networks Act of 2019” (P.L.116-124), which codified and added to an FCC regulatory effort to address the risks posed by Huawei and ZTE equipment in United States (U.S.) telecommunications networks, there was pressure in Congress to provide the funds necessary to help carriers meet the requirements of the program. The FY 2021 omnibus appropriates $1.9 billion for this program. In another, largely unrelated tranche of funding, the FCC also received the aforementioned $65 million to implement the “Broadband DATA Act.”

Division Q contains text similar to the “Cybersecurity and Financial System Resilience Act of 2019” (H.R.4458) that would require “the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, and National Credit Union Administration to annually report on efforts to strengthen cybersecurity by the agencies, financial institutions they regulate, and third-party service providers.”

Division U contains two bills pertaining to technology policy:

  • Title I. The AI in Government Act of 2020. This title codifies the AI Center of Excellence within the General Services Administration to advise and promote the efforts of the federal government in developing innovative uses of artificial intelligence (AI) and competency in the use of AI in the federal government. The title also requires that the Office of Personnel Management identify key skills and competencies needed for federal positions related to AI and establish an occupational series for positions related to AI.
  • Title IX. The DOTGOV Act. This title transfers the authority to manage the .gov internet domain from the General Services Administration to the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. The .gov internet domain shall be available to any Federal, State, local, or territorial government entity, or other publicly controlled entity, subject to registration requirements established by the Director of CISA and approved by the Director of the Office of Management and Budget.

Division W is the FY 2021 Intelligence Authorization Act with the following salient provisions:

  • Section 323. Report on signals intelligence priorities and requirements. Section 323 requires the Director of National Intelligence (DNI) to submit a report detailing signals intelligence priorities and requirements subject to Presidential Policy Directive-28 (PPD-28) that stipulates “why, whether, when, and how the United States conducts signals intelligence activities.” PPD-28 reformed how the National Security Agency (NSA) and other Intelligence Community (IC) agencies conducted signals intelligence, specifically collection of cellphone and internet data, after former NSA contractor Edward Snowden exposed the scope of the agency’s programs.
  • Section 501. Requirements and authorities to improve education in science, technology, engineering, arts, and mathematics. Section 501 ensures that the Director of the Central Intelligence Agency (CIA) has the legal authorities required to improve the skills in science, technology, engineering, arts, and mathematics (known as STEAM) necessary to meet long-term national security needs.
  • Section 502. Seedling investment in next-generation microelectronics in support of artificial intelligence. Section 502 requires the DNI, acting through the Director of the Intelligence Advanced Research Projects Activity, to award contracts or grants, or enter into other transactions, to encourage microelectronics research.
  • Section 601. Report on attempts by foreign adversaries to build telecommunications and cybersecurity equipment and services for, or to provide them to, certain U.S. Section 601 requires the CIA, NSA, and DIA to submit a joint report that describes the United States intelligence sharing and military posture in Five Eyes countries that currently have or intend to use adversary telecommunications or cybersecurity equipment, especially as provided by China or Russia, with a description of potential vulnerabilities of that information and assessment of mitigation options.
  • Section 602. Report on foreign use of cyber intrusion and surveillance technology. Section 602 requires the DNI to submit a report on the threats posed by foreign governments and foreign entities using and appropriating commercially available cyber intrusion and other surveillance technology.
  • Section 603. Reports on recommendations of the Cyberspace Solarium Commission. Section 603 requires the ODNI and representatives of other agencies to report to Congress their assessment of the recommendations submitted by the Cyberspace Solarium Commission pursuant to Section 1652(j) of the John S. McCain National Defense Authorization Act (NDAA) for Fiscal Year 2019, and to describe actions that each agency expects to take to implement these recommendations.
  • Section 604. Assessment of critical technology trends relating to artificial intelligence, microchips, and semiconductors and related matters. Section 604 requires the DNI to complete an assessment of export controls related to artificial intelligence (AI), microchips, advanced manufacturing equipment, and other AI-enabled technologies, including the identification of opportunities for further cooperation with international partners.
  • Section 605. Combating Chinese influence operations in the United States and strengthening civil liberties protections. Section 605 provides additional requirements to annual reports on Influence Operations and Campaigns in the United States by the Chinese Communist Party (CCP) by mandating an identification of influence operations by the CCP against the science and technology sector in the United States. Section 605 also requires the FBI to create a plan to increase public awareness of influence activities by the CCP. Finally, Section 605 requires the FBI, in consultation with the Assistant Attorney General for Civil Rights and the Chief Privacy and Civil Liberties Officer of the Department of Justice, to develop recommendations to strengthen relationships with communities targeted by the CCP and to build trust with such communities through local and regional grassroots outreach.
  • Section 606. Annual report on corrupt activities of senior officials of the CCP. Section 606 requires the CIA, in coordination with the Department of Treasury’s Office of Intelligence and Analysis and the FBI, to submit to designated congressional committees annually through 2025 a report that describes and assesses the wealth and corruption of senior officials of the CCP, as well as targeted financial measures, including potential targets for sanctions designation. Section 606 further expresses the Sense of Congress that the United States should undertake every effort and pursue every opportunity to expose the corruption and illicit practices of senior officials of the CCP, including President Xi Jinping.
  • Section 607. Report on corrupt activities of Russian and other Eastern European oligarchs. Section 607 requires the CIA, in coordination with the Department of the Treasury’s Office of Intelligence and Analysis and the FBI, to submit to designated congressional committees and the Under Secretary of State for Public Diplomacy a report that describes the corruption and corrupt or illegal activities among Russian and other Eastern European oligarchs who support the Russian government and Russian President Vladimir Putin, and the impact of those activities on the economy and citizens of Russia. Section 607 further requires the CIA, in coordination with the Department of the Treasury’s Office of Intelligence and Analysis, to describe potential sanctions that could be imposed for such activities.
  • Section 608. Report on biosecurity risk and disinformation by the CCP and the PRC. Section 608 requires the DNI to submit to the designated congressional committees a report identifying whether and how CCP officials and the Government of the People’s Republic of China may have sought to suppress or exploit for national advantage information regarding the novel coronavirus pandemic, including specific related assessments. Section 608 further provides that the report shall be submitted in unclassified form, but may have a classified annex.
  • Section 612. Research partnership on activities of People’s Republic of China. Section 612 requires the Director of the National Geospatial-Intelligence Agency (NGA) to seek to enter into a partnership with an academic or non-profit research institution to carry out joint unclassified geospatial intelligence analyses of the activities of the People’s Republic of China that pose national security risks to the United States, and to make publicly available unclassified products relating to such analyses.

Division Z tweaks a data center energy efficiency and energy savings program overseen by the Secretary of Energy and the Administrator of the Environmental Protection Agency that could impact the Office of Management and Budget’s (OMB) government-wide program. Specifically, “Section 1003 requires the development of a metric for data center energy efficiency, and requires the Secretary of Energy, Administrator of the Environmental Protection Agency (EPA), and Director of the Office of Management and Budget (OMB) to maintain a data center energy practitioner program and open data initiative for federally owned and operated data center energy usage.” There is also language requiring the U.S. government to buy and use more energy-efficient information technology (IT): “each Federal agency shall coordinate with the Director [of OMB], the Secretary, and the Administrator of the Environmental Protection Agency to develop an implementation strategy (including best-practices and measurement and verification techniques) for the maintenance, purchase, and use by the Federal agency of energy-efficient and energy-saving information technologies at or for facilities owned and operated by the Federal agency, taking into consideration the performance goals.”

Division FF contains telecommunications provisions:

  • Section 902. Don’t Break Up the T-Band Act of 2020. Section 902 repeals the requirement for the Federal Communications Commission (FCC) to reallocate and auction the 470-512 megahertz band, commonly referred to as the T-band, which public-safety entities use in certain urban areas. It also directs the FCC to implement rules clarifying the acceptable expenditures on which 9-1-1 fees can be spent, and creates a strike force to consider how the Federal Government can end 9-1-1 fee diversion.
  • Section 903. Advancing Critical Connectivity Expands Service, Small Business Resources, Opportunities, Access, and Data Based on Assessed Need and Demand (ACCESS BROADBAND) Act. Section 903 establishes the Office of Internet Connectivity and Growth (Office) at the NTIA. This Office would be tasked with performing certain responsibilities related to broadband access, adoption, and deployment, such as performing public outreach to promote access and adoption of high-speed broadband service, and streamlining and standardizing the process for applying for Federal broadband support. The Office would also track Federal broadband support funds, and coordinate Federal broadband support programs within the Executive Branch and with the FCC to ensure unserved Americans have access to connectivity and to prevent duplication of broadband deployment programs.
  • Section 904. Broadband Interagency Coordination Act. Section 904 requires the Federal Communications Commission (FCC), the National Telecommunications and Information Administration (NTIA), and the Department of Agriculture to enter into an interagency agreement to coordinate the distribution of federal funds for broadband programs, to prevent duplication of support and ensure stewardship of taxpayer dollars. The agreement must cover, among other things, the exchange of information about project areas funded under the programs and the confidentiality of such information. The FCC is required to publish and collect public comments about the agreement, including regarding its efficacy and suggested modifications.
  • Section 905. Beat CHINA for 5G Act of 2020. Section 905 directs the President, acting through the Assistant Secretary of Commerce for Communications and Information, to withdraw or modify federal spectrum assignments in the 3450 to 3550 megahertz band, and directs the FCC to begin a system of competitive bidding to permit non-Federal, flexible-use services in a portion or all of such band no later than December 31, 2021.

Section 905 countermands the White House’s efforts to auction off an ideal part of the spectrum for 5G (see here for analysis of the August 2020 announcement). Congressional stakeholders and a number of Trump Administration officials were alarmed by what they saw as a push to bestow a windfall on a private sector company in the rollout of 5G.

Title XIV of Division FF allows the FTC to seek civil fines of more than $43,000 per violation for the duration of the public health emergency arising from the pandemic “for unfair and deceptive practices associated with the treatment, cure, prevention, mitigation, or diagnosis of COVID–19 or a government benefit related to COVID-19.”

Finally, Division FF is the vehicle for the “American COMPETES Act” that:

directs the Department of Commerce and the FTC to conduct studies and submit reports on technologies including artificial intelligence, the Internet of Things, quantum computing, blockchain, advanced materials, unmanned delivery services, and 3-D printing. The studies include requirements to survey each industry and report recommendations to help grow the economy and safely implement the technology.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (9 December)

Further Reading

  • “Secret Amazon Reports Expose the Company’s Surveillance of Labor and Environmental Groups” By Lauren Kaori Gurley — Vice’s Motherboard. Yet another article by Vice drawing back the curtain on Amazon’s labor practices, especially its apparently fervent desire to stop unionization. This piece shines light on the company’s Global Security Operations Center, which tracks labor organizing and union activities among Amazon’s workers and monitors environmental and human rights groups on social media. The company has even hired Pinkerton operatives to surveil its warehouse employees. Although the focus is on Europe because the leaked emails on which the story is based pertain to activities on that continent, there is no reason to expect the same tactics are not being used elsewhere. Moreover, the company may be violating the much stricter laws in Europe protecting workers and union activities.
  • “Cyber Command deployed personnel to Estonia to protect elections against Russian threat” By Shannon Vavra — Cyberscoop. It was recently revealed that personnel from the United States (U.S.) Cyber Command were deployed to Estonia to work with that country’s Defense Forces Cyber Command to fend off potential Russian attacks during the U.S. election. This follows another recent “hunt forward” mission for Cyber Command in Montenegro, another nation on the “frontline” of Russian hacking activities. Whether this has any effect beyond building trust and capacity between nations opposed to state-sponsored hacking and disinformation is unclear.
  • “How China Is Buying Up the West’s High-Tech Sector” By Elizabeth Braw — Foreign Policy. This piece by a fellow at the right-wing American Enterprise Institute (AEI) makes the case that reviewing and potentially banning direct foreign investment by the People’s Republic of China (PRC) in the United States (U.S.), European Union (EU), and European nations is probably not cutting off PRC access to cutting-edge technology. PRC entities are investing directly or indirectly as limited partners in venture capital firms and are probably still gaining access to new technology. For example, an entity associated with the University of Cambridge is working with Huawei on a private 5G wireless network even though London is advancing legislation and policy to ban the PRC giant from United Kingdom (UK) networks. The author advocates for expanding the regulation of foreign investment to include limited partnerships and other structures that are apparently allowing the PRC to continue investing in and reaping the benefits of Western venture capital. There is hope, however, as a number of Western nations are starting government-funded venture capital firms to fund promising technology.
  • “Twitter expands hate speech rules to include race, ethnicity” By Katie Paul — Reuters. The social media platform announced that it is “further expanding our hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.” A human rights group, the Color of Change, that was part of a coalition pressuring Twitter and other platforms called the change “essential concessions” but took issue with the timing, stating it would have had more impact had it been made before the election. A spokesperson added “[t]he jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users…[and] [v]oid of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings.”
  • “White House drafts executive order that could restrict global cloud computing companies” By Steven Overly and Eric Geller — Politico. The Trump Administration may make another foray into trying to ban foreign companies from key United States (U.S.) critical infrastructure, and this time would reportedly bar U.S. cloud companies like Microsoft, Amazon, and others from partnering with foreign companies or entities that pose risks to the U.S. through the use of these U.S. systems to conduct cyber-attacks. This seems like another attempt to strike at the People’s Republic of China’s (PRC) technology firms. If issued, it remains to be seen how a Biden Administration would use or implement such a directive given that there is not enough time for the Trump government to see such an order through to the end. In any event, one can be sure that tech giants have already begun pressing both the outgoing and incoming Administrations against any such order, and most likely Congress as well.

Other Developments

  • A bipartisan group of Senators and Representatives issued the framework for a $908 billion COVID-19 stimulus package that is reportedly the subject of serious negotiations in Congress. The framework allocates $10 billion for broadband but provides no detail on how these funds would be distributed.
  • The Australian Competition & Consumer Commission (ACCC) announced the signing of the Australian Product Safety Pledge, “a voluntary initiative that commits its signatories to a range of safety related responsibilities that go beyond what is legally required of them” in e-commerce. The ACCC stated “AliExpress, Amazon Australia, Catch.com.au and eBay Australia, who together account for a significant share of online sales in Australia, are the first businesses to sign the pledge, signifying their commitment to consumers’ safety through a range of commitments such as removing unsafe product listings within two days of being notified by the ACCC.” The pledge consists of 12 commitments:
    • Regularly consult the Product Safety Australia website and other relevant sources for information on recalled/unsafe products. Take appropriate action[1] on these products once they are identified.
    • Provide a dedicated contact point(s) for Australian regulatory authorities to notify and request take-downs of recalled/unsafe products.
    • Remove identified unsafe product listings within two business days of the dedicated contact point(s) receiving a take-down request from Australian regulatory authorities. Inform authorities on the action that has been taken and any relevant outcomes.
    • Cooperate with Australian regulatory authorities in identifying, as far as possible, the supply chain of unsafe products by responding to data/information requests within ten business days should relevant information not be publicly available.
    • Have an internal mechanism for processing data/information requests and take-downs of unsafe products.
    • Provide a clear pathway for consumers to notify the pledge signatory directly of unsafe product listings. Such notifications are treated according to the signatory’s processes and where responses to consumers are appropriate, they are given within five business days.
    • Implement measures to facilitate sellers’ compliance with Australian product safety laws. Share information with sellers on compliance training/guidance, including a link to the ACCC’s Selling online page on the Product Safety Australia website.
    • Cooperate with Australian regulatory authorities and sellers to inform consumers[2] about relevant recalls or corrective actions on unsafe products.
    • Set up processes aimed at preventing or restricting the sale of banned, non-compliant and recalled products as appropriate.
    • Put in place reasonable measures to act against repeat offenders selling unsafe products, including in cooperation with Australian regulatory authorities.
    • Take measures aimed at preventing the reappearance of unsafe product listings already removed.
    • Explore the potential use of new technologies and innovation to improve the detection and removal of unsafe products.
  • Senator Ron Wyden (D-OR) and Representative Lauren Underwood (D-IL) introduced “The Federal Cybersecurity Oversight Act” (S.4912) that would amend the “Federal Cybersecurity Enhancement Act of 2015” (P.L. 114-113) to restrict the use of exceptions to longstanding requirements that federal agencies use measures such as multi-factor authentication and encryption. Currently, federal agencies exempt themselves on a number of grounds. Wyden and Underwood’s bill would tighten this process by making the exceptions good only for a year at a time and requiring that the Office of Management and Budget (OMB) approve each exception. In a fact sheet, they claimed:
    • [T]he bill requires the Director of the Office of Management and Budget to approve all waivers, which can currently be self-issued by the head of the agency. To request a waiver, the agency head will have to certify that:
      • It would be excessively burdensome to implement the particular requirement;
      • The particular requirement is not necessary to secure the agency system and data; and
      • The agency has taken all necessary steps to secure the agency system and data.
  • The Government Accountability Office (GAO) looked at the United States’ (U.S.) longstanding effort to buy common services and equipment in bulk, known as Category Management. The GAO found progress but saw room for considerably more. GAO noted:
    • Since 2016, the Office of Management and Budget (OMB) has led efforts to improve how agencies buy these products and services through the category management initiative, which directs agencies across the government to buy more like a single enterprise. OMB has reported the federal government has saved $27.3 billion in 3 years through category management.
  • The GAO concluded:
    • The category management initiative has saved the federal government billions of dollars, and in some instances, enhanced agencies’ mission capabilities. However, the initiative has opportunities to accomplish much more. To date, OMB has focused primarily on contracting aspects of the initiative, and still has several opportunities to help agencies improve how they define their requirements for common products and services. OMB can take concrete steps to improve how agencies define these requirements through more robust guidance and training, changes to leadership delegations and cost savings reporting, and the development of additional metrics to measure implementation of the initiative.
    • Additionally, OMB can lead the development of a coordinated strategy that addresses government-wide data challenges hindering agencies’ efforts to assess their spending and identify prices paid for common products and services.
    • Finally, OMB can tailor additional training courses to provide more relevant information to agency personnel responsible for small business matters, and improve public reporting about the impact of category management on small businesses. In doing so, OMB can enhance the quality of the information provided to the small business community and policymakers. Through these efforts to further advance the category management initiative, OMB can help federal agencies accomplish their missions more effectively while also being better stewards of taxpayer dollars.
    • The GAO made the following recommendations:
      • The Director of the Office of Management and Budget should emphasize in its overarching category management guidance the importance of effectively defining requirements for common products and services when implementing the category management initiative. (Recommendation 1)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the General Services Administration’s Category Management Program Management Office, and other appropriate offices, to develop additional tailored training for Senior Accountable Officials and agency personnel who manage requirements for common products and services. (Recommendation 2)
      • The Director of the Office of Management and Budget should account for agencies’ training needs, including training needs for personnel who define requirements for common products and services, when setting category management training goals. (Recommendation 3)
      • The Director of the Office of Management and Budget should ensure that designated Senior Accountable Officials have the authority necessary to hold personnel accountable for defining requirements for common products and services as well as contracting activities. (Recommendation 4)
      • The Director of the Office of Management and Budget should report cost savings from the category management initiative by agency. (Recommendation 5)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the Performance Improvement Council to establish additional performance metrics for the category management initiative that are related to agency requirements. (Recommendation 6)
      • The Director of the Office of Management and Budget should, in coordination with the Category Management Leadership Council and the Chief Data Officer Council, establish a strategic plan to coordinate agencies’ responses to government-wide data challenges hindering implementation of the category management initiative, including challenges involving prices-paid and spending data. (Recommendation 7)
      • The Director of the Office of Management and Budget should work with the General Services Administration’s Category Management Program Management Office and other organizations, as appropriate, to develop additional tailored training for Office of Small Disadvantaged Business Utilization personnel that emphasizes information about small business opportunities under the category management initiative. (Recommendation 8)
      • The Director of the Office of Management and Budget should update its methodology for calculating potentially duplicative contract reductions to strengthen the linkage between category management actions and the number of contracts eliminated. (Recommendation 9)
      • The Director of the Office of Management and Budget should identify the time frames covered by underlying data when reporting on how duplicative contract reductions have impacted small businesses. (Recommendation 10)
  • The chair and ranking member of the House Commerce Committee are calling on the Federal Communications Commission (FCC) to take preparatory steps before Congress provides funding to telecommunications providers to remove and replace Huawei and ZTE equipment. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and Ranking Member Greg Walden (R-OR) noted the “Secure and Trusted Communications Networks Act” (P.L. 116-124):
    • provides the Federal Communications Commission (FCC) with several new authorities to secure our communications supply chain, including the establishment and administration of the Secure and Trusted Communications Networks Reimbursement Program (Program). Through this Program, small communications providers may seek reimbursement for the cost of removing and replacing suspect network equipment. This funding is critical because some small and rural communications providers would not otherwise be able to afford these upgrades. Among the responsibilities entrusted to the FCC to carry out the Program is the development of a list of suggested replacements for suspect equipment, including physical and virtual communications equipment, application and management software, and services.
    • Pallone and Walden conceded that Congress has not yet provided funds but asked the FCC to take some steps:
      • First, the FCC should develop and release the list of eligible replacement equipment, software, and services as soon as possible. Second, the agency should reassure companies that they will not jeopardize their eligibility for reimbursement under the Program just because replacement equipment purchases were made before the Program is funded, assuming other eligibility criteria are met.
  • The Office of Special Counsel (OSC) wrote to one of the whistleblowers at the United States Agency for Global Media (USAGM) and indicated it has ordered the head of USAGM to investigate the claims of malfeasance at the agency. The OSC stated:
    • On December 2, 2020, after reviewing the information you submitted, we directed the Chief Executive Officer (CEO) of USAGM to order an investigation into the following allegations and report back to OSC pursuant to 5 U.S.C. § 1213(c). Allegations to be investigated include that, since June 2020, USAGM:
      • Repeatedly violated the Voice of America (VOA) firewall—the law that protects VOA journalists’ “professional independence and integrity”;
      • Engaged in gross mismanagement and abuse of authority by:
        • Terminating the Presidents of each USAGM-funded network—Radio Free Asia (RFA), Radio Free Europe/Radio Liberty (RFE/RL), the Middle East Broadcasting Networks (MBN), and the Office of Cuba Broadcasting (OCB)—as well as the President and the CEO of the Open Technology Fund (OTF);
        • Dismissing the bipartisan board members that governed the USAGM-funded networks, replacing those board members with largely political appointees, and designating the USAGM CEO as Chairman;
        • Revoking all authority from various members of USAGM’s Senior Executive Service (SES) and reassigning those authorities to political appointees outside of the relevant offices;
        • Removing the VOA Editor for News Standards and Best Practices—a central figure in the VOA editorial standards process and a critical component of the VOA firewall—from his position and leaving that position vacant;
        • Similarly removing the Executive Editor of RFA;
        • Suspending the security clearances of six of USAGM’s ten SES members and placing them on administrative leave; and
        • Prohibiting several offices critical to USAGM’s mission—including the Offices of General Counsel, Chief Strategy, and Congressional and Public Affairs—from communicating with outside parties without the front office’s express knowledge and consent;
      • Improperly froze all agency hiring, contracting, and Information Technology migrations, and either refused to approve such decisions or delayed approval until the outside reputation and/or continuity of agency or network operations, and at times safety of staff, were threatened;
      • Illegally repurposed, and pressured career staff to illegally repurpose, congressionally appropriated funds and programs without notifying Congress; and
      • Refused to authorize the renewal of the visas of non-U.S. citizen journalists working for the agency, endangering both the continuity of agency operations and those individuals’ safety.

Coming Events

  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up on 10 December.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)


Image by Makalu from Pixabay

Further Reading, Other Developments, and Coming Events (7 December)

Further Reading

  • “Facebook steps up campaign to ban false information about coronavirus vaccines” By Elizabeth Dwoskin — The Washington Post. In its latest step to find and remove lies, misinformation, and disinformation, the social media giant is now committing to removing and blocking untrue material about COVID-19 vaccines, especially from the anti-vaccine community. Will the next step be to take on anti-vaccination proponents generally?
  • “Comcast’s 1.2 TB data cap seems like a ton of data—until you factor in remote work” By Rob Pegoraro — Fast Company. Despite many people and children working and learning from home, Comcast is reimposing a 1.2 terabyte limit on data for homes. Sounds like quite a lot until you factor in video meetings, streaming, etc. So far, other providers have not set a cap.
  • “Google’s star AI ethics researcher, one of a few Black women in the field, says she was fired for a critical email” By Drew Harwell and Nitasha Tiku — The Washington Post. Timnit Gebru, a top-flight artificial intelligence (AI) computer scientist, says she was fired for questioning Google’s review of a paper she wanted to present at an AI conference, one likely critical of the company’s AI projects. Google claims she resigned; Gebru says she was fired. She has long been an advocate for women and minorities in tech and AI, and her ouster will likely only increase scrutiny of and questions about Google’s commitment to diversity and an ethical approach to the development and deployment of AI. It will also probably deepen employee disenchantment with the company, following earlier protests over Google’s involvement with the United States Department of Defense’s Project Maven and its hiring of former United States Department of Homeland Security chief of staff Miles Taylor, who was involved with the policies that resulted in caging children and separating families at the southern border of the United States.
  • “Humans Can Help Clean Up Facebook and Twitter” By Greg Bensinger — The New York Times. In this opinion piece, the argument is made that if social media platforms redeployed their human monitors to the accounts that violate terms of service most frequently (e.g., President Donald Trump) and more aggressively labeled and removed untrue or inflammatory content, they would have a greater impact on lies, misinformation, and disinformation.
  • “Showdown looms over digital services tax” By Ashley Gold — Axios. Because the Organization for Economic Cooperation and Development (OECD) has not reached a deal on digital services taxes, a number of United States (U.S.) allies could move forward with taxes on U.S. multinationals like Amazon, Google, and Apple. The Trump Administration has variously taken an adversarial position, threatening to retaliate against countries like France, which enacted a tax it has not collected during the OECD negotiations. The U.S. also withdrew from the talks. The Biden Administration will probably be more willing to work in a multilateral fashion and may strike a deal on an issue that is not going away, as the United Kingdom, Italy, and Canada also have plans for a digital tax.
  • “Trump’s threat to veto defense bill over social-media protections is heading to a showdown with Congress” By Karoun Demirjian and Tony Romm — The Washington Post. I suppose I should mention the President’s demand that the FY 2021 National Defense Authorization Act (NDAA) contain a repeal of 47 U.S.C. 230 (Section 230 of the Communications Act), which came at the eleventh hour and fifty-ninth minute of negotiations on a final version of the bill. Via Twitter, Donald Trump threatened to veto the bill, which has been passed annually for decades. Republicans were not having it, however, even though many agree with Trump’s desire to remove liability protection for technology companies. And yet, if Trump continues to insist on a repeal, Republicans may find themselves in a bind, and the bill could conceivably get pulled until President-elect Joe Biden is sworn in. On the other hand, Trump has not renewed his veto threats over renaming military bases currently bearing the names of Confederate figures, even though the final version of the bill contains language instituting a process to do just that.

Other Developments

  • The Senate Judiciary Committee held over its most recent bill to narrow 47 U.S.C. 230 (Section 230 of the Communications Act), which provides liability protection for technology companies for third-party material posted on their platforms and any decisions to edit, alter, or remove such content. The committee opted to hold the “Online Content Policy Modernization Act” (S.4632), which may mean the bill’s chances of making it to the Senate floor are low. What’s more, even if the Senate passes Section 230 legislation, it is not clear there will be sufficient agreement with Democrats in the House to get a final bill to the President before the end of this Congress. On 1 October, the committee also decided to hold over the bill to try to reconcile the fifteen amendments submitted for consideration. The committee could soon meet again to formally mark up and report out this legislation.
    • At the earlier hearing, Chair Lindsey Graham (R-SC) submitted an amendment revising the bill’s reforms to Section 230 that incorporates some of the below amendments but includes new language. For example, the bill includes a definition of “good faith,” a term not currently defined in Section 230. This term would be construed as a platform taking down or restricting content only according to its publicly available terms of service, not as a pretext, and equally as to all similarly situated content. Moreover, good faith would require alerting the user and giving him or her an opportunity to respond, subject to certain exceptions. The amendment also makes clear that certain existing means of suing would still be available to users (e.g., suing for breach of contract).
    • Senator Mike Lee (R-UT) offered a host of amendments:
      • EHF20913 would remove “user[s]” from the reduced liability shield online platforms would receive under the bill. Consequently, users would retain their current protection and still not be legally liable for content posted by another user.
      • EHF20914 would revise the language regarding the type of content platforms could take down with legal protection to make clear it would not be just “unlawful” content but rather content “in violation of a duly enacted law of the United States,” possibly meaning federal laws and not state laws. Or, more likely, the intent is to foreclose the possibility that a platform could claim it is acting in concert with a foreign law and still assert immunity.
      • EHF20920 would add language making clear that taking down material that violates terms of service or use according to an objectively reasonable belief would be shielded from liability.
      • OLL20928 would expand legal protection to platforms for removing or restricting spam.
      • OLL20929 would bar the Federal Communications Commission (FCC) from a rulemaking on Section 230.
      • OLL20930 adds language making clear if part of the revised Section 230 is found unconstitutional, the rest of the law would still be applicable.
      • OLL20938 revises the definition of an “information content provider,” the term of art in Section 230 that identifies a platform, to expand when platforms may be responsible for the creation or development of information and consequently liable for a lawsuit.
    • Senator Josh Hawley (R-MO) offered an amendment that would create a new right of action for people to sue large platforms for taking down their content if not done in “good faith.” The amendment limits this right to “edge providers,” platforms with more than 30 million users in the U.S., 300 million users worldwide, and revenues of more than $1.5 billion. This would likely exclude all platforms except for Twitter, Facebook, Instagram, TikTok, Snapchat, and a select few others.
    • Senator John Kennedy (R-LA) offered an amendment that would remove all Section 230 legal immunity from platforms that collect personal data and then use an “automated function” to deliver targeted or tailored content to a user unless the user “knowingly and intentionally elect[s]” to receive such content.
  • The Massachusetts Institute of Technology’s (MIT) Work of the Future Task Force issued its final report and drew the following conclusions:
    • Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.
    • Momentous impacts of technological change are unfolding gradually.
    • Rising labor productivity has not translated into broad increases in incomes because labor market institutions and policies have fallen into disrepair.
    • Improving the quality of jobs requires innovation in labor market institutions.
    • Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.
    • Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.
    • The Task Force stated:
      • In the two-and-a-half years since the Task Force set to work, autonomous vehicles, robotics, and AI have advanced remarkably. But the world has not been turned on its head by automation, nor has the labor market. Despite massive private investment, technology deadlines have been pushed back, part of a normal evolution as breathless promises turn into pilot trials, business plans, and early deployments — the diligent, if prosaic, work of making real technologies work in real settings to meet the demands of hard-nosed customers and managers.
      • Yet, if our research did not confirm the dystopian vision of robots ushering workers off of factory floors or artificial intelligence rendering superfluous human expertise and judgment, it did uncover something equally pernicious: Amidst a technological ecosystem delivering rising productivity, and an economy generating plenty of jobs (at least until the COVID-19 crisis), we found a labor market in which the fruits are so unequally distributed, so skewed towards the top, that the majority of workers have tasted only a tiny morsel of a vast harvest.
      • As this report documents, the labor market impacts of technologies like AI and robotics are taking years to unfold. But we have no time to spare in preparing for them. If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market. This report, and the MIT Work of the Future Task Force, suggest a better alternative: building a future for work that harvests the dividends of rapidly advancing automation and ever-more powerful computers to deliver opportunity and economic security for workers. To channel the rising productivity stemming from technological innovations into broadly shared gains, we must foster institutional innovations that complement technological change.
  • The European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski published his preliminary opinion on the European Commission’s (EC) Communication on “A European strategy for data” and the creation of a common data space in the area of health, namely the European Health Data Space (EHDS). The EDPS lauded the goal of the EHDS: “the prevention, detection and cure of diseases, as well as for evidence-based decisions in order to enhance effectiveness, accessibility and sustainability of the healthcare systems.” However, Wiewiórowski argued that the EC needs to think through the applicability of the General Data Protection Regulation (GDPR), among other European Union (EU) laws, before it can legally move forward. The EDPS stated:
    • The EDPS calls for the establishment of a thought-through legal basis for the processing operations under the EHDS in line with Article 6(1) GDPR and also recalls that such processing must comply with Article 9 GDPR for the processing of special categories of data.
    • Moreover, the EDPS highlights that due to the sensitivity of the data to be processed within the EHDS, the boundaries of what constitutes a lawful processing and a compatible further processing of the data must be crystal-clear for all the stakeholders involved. Therefore, the transparency and the public availability of the information relating to the processing on the EHDS will be key to enhance public trust in the EHDS.
    • The EDPS also calls on the Commission to clarify the roles and responsibilities of the parties involved and to clearly identify the precise categories of data to be made available to the EHDS. Additionally, he calls on the Member States to establish mechanisms to assess the validity and quality of the sources of the data.
    • The EDPS underlines the importance of vesting the EHDS with a comprehensive security infrastructure, including both organisational and state-of-the-art technical security measures to protect the data fed into the EHDS. In this context, he recalls that Data Protection Impact Assessments may be a very useful tool to determine the risks of the processing operations and the mitigation measures that should be adopted.
    • The EDPS recommends paying special attention to the ethical use of data within the EHDS framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation.
    • The EDPS is convinced that the success of the EHDS will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights. The governance mechanism should regulate, at least, the entities that will be allowed to make data available to the EHDS, the EHDS users, the Member States’ national contact points/ permit authorities, and the role of DPAs within this context.
    • The EDPS is interested in policy initiatives to achieve ‘digital sovereignty’ and has a preference for data being processed by entities sharing European values, including privacy and data protection. Moreover, the EDPS calls on the Commission to ensure that the stakeholders taking part in the EHDS, and in particular, the controllers, do not transfer personal data unless data subjects whose personal data are transferred to a third country are afforded a level of protection essentially equivalent to that guaranteed within the European Union.
    • The EDPS calls on Member States to guarantee the effective implementation of the right to data portability specifically in the EHDS, together with the development of the necessary technical requirements. In this regard, he considers that a gap analysis might be required regarding the need to integrate the GDPR safeguards with other regulatory safeguards, provided e.g. by competition law or ethical guidelines.
  • The Office of Management and Budget (OMB) extended a guidance memorandum directing agencies to consolidate data centers after Congress pushed back the sunset date for the program. OMB extended OMB Memorandum M-19-19, Update to Data Center Optimization Initiative (DCOI), through 30 September 2022; it applies “to the 24 Federal agencies covered by the Chief Financial Officers (CFO) Act of 1990, which includes the Department of Defense.” The DCOI was codified in the “Federal Information Technology Acquisition Reform” (FITARA) (P.L. 113-291), extended in 2018 until 1 October 2020, and pushed back another two years in the FY 2020 National Defense Authorization Act (NDAA) (P.L. 116-92).
    • In March 2020, the Government Accountability Office (GAO) issued another of its periodic assessments of the DCOI, started in 2012 by the Obama Administration to shrink the federal government’s footprint of data centers, increase efficiency and security, save money, and reduce energy usage.
    • The GAO found that 23 of the 24 agencies participating in the DCOI met or planned to meet their FY 2019 goals to close 286 of the 2,727 data centers considered part of the DCOI. This latter figure deserves some discussion, for the Trump Administration changed the definition of what qualifies as a data center to exclude smaller ones (so-called non-tiered data centers). GAO asserted that “recent OMB DCOI policy changes will reduce the number of data centers covered by the policy and both OMB and agencies may lose important visibility over the security risks posed by these facilities.” Nonetheless, these agencies are projecting savings of $241.5 million when all 286 data centers planned for closure in FY 2019 actually close. It bears noting that the GAO admitted in a footnote it “did not independently validate agencies’ reported cost savings figures,” so these numbers may not be reliable.
    • In terms of how to improve the DCOI, the GAO stated that “[i]n addition to reiterating our prior open recommendations to the agencies in our review regarding their need to meet DCOI’s closure and savings goals and optimization metrics, we are making a total of eight new recommendations—four to OMB and four to three of the 24 agencies. Specifically:
      • The Director of the Office of Management and Budget should (1) require that agencies explicitly document annual data center closure goals in their DCOI strategic plans and (2) track those goals on the IT Dashboard. (Recommendation 1)
      • The Director of the Office of Management and Budget should require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI. (Recommendation 2)
      • The Director of the Office of Management and Budget should document OMB’s decisions on whether to approve individual data centers when designated by agencies as either a mission critical facility or as a facility not subject to DCOI. (Recommendation 3)
      • The Director of the Office of Management and Budget should take action to address the key performance measurement characteristics missing from the DCOI optimization metrics, as identified in this report. (Recommendation 4)
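For context, the GAO’s DCOI figures above lend themselves to some quick arithmetic: 286 planned closures out of 2,727 DCOI data centers is roughly a tenth of the inventory, and the $241.5 million in projected savings averages out to well under $1 million per closed center. A minimal sketch of that back-of-the-envelope math, using only the figures reported above:

```python
# Back-of-the-envelope check of the GAO DCOI figures cited above.
# Note: the GAO said it did not independently validate agencies'
# reported savings, so these are the agencies' own projections.
TOTAL_DCOI_DATA_CENTERS = 2_727
PLANNED_FY2019_CLOSURES = 286
PROJECTED_SAVINGS_USD = 241_500_000

closure_share = PLANNED_FY2019_CLOSURES / TOTAL_DCOI_DATA_CENTERS
savings_per_closure = PROJECTED_SAVINGS_USD / PLANNED_FY2019_CLOSURES

print(f"Share of DCOI inventory slated to close: {closure_share:.1%}")        # 10.5%
print(f"Average projected savings per closure: ${savings_per_closure:,.0f}")  # $844,406
```

Keep in mind the denominator itself is contested: because OMB redefined non-tiered facilities out of the DCOI, the 2,727 figure understates the government’s total data center footprint.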
  • Australia’s Inspector-General of Intelligence and Security (IGIS) released its first report on how well the nation’s security services observed the law with respect to COVID app data. The IGIS “is satisfied that the relevant agencies have policies and procedures in place and are taking reasonable steps to avoid intentional collection of COVID app data.” The IGIS revealed that “[i]ncidental collection in the course of the lawful collection of other data has occurred (and is permitted by the Privacy Act); however, there is no evidence that any agency within IGIS jurisdiction has decrypted, accessed or used any COVID app data.” The IGIS is also “satisfied that the intelligence agencies within IGIS jurisdiction which have the capability to incidentally collect at least some types of COVID app data:
    • Are aware of their responsibilities under Part VIIIA of the Privacy Act and are taking active steps to minimise the risk that they may collect COVID app data.
    • Have appropriate policies and procedures in place to respond to any incidental collection of COVID app data that they become aware of.
    • Are taking steps to ensure any COVID app data is not accessed, used or disclosed.
    • Are taking steps to ensure any COVID app data is deleted as soon as practicable.
    • Have not decrypted any COVID app data.
    • Are applying the usual security measures in place in intelligence agencies such that a ‘spill’ of any data, including COVID app data, is unlikely.
  • New Zealand’s Government Communications Security Bureau’s National Cyber Security Centre (NCSC) has released its annual Cyber Threat Report that found that “nationally significant organisations continue to be frequently targeted by malicious cyber actors of all types…[and] state-sponsored and non-state actors targeted public and private sector organisations to steal information, generate revenue, or disrupt networks and services.” The NCSC added:
    • Malicious cyber actors have shown their willingness to target New Zealand organisations in all sectors using a range of increasingly advanced tools and techniques. Newly disclosed vulnerabilities in products and services, alongside the adoption of new services and working arrangements, are rapidly exploited by state-sponsored actors and cyber criminals alike. A common theme this year, which emerged prior to the COVID-19 pandemic, was the exploitation of known vulnerabilities in internet-facing applications, including corporate security products, remote desktop services and virtual private network applications.
  • The former Director of the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) wrote an opinion piece disputing President Donald Trump’s claims that the 2020 Presidential Election was fraudulent. Christopher Krebs asserted:
    • While I no longer regularly speak to election officials, my understanding is that in the 2020 results no significant discrepancies attributed to manipulation have been discovered in the post-election canvassing, audit and recount processes.
    • This point cannot be emphasized enough: The secretaries of state in Georgia, Michigan, Arizona, Nevada and Pennsylvania, as well as officials in Wisconsin, all worked overtime to ensure there was a paper trail that could be audited or recounted by hand, independent of any allegedly hacked software or hardware.
    • That’s why Americans’ confidence in the security of the 2020 election is entirely justified. Paper ballots and post-election checks ensured the accuracy of the count. Consider Georgia: The state conducted a full hand recount of the presidential election, a first of its kind, and the outcome of the manual count was consistent with the computer-based count. Clearly, the Georgia count was not manipulated, resoundingly debunking claims by the president and his allies about the involvement of CIA supercomputers, malicious software programs or corporate rigging aided by long-gone foreign dictators.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)


Photo by Daniel Schludi on Unsplash

American and Canadian Agencies Take Differing Approaches On Regulating AI

The outgoing Trump Administration tells agencies to lightly regulate AI; Canada’s privacy regulator calls for strong safeguards and limits on use of AI, including legislative changes.

The Office of Management and Budget (OMB) has issued guidance for federal agencies on how they are to regulate artificial intelligence (AI) developed and deployed outside the federal government. This guidance seeks to align policy across agencies in how they use their existing power to regulate AI according to the Trump Administration’s policy goals. Notably, this memorandum is binding on all federal agencies (including national defense) and even independent agencies such as the Federal Trade Commission (FTC) and Federal Communications Commission (FCC). OMB worked with other stakeholder agencies on this guidance per Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence,” and issued a draft of the memorandum 11 months ago for comment.

In “Guidance for Regulation of Artificial Intelligence Applications,” OMB “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the Federal government.” OMB is directing agencies to take a light touch in regulating AI under their current statutory authorities, being careful to consider costs and benefits and keeping in mind the larger policy backdrop of ensuring United States (U.S.) dominance in AI in light of competition from the People’s Republic of China (PRC), the European Union, Japan, the United Kingdom, and others. OMB is requiring reports from agencies on how they will use, and not use, their authority to meet the articulated goals and requirements of the memorandum. However, given that the due date for these reports falls well into the next Administration, it is very likely the Biden OMB will at least pause this initiative and probably alter it to fit new policy. Policy goals to protect privacy, combat algorithmic bias, and protect data may well be made more prominent in U.S. AI regulation.

As a threshold matter, it bears noting that this memorandum uses a statutory definition of AI that is narrower than the way AI is popularly discussed. OMB explained that “[w]hile this Memorandum uses the definition of AI recently codified in statute, it focuses on ‘narrow’ (also known as ‘weak’) AI, which goes beyond advanced conventional computing to learn and perform domain-specific or specialized tasks by extracting information from data sets, or other structured or unstructured sources of information.” Consequently, “[m]ore theoretical applications of ‘strong’ or ‘general’ AI—AI that may exhibit sentience or consciousness, can be applied to a wide variety of cross-domain activities and perform at the level of, or better than a human agent, or has the capacity to self-improve its general cognitive abilities similar to or beyond human capabilities—are beyond the scope of this Memorandum.”

The Trump OMB tells agencies to minimize regulation of AI and to take into account how any regulatory action may affect growth and innovation in the field before implementing it. OMB directs agencies to favor “narrowly tailored and evidence-based regulations that address specific and identifiable risks” and foster an environment in which U.S. AI can flourish. Consequently, OMB bars “a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.” Of course, what constitutes an “evidence-based” regulation and an “impossibly high standard” are in the eye of the beholder, so this memorandum could be read by the next OMB in ways the outgoing OMB would not agree with. Finally, OMB is pushing agencies to factor potential benefits into any risk calculation, presumably allowing for a greater risk of bad outcomes if the potential reward seems high. This suggests a more hands-off approach to regulating AI.

OMB listed the 10 AI principles agencies must apply in regulating AI in the private sector:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination

OMB also tells agencies to look at existing federal or state regulation that may be inconsistent with or duplicative of this federal policy, noting that agencies “may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”

OMB encouraged agencies to use “non-regulatory approaches” in the event existing regulations are sufficient or the benefits of regulation do not justify the costs. OMB counseled “[i]n these cases, the agency may consider either not taking any action or, instead, identifying non-regulatory approaches that may be appropriate to address the risk posed by certain AI applications” and provided examples of “non-regulatory approaches:”

  • Sector-Specific Policy Guidance or Frameworks
  • Pilot Programs and Experiments
  • Voluntary Consensus Standards
  • Voluntary Frameworks

As noted, the EO under which OMB is acting requires “that implementing agencies with regulatory authorities review their authorities relevant to AI applications and submit plans to OMB on achieving consistency with this Memorandum.” OMB directs:

The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities. For these collections, agencies should describe any statutory restrictions on the collection or sharing of information (e.g., confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information). The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency’s regulatory authorities. OMB also requests agencies to list and describe any planned or considered regulatory actions on AI. Appendix B provides a template for agency plans.

Earlier this year, the White House’s Office of Science and Technology Policy (OSTP) released a draft of “Guidance for Regulation of Artificial Intelligence Applications,” the OMB memorandum that would be issued to federal agencies as directed by Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence.” However, this memorandum is not aimed at how federal agencies use and deploy artificial intelligence (AI); rather, it “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory oversight of AI applications developed and deployed outside of the Federal government.” In short, if this draft is issued by OMB as written, federal agencies would need to adhere to the ten principles laid out in the document in regulating AI as part of their existing and future jurisdiction over the private sector. Not surprisingly, the Administration favors a light-touch approach intended to foster the growth of AI.

EO 13859 sets the AI policy of the government “to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.” The EO directed OMB and OSTP along with other Administration offices, to craft this draft memorandum for comment. OMB was to “issue a memorandum to the heads of all agencies that shall:

(i) inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy, and American values; and
(ii) consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.

A key regulator in a neighbor of the U.S. also weighed in on the proper regulation of AI from the vantage of privacy. The Office of the Privacy Commissioner of Canada (OPC) “released key recommendations…[that] are the result of a public consultation launched earlier this year.” OPC explained that it “launched a public consultation on our proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA).” OPC’s “working assumption was that legislative changes to PIPEDA are required to help reap the benefits of AI while upholding individuals’ fundamental right to privacy.” It is to be expected that a privacy regulator will see matters differently than a Republican White House, and so it is here.

In an introductory paragraph, the OPC spelled out the problems and dangers created by AI:

uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals, including whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are suspected of suspicious or unlawful behaviour. Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination. AI systems can also be used to influence, micro-target, and “nudge” individuals’ behaviour without their knowledge. Such practices can lead to troubling effects for society as a whole, particularly when used to influence democratic processes.

The OPC is focused on the potential for AI to be used, more effectively than current data processing, to predict, uncover, subvert, and influence the behavior of people in ways not readily apparent. There is also concern for another aspect of AI and other data processing that has long troubled privacy and human rights advocates: the potential for discriminatory treatment.

OPC asserted “an appropriate law for AI would:

  • Allow personal information to be used for new purposes towards responsible AI innovation and for societal benefits;
  • Authorize these uses within a rights based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • Create provisions specific to automated decision-making to ensure transparency, accuracy and fairness; and
  • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures through which the regulator would ensure compliance with the law.

However, the OPC does not entirely oppose the use of AI and is proposing exceptions to the general requirement under Canadian federal law that meaningful consent is required before data processing. The OPC is “recommending a series of new exceptions to consent that would allow the benefits of AI to be better achieved, but within a rights based framework.” OPC stated “[t]he intent is to allow for responsible, socially beneficial innovation, while ensuring individual rights are respected…[and] [w]e recommend exceptions to consent for the use of personal information for research and statistical purposes, compatible purposes, and legitimate commercial interests purposes.” However, the OPC is proposing a number of safeguards:

The proposed exceptions to consent must be accompanied by a number of safeguards to ensure their appropriate use. This includes a requirement to complete a privacy impact assessment (PIA), and a balancing test to ensure the protection of fundamental rights. The use of de-identified information would be required in all cases for the research and statistical purposes exception, and to the extent possible for the legitimate commercial interests exception.
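The de-identification safeguard described above can be pictured with a short sketch. The record fields, key handling, and function name below are this article’s own illustration, not anything prescribed by the OPC or PIPEDA; a real program would also address re-identification risk beyond removing direct identifiers.

```python
import hashlib
import hmac

# Hypothetical illustration of de-identification for the research and
# statistical purposes exception: direct identifiers are dropped, and a
# keyed pseudonym lets analysts link one person's records without
# learning who the person is. The key must be stored separately.
SECRET_KEY = b"rotate-and-store-this-key-separately"

def pseudonymize(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed."""
    # Stable pseudonym derived from the email via HMAC-SHA256.
    pseudonym = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    deidentified = {k: v for k, v in record.items() if k not in ("name", "email")}
    deidentified["pseudonym"] = pseudonym
    return deidentified

record = {"name": "Jane Doe", "email": "jane@example.com", "loan_amount": 12000}
clean = pseudonymize(record)
```

The same email always yields the same pseudonym, which is what makes linkage for statistical purposes possible while keeping identities out of the working dataset.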

Further, the OPC made the case that enshrining strong privacy rights in Canadian law would not obstruct the development of AI but would, in fact, speed its development:

  • A rights-based regime would not stand in the way of responsible innovation. In fact, it would help support responsible innovation and foster trust in the marketplace, giving individuals the confidence to fully participate in the digital age. In our 2018-2019 Annual Report to Parliament, our Office outlined a blueprint for what a rights-based approach to protecting privacy should entail. This rights-based approach runs through all of the recommendations in this paper.
  • While we propose that the law should allow for uses of AI for a number of new purposes as outlined, we have seen examples of unfair, discriminatory, and biased practices being facilitated by AI which are far removed from what is socially beneficial. Given the risks associated with AI, a rights based framework would help to ensure that it is used in a manner that upholds rights. Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.
  • Another important measure related to this human rights-based approach would be for the definition of personal information in PIPEDA to be amended to clarify that it includes inferences drawn about an individual. This is important, particularly in the age of AI, where individuals’ personal information can be used by organizations to create profiles and make predictions intended to influence their behaviour. Capturing inferred information clearly within the law is key for protecting human rights because inferences can often be drawn about an individual without their knowledge, and can be used to make decisions about them.

The OPC also called for a framework under which people could review and contest automated decisions:

we recommend that individuals be provided with two explicit rights in relation to automated decision-making. Specifically, they should have a right to a meaningful explanation of, and a right to contest, automated decision-making under PIPEDA. These rights would be exercised by individuals upon request to an organization. Organizations should be required to inform individuals of these rights through enhanced transparency practices to ensure individual awareness of the specific use of automated decision-making, as well as of their associated rights. This could include requiring notice to be provided separate from other legal terms.

The OPC also counseled that PIPEDA’s enforcement mechanism and incentives be changed:

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. In addition to the measures detailed below, this should be underpinned by a record keeping requirement similar to that in Article 30 of the GDPR. This record keeping requirement would be necessary to facilitate the OPC’s ability to conduct proactive inspections under PIPEDA, and for individuals to exercise their rights under the Act.

The OPC called for the following to ensure “demonstrable accountability:”

  • Integrating privacy and human rights into the design of AI algorithms and models is a powerful way to prevent negative downstream impacts on individuals. It is also consistent with modern legislation, such as the GDPR and Bill 64. PIPEDA should require organizations to design for privacy and human rights by requiring organizations to implement “appropriate technical and organizational measures” that implement PIPEDA requirements prior to and during all phases of collection and processing.
  • In light of the new proposed rights to explanation and contestation, organizations should be required to log and trace the collection and use of personal information in order to adequately fulfill these rights for the complex processing involved in AI. Tracing supports demonstrable accountability as it provides documentation that the regulator could consult through the course of an inspection or investigation, to determine the personal information fed into the AI system, as well as broader compliance.
  • Demonstrable accountability must include a model of assured accountability pursuant to which the regulator has the ability to proactively inspect an organization’s privacy compliance. In today’s world where business models are often opaque and information flows are increasingly complex, individuals are unlikely to file a complaint when they are unaware of a practice that might cause them harm. This challenge will only become more pronounced as information flows gain complexity with the continued development of AI.
  • The significant risks posed to privacy and human rights by AI systems require a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant. To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties.
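The logging and tracing obligation in the second bullet can be made concrete with a minimal, hypothetical trace record. The class and field names below are invented for illustration and do not come from the OPC’s recommendations.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# A toy sketch of "log and trace": every automated decision records the
# personal information fed into the model so a regulator or the affected
# individual can later reconstruct what the system relied on.
@dataclass
class DecisionTrace:
    subject_id: str      # pseudonymous identifier, not a name
    decision: str        # outcome communicated to the person
    inputs: dict         # personal information the model saw
    model_version: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_record(self) -> str:
        """Serialize the trace for an append-only audit log."""
        return json.dumps(asdict(self), sort_keys=True)

trace = DecisionTrace(
    subject_id="anon-4721",
    decision="loan_denied",
    inputs={"income": 41000, "postal_prefix": "K1A"},
    model_version="credit-model-2.3",
)
audit_line = trace.to_audit_record()
```

A log built from records like this is the kind of documentation the OPC envisions a regulator consulting during a proactive inspection.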

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Tetyana Kovyrina from Pexels

Bill To Reform IOT Security in U.S. Passes Congress

A long awaited bill to revamp how the U.S. government secures its IOT is on its way to the White House.

Last night, the Senate agreed to a House passed bill that would remake how the United States (U.S.) government buys Internet of Things (IOT) items, with the idea that requiring security standards in government IOT will drive greater security across the U.S. IOT market. Of course, such legislation, if implemented as intended, would also have the salutary effect of strengthening government networks. Incidentally, there is language in the bill that would seem to give the White House additional muscle to drive better information security across the civilian government.

The effort to pass this bill started in the last Congress and continued into this Congress. The bill will require the Office of Management and Budget (OMB) to set standards and practices that private sector contractors will need to meet in selling IOT to federal agencies. The OMB’s work is to be based on a series of IOT guidance documents the National Institute of Standards and Technology (NIST) has issued.

In September, the United States House of Representatives took up and passed a revised version of “Internet of Things Cybersecurity Improvement Act of 2020” (H.R. 1668) by voice vote. As noted, the United States Senate passed the same bill by unanimous consent yesterday, sending the legislation to the White House. While OMB did not issue a Statement of Administration Policy on H.R. 1668 or any of its previous iterations, Senate Republicans, particularly Majority Leader Mitch McConnell (R-KY), have not shown a willingness to even consider any bill the White House has not greenlit. Therefore, it may be reasonable to assume the President will sign this bill into law.

H.R. 1668 requires NIST to publish “standards and guidelines for the Federal Government on the appropriate use and management by agencies of Internet of Things devices owned or controlled by an agency and connected to information systems owned or controlled by an agency, including minimum information security requirements for managing cybersecurity risks associated with such devices.” These standards and guidelines are to be consistent with existing NIST standards and guidance on IOT, and the agency has issued a series of such documents described in some detail later in this article.

Six months after NIST issues such standards and guidelines, OMB must judge current agency standards and practices for IOT against NIST’s (excepting “national security systems,” meaning almost all of the Department of Defense and the Intelligence Community). OMB is required to then issue policies and principles necessary to rectify shortcomings in agency IOT security after consulting with the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA). At least once every five years after the initial policies and procedures are issued, OMB must revisit, assess, and adjust them as needed. Moreover, U.S. acquisition regulations must be amended to implement these standards and guidelines, meaning they would be binding in the purchase and use of IOT by civilian agencies.

NIST must also create and operate a system under which vulnerabilities and fixes in agency owned or operated IOT can be reported. OMB would oversee the establishment of this process, and DHS would administer the guidelines, possibly through its powers to issue Binding Operational Directives to federal civilian agencies.

Now, we come to a curious section of H.R. 1668 that may well have implications for technology the government buys or uses beyond just IOT. Within two years of becoming law, OMB, in consultation with DHS, must “develop and oversee the implementation of policies, principles, standards, or guidelines as may be necessary to address security vulnerabilities of information systems (including Internet of Things devices)” (emphasis added). This is a seemingly open-ended grant of authority for OMB to put in place binding policies and procedures for all information systems, a very broad term that encompasses information technology and other resources, across federal agencies. OMB already possesses the power and means to do much of this, raising the question of why such authority was needed. The bill is not clear on this point, and OMB may well use this additional authority in areas not strictly pertaining to IOT.

And now the hammer to drive better IOT security: civilian agencies will not be able to buy or use IOT until their Chief Information Officer (CIO) has certified that such IOT meets the aforementioned standards developed along the dual tracks the bill requires. There are, of course, loopholes to this requirement, since industry and agency stakeholders likely insisted on them. First, any purchase below the simplified acquisition threshold (currently $250,000) would be exempt from this requirement, and the agency could waive the need for the CIO to agree if

  • the waiver is necessary in the interest of national security;
  • procuring, obtaining, or using such device is necessary for research purposes; or
  • such device is secured using alternative and effective methods appropriate to the function of such device.

And so, these three grounds for waivers may be the exceptions that eat the rule. Time will tell.
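The gate and its exceptions can be summarized as a toy decision function; the function and argument names below are this article’s invention, not text from H.R. 1668.

```python
from typing import Optional

# A sketch of the bill's certification gate: a purchase proceeds only if
# it falls under the simplified acquisition threshold, the CIO certifies
# the device meets the NIST/OMB standards, or one of the three statutory
# waiver grounds applies.
SIMPLIFIED_ACQUISITION_THRESHOLD = 250_000  # current dollar figure

WAIVER_GROUNDS = {
    "national_security",    # waiver necessary in the interest of national security
    "research",             # device necessary for research purposes
    "alternative_methods",  # device secured by alternative, effective methods
}

def purchase_allowed(cost: int, cio_certified: bool,
                     waiver: Optional[str] = None) -> bool:
    if cost < SIMPLIFIED_ACQUISITION_THRESHOLD:
        return True  # below-threshold purchases are exempt
    if cio_certified:
        return True  # CIO has certified the device meets the standards
    return waiver in WAIVER_GROUNDS  # otherwise a waiver ground must apply
```

Written this way, it is easy to see why the three waiver grounds could swallow the rule: any one of them bypasses the certification entirely.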

In June, the Senate and House committees of jurisdiction marked up their versions of the “Internet of Things (IOT) Cybersecurity Improvement Act of 2020” (H.R. 1668/S. 734). The bill text as released in March 2019 was identical for both bills, signaling agreement between the two chambers’ sponsors, but the process of marking up the bills resulted in different versions, requiring negotiation on a final bill. The House Oversight and Reform Committee marked up and reported out H.R. 1668 after adopting an amendment in the nature of a substitute that narrowed the scope of the bill and is more directive than the bill initially introduced in March. The Senate Homeland Security Committee marked up S. 734 a week later, making its own changes from the March bill. The March version of the legislation unified two similar bills from the 115th Congress: the “Internet of Things (IOT) Cybersecurity Improvement Act of 2017” (S. 1691) and the “Internet of Things (IOT) Federal Cybersecurity Improvement Act of 2018” (H.R. 7283).

Per the Committee Report for S. 734, the purpose of the bill

is to proactively mitigate the risks posed by inadequately-secured Internet of Things (IOT) devices through the establishment of minimum security standards for IOT devices purchased by the Federal Government. The bill codifies the ongoing work of the NIST to develop standards and guidelines, including minimum-security requirements, for the use of IOT devices by Federal agencies. The bill also directs OMB, in consultation with DHS, to issue the necessary policies and principles to implement the NIST standards and guidelines on IOT security and management. Additionally, the bill requires NIST, in consultation with cybersecurity researchers and industry experts, to publish guidelines for the reporting, coordinating, publishing, and receiving of information about Federal agencies’ security vulnerabilities and the coordinate resolutions of the reported vulnerabilities. OMB will provide the policies and principles and DHS will develop and issue the procedures necessary to implement NIST’s guidelines on coordinated vulnerability disclosure for Federal agencies. The bill includes a provision allowing Federal agency heads to waive the IOT use and management requirements issued by OMB for national security, functionality, alternative means, or economic reasons.

According to a staff memorandum, H.R. 1668

would require the NIST to develop guidelines for managing cybersecurity risks of IOT devices by June 30, 2020. The bill would require OMB to issue standards for implementing those guidelines by December 31, 2020. The bill also would require similar guidelines from NIST and standards from OMB on reporting, coordinating, and publishing security vulnerabilities of IOT devices.

As noted earlier, NIST has worked on and published a suite of guidance documents on IOT. In June, NIST published final guidance as part of its follow-up to A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats and NIST’s Botnet Roadmap. Neither document is binding on federal agencies or private sector entities, but given the respect the agency enjoys, they will likely be referenced extensively in other standards.

NIST explained the two publications in a blog post and in the documents themselves.

In NISTIR 8259A, NIST explained the purpose of the publication as defining an “IOT device cybersecurity capability core baseline, which is a set of device capabilities generally needed to support common cybersecurity controls that protect an organization’s devices as well as device data, systems, and ecosystems.” NIST stated “[t]he purpose of this publication is to provide organizations a starting point to use in identifying the device cybersecurity capabilities for new IOT devices they will manufacture, integrate, or acquire…[and] can be used in conjunction with NISTIR 8259, Foundational Cybersecurity Activities for IOT Device Manufacturers.”

NIST further explained how the core baseline was developed:

  • The IOT device cybersecurity capability core baseline (core baseline) defined in this publication is a set of device capabilities generally needed to support commonly used cybersecurity controls that protect devices as well as device data, systems, and ecosystems.
  • The core baseline has been derived from researching common cybersecurity risk management approaches and commonly used capabilities for addressing cybersecurity risks to IOT devices, which were refined and validated using a collaborative public-private process to incorporate all viewpoints.
  • Regardless of an organization’s role, this baseline is intended to give all organizations a starting point for IOT device cybersecurity risk management, but the implementation of all capabilities is not considered mandatory. The individual capabilities in the baseline may be implemented in full, in part, or not at all. It is left to the implementing organization to understand the unique risk context in which it operates and what is appropriate for its given circumstance.

NIST 8259 is designed to “give manufacturers recommendations for improving how securable the IOT devices they make are…[and] [t]his means the IOT devices offer device cybersecurity capabilities—cybersecurity features or functions the devices provide through their own technical means (i.e., device hardware and software)—that customers, both organizations and individuals, need to secure the devices when used within their systems and environments.”

NIST stated “[t]his publication describes six recommended foundational cybersecurity activities that manufacturers should consider performing to improve the securability of the new IOT devices they make…[and] [f]our of the six activities primarily impact decisions and actions performed by the manufacturer before a device is sent out for sale (pre-market), and the remaining two activities primarily impact decisions and actions performed by the manufacturer after device sale (post-market).” NIST claimed “[p]erforming all six activities can help manufacturers provide IOT devices that better support the cybersecurity-related efforts needed by IOT device customers, which in turn can reduce the prevalence and severity of IOT device compromises and the attacks performed using compromised IOT devices.” NIST asserted “[t]hese activities are intended to fit within a manufacturer’s existing development process and may already be achieved in whole or part by that existing process.”

In June 2019, NIST issued “Considerations for Managing Internet of Things (IOT) Cybersecurity and Privacy Risks” (NISTIR 8228), which is designed “to help organizations better understand and manage the cybersecurity and privacy risks associated with individual IOT devices throughout the devices’ lifecycles.” The agency claims the publication “provides insights to inform organizations’ risk management processes” and that “[a]fter reading this publication, an organization should be able to improve the quality of its risk assessments for IOT devices and its response to the identified risk through the lens of cybersecurity and privacy.” It bears noting that from the onset of tackling IOT standards, NIST paired cybersecurity and privacy, unlike its Cybersecurity Framework, which addresses privacy as an important but ancillary concern to cybersecurity.

NIST explained that NIST Interagency or Internal Report 8228: Considerations for Managing Internet of Things (IOT) Cybersecurity and Privacy Risks is aimed at “personnel at federal agencies with responsibilities related to managing cybersecurity and privacy risks for IOT devices, although personnel at other organizations may also find value in the content.” NIST stated that “[t]his publication emphasizes what makes managing these risks different for IOT devices in general, including consumer, enterprise, and industrial IOT devices, than conventional information technology (IT) devices…[and] omits all aspects of risk management that are largely the same for IOT and conventional IT, including all aspects of risk management beyond the IOT devices themselves, because these are already addressed by many other risk management publications.”

NIST explained that “[t]his publication identifies three high-level considerations that may affect the management of cybersecurity and privacy risks for IOT devices as compared to conventional IT devices:

1. Many IOT devices interact with the physical world in ways conventional IT devices usually do not. The potential impact of some IOT devices making changes to physical systems and thus affecting the physical world needs to be explicitly recognized and addressed from cybersecurity and privacy perspectives. Also, operational requirements for performance, reliability, resilience, and safety may be at odds with common cybersecurity and privacy practices for conventional IT devices.

2. Many IOT devices cannot be accessed, managed, or monitored in the same ways conventional IT devices can. This can necessitate doing tasks manually for large numbers of IOT devices, expanding staff knowledge and tools to include a much wider variety of IOT device software, and addressing risks with manufacturers and other third parties having remote access or control over IOT devices.

3. The availability, efficiency, and effectiveness of cybersecurity and privacy capabilities are often different for IOT devices than conventional IT devices. This means organizations may have to select, implement, and manage additional controls, as well as determine how to respond to risk when sufficient controls for mitigating risk are not available.

NIST laid out “[c]ybersecurity and privacy risks for IOT devices can be thought of in terms of three high-level risk mitigation goals:

1. Protect device security. In other words, prevent a device from being used to conduct attacks, including participating in distributed denial of service (DDoS) attacks against other organizations, and eavesdropping on network traffic or compromising other devices on the same network segment. This goal applies to all IOT devices.

2. Protect data security. Protect the confidentiality, integrity, and/or availability of data (including personally identifiable information [PII]) collected by, stored on, processed by, or transmitted to or from the IOT device. This goal applies to each IOT device except those without any data that needs protection.

3. Protect individuals’ privacy. Protect individuals’ privacy impacted by PII processing beyond risks managed through device and data security protection. This goal applies to all IOT devices that process PII or that directly or indirectly impact individuals.


Photo by Free Creative Stuff from Pexels

Further Reading, Other Developments, and Coming Events (2 November)

Further Reading

  •  “Harris target of more misinformation than Pence, data shows” By Amanda Seitz — Associated Press News. Given the hostile treatment women and minorities in the United States face on social media, it is not a surprise that Senator Kamala Harris (D-CA) has faced a barrage of sexist, racist, and xenophobic invective online.
  • “The Untold Technological Revolution Sweeping Through Rural China” By Clive Thompson — The New York Times. In a review of Xiaowei Wang’s new book, “Blockchain Chicken Farm,” one learns that the People’s Republic of China (PRC) is facing a bifurcated society of haves and have-nots, largely because of the boom in technology, just like the United States.
  • “DHS plans largest operation to secure U.S. election against hacking” By Joseph Marks — The Washington Post. Looking to avert a repeat of 2016, the United States’ (U.S.) Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) is expecting to be on high alert and will stand up its capabilities through Election Day and beyond until winners have been declared. Not only will the agency’s technical capabilities be brought to bear, CISA will also look to liaise with the media regularly to tamp down any panic arising from reports of hacking or interference. And it is expected that CISA’s relationship building with state and local officials will help speed action on any cyber intelligence the agency pushes out.
  • “The Tech Antitrust Problem No One Is Talking About” By Tom Simonite — WIRED. The United States’ (U.S.) four dominant broadband providers, Verizon, Comcast, Charter Communications, and AT&T, appear to be providing inferior service at higher prices than the broadband available in other advanced nations. The pandemic has, of course, focused more people on the lack of high-speed broadband for many Americans. But the dominance of broadband providers has flown under the radar from an antitrust and competition perspective. This could change in a Biden Administration.
  • “‘Tsunamis of Misinformation’ Overwhelm Local Election Officials” By Kellen Browning and Davey Alba — The New York Times. State and local officials are struggling, in terms of human resources and capability, to address the wave of misinformation and disinformation about the election and its procedures being spewed across social media.

Other Developments

  • The United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the Department of Health and Human Services (HHS) released a joint advisory titled “Ransomware Activity Targeting the Healthcare and Public Health Sector.” The advisory “describes the tactics, techniques, and procedures (TTPs) used by cybercriminals against targets in the Healthcare and Public Health (HPH) Sector to infect systems with ransomware, notably Ryuk and Conti, for financial gain.” The agencies’ key findings include:
    • CISA, FBI, and HHS assess malicious cyber actors are targeting the HPH Sector with TrickBot and BazarLoader malware, often leading to ransomware attacks, data theft, and the disruption of healthcare services.
    • These issues will be particularly challenging for organizations within the COVID-19 pandemic; therefore, administrators will need to balance this risk when determining their cybersecurity investments.
  • The National Institute of Standards and Technology (NIST) published a companion guidance document to accompany the major update to guidance issued in September that federal agencies and federal contractors must follow. NIST’s Control Baselines for Information Systems and Organizations, NIST Special Publication (SP) 800-53B, a companion publication to SP 800-53 Revision 5, “establishes security and privacy control baselines for federal information systems and organizations and provides tailoring guidance for those baselines.” NIST explained “[i]mplementation of a minimum set of controls selected from NIST SP 800-53, Revision 5 is mandatory to protect federal information and information systems in accordance with the Office of Management and Budget (OMB) Circular A-130 and the provisions of the Federal Information Security Modernization Act” (FISMA). NIST added that while “the privacy control baseline is not mandated by law or OMB A-130, SP 800-53B—along with other supporting NIST publications—is designed to help organizations identify the security and privacy controls needed to manage risk and to satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974, selected OMB policies, and designated Federal Information Processing Standards (FIPS), among others.”
  • The United Kingdom’s (UK) Information Commissioner’s Office (ICO) has issued its third significant fine in a few weeks: an £18.4 million penalty against Marriott International Inc under the General Data Protection Regulation (GDPR). Because the GDPR came into force in May 2018, only a portion of the data breach, which dates back to 2014, falls under the EU’s data protection law. The ICO also finished its investigation and levied its fine before the UK’s exit from the European Union (EU) takes full effect. A few weeks ago, the ICO levied a £20 million fine on British Airways “for failing to protect the personal and financial details of more than 400,000 of its customers.” More recently, the ICO completed its investigation into the data brokering practices of Equifax, Transunion, and Experian and found widespread privacy and data protection violations.
    • The ICO originally proposed a £99 million fine on Marriott, but like the British Airways fine, it was dramatically revised downward, in part, because of the pandemic’s effect on the company.
    • In its investigation of Marriott, the ICO found:
      • Marriott estimates that 339 million guest records worldwide were affected following a cyber-attack in 2014 on Starwood Hotels and Resorts Worldwide Inc. The attack, from an unknown source, remained undetected until September 2018, by which time the company had been acquired by Marriott. 
      • The personal data involved differed between individuals but may have included names, email addresses, phone numbers, unencrypted passport numbers, arrival/departure information, guests’ VIP status and loyalty programme membership number.
      • The precise number of people affected is unclear as there may have been multiple records for an individual guest. Seven million guest records related to people in the UK.
      • The ICO’s investigation found that there were failures by Marriott to put appropriate technical or organisational measures in place to protect the personal data being processed on its systems…
      • Because the breach happened before the UK left the EU, the ICO investigated on behalf of all EU authorities as lead supervisory authority under the GDPR. The penalty and action have been approved by the other EU DPAs through the GDPR’s cooperation process.
      • In July 2019, the ICO issued Marriott with a notice of intent to fine. As part of the regulatory process, the ICO considered representations from Marriott, the steps Marriott took to mitigate the effects of the incident and the economic impact of COVID-19 on their business before setting a final penalty.
  • Five Democratic Senators wrote the United States’ (U.S.) Department of Homeland Security’s Office of the Inspector General (OIG) requesting an investigation of “warrantless domestic surveillance of phones by Customs and Border Protection (CBP).” Senators Ron Wyden (D-OR), Sherrod Brown (D-OH), Elizabeth Warren (D-MA), Ed Markey (D-MA), and Brian Schatz (D-HI) stated:
    • According to public government contracts, CBP has spent nearly half a million dollars for subscriptions to a commercial database provided by a government contractor named Venntel, containing location data collected from millions of Americans’ mobile phones. In an oversight call with Senate staff on September 16, 2020, CBP officials confirmed the agency’s use of this surveillance product, without a court order, in order to track and identify people in the United States.
    • The Senators asserted:
      • CBP is not above the law and it should not be able to buy its way around the Fourth Amendment. Accordingly, we urge you to investigate CBP’s warrantless use of commercial databases containing Americans’ information, including but not limited to Venntel’s location database. We urge you to examine what legal analysis, if any, CBP’s lawyers performed before the agency started to use this surveillance tool. We also request that you determine how CBP was able to begin operational use of Venntel’s location database without the Department of Homeland Security Privacy Office first publishing a Privacy Impact Assessment.
  • The United States Patent and Trademark Office (USPTO) published “Public Views on Artificial Intelligence and Intellectual Property Policy” on the basis of two rounds of comments on artificial intelligence (AI), patents, and intellectual property (IP). The USPTO said a key priority “is to maintain United States leadership in innovation, especially in emerging technologies, including AI.” The USPTO stated “[t]o further this goal, the USPTO has been actively engaging with the innovation community and experts in AI to promote the understanding and reliability of intellectual property (IP) rights in relation to AI technology…[and] is working to ensure that appropriate IP incentives are in place to encourage further innovation in and around this critical area.”
    • The USPTO stated “[f]rom the synthesis of the public comments, a number of themes emerged:
      • General Themes
        • Many comments addressed the fact that AI has no universally recognized definition. Due to the wide-ranging definitions of the term, often comments urged caution with respect to specific IP policymaking in relation to AI.
        • The majority of public commenters, while not offering definitions of AI, agreed that the current state of the art is limited to “narrow” AI. Narrow AI systems are those that perform individual tasks in well-defined domains (e.g., image recognition, translation, etc.). The majority viewed the concept of artificial general intelligence (AGI)— intelligence akin to that possessed by humankind and beyond—as merely a theoretical possibility that could arise in a distant future.
        • Based on the majority view that AGI has not yet arrived, the majority of comments suggested that current AI could neither invent nor author without human intervention. The comments suggested that human beings remain integral to the operation of AI, and this is an important consideration in evaluating whether IP law needs modification in view of the current state of AI technology.
        • Across all IP topics, a majority of public commenters expressed a general sense that the existing U.S. intellectual property laws are calibrated correctly to address the evolution of AI. However, commenters appear split as to whether any new classes of IP rights would be beneficial to ensure a more robust IP system.
  • New Zealand’s Office of the Privacy Commissioner (OPC) has released more materials in the run-up to the 1 December effective date of the Privacy Act 2020.
  • The Office of the Privacy Commissioner of Canada (OPC) announced it “has opened investigations into recent cyber security incidents involving attacks on Government of Canada online service accounts.” The Privacy Commissioner initiated the two investigations and “will examine whether the government institutions met their obligations under the Privacy Act, the federal public sector privacy law.” The OPC explained:
    • One investigation will focus on cyberattacks on the GCKey, an electronic credential issued by the government and used by federal institutions to provide individuals and organizations with access to online services. It relates to Shared Services Canada, which issues the GCKey, and federal government departments affected by the attacks on the GCKey.
    • The second investigation relates to cyberattacks on Canada Revenue Agency accounts. The incidents involved “credential stuffing,” where hackers use passwords and usernames collected from previous breaches to take advantage of the fact that many people use the same passwords and usernames for various accounts.
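The “credential stuffing” technique at issue in the Canada Revenue Agency incident is mechanically simple: attackers replay username/password pairs leaked from one service against another, succeeding wherever users reused credentials. A minimal sketch of those mechanics (every account name and password below is invented for illustration):

```python
# Illustration of credential stuffing: credentials leaked from one
# service are replayed against another, succeeding wherever users
# reused the same username/password pair. All data here is invented.

# Accounts on the targeted service (username -> password).
target_accounts = {
    "alice@example.com": "Winter2019!",
    "bob@example.com": "hunter2",
    "carol@example.com": "unique-and-long-passphrase",
}

# Credential pairs dumped from an unrelated breached site.
breached_credentials = [
    ("alice@example.com", "Winter2019!"),  # reused -> compromised
    ("bob@example.com", "letmein"),        # different password -> safe
    ("dave@example.com", "qwerty"),        # no account here -> miss
]

def stuffing_attack(accounts, leaked):
    """Return the usernames whose leaked credentials still work."""
    return [user for user, pw in leaked if accounts.get(user) == pw]

compromised = stuffing_attack(target_accounts, breached_credentials)
print(compromised)  # -> ['alice@example.com']
```

Only the reused credential succeeds, which is why multi-factor authentication and screening new passwords against known breach corpora are the standard mitigations.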
  • Microsoft is claiming that it foiled Iranian cyberattacks targeting potential attendees of two high-profile international conferences. In a blog posting, Microsoft stated “we’re sharing that we have detected and worked to stop a series of cyberattacks from the threat actor Phosphorus masquerading as conference organizers to target more than 100 high-profile individuals.” Microsoft claimed that “Phosphorus, an Iranian actor, has targeted with this scheme potential attendees of the upcoming Munich Security Conference and the Think 20 (T20) Summit in Saudi Arabia.”
    • Microsoft contended:
      • The attackers have been sending possible attendees spoofed invitations by email. The emails use near-perfect English and were sent to former government officials, policy experts, academics and leaders from non-governmental organizations. Phosphorus helped assuage fears of travel during the Covid-19 pandemic by offering remote sessions.
      • We believe Phosphorus is engaging in these attacks for intelligence collection purposes. The attacks were successful in compromising several victims, including former ambassadors and other senior policy experts who help shape global agendas and foreign policies in their respective countries.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

“Awareness is Key” by Abraham Pena is licensed under CC BY 4.0

Further Reading, Other Developments, and Coming Events (7 October)

Coming Events

  • The European Union Agency for Cybersecurity (ENISA), Europol’s European Cybercrime Centre (EC3) and the Computer Emergency Response Team for the EU Institutions, Bodies and Agencies (CERT-EU) will hold the 4th annual IoT Security Conference series “to raise awareness on the security challenges facing the Internet of Things (IoT) ecosystem across the European Union:”
    • Artificial Intelligence – 14 October at 15:00 to 16:30 CET
    • Supply Chain for IoT – 21 October at 15:00 to 16:30 CET
  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • On 29 October, the Federal Trade Commission (FTC) will hold a workshop titled “Green Lights & Red Flags: FTC Rules of the Road for Business” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”

Other Developments

  • Consumer Reports released a study on the “California Consumer Privacy Act” (CCPA) (AB 375), specifically on the Do-Not-Sell right California residents were given under the newly effective privacy statute. For those people (like me) who expected a significant number of businesses to make it hard for people to exercise their rights, this study confirms that suspicion. Consumer Reports noted more than 40% of data brokers had hard-to-find links or extra, complicated steps for people to tell them not to sell their personal information.
    • In “CCPA: Are Consumers Digital Rights Protected?,” Consumer Reports used this methodology:
    • Consumer Reports’ Digital Lab conducted a mixed methods study to examine whether the new CCPA is working for consumers. This study focused on the Do-Not-Sell (DNS) provision in the CCPA, which gives consumers the right to opt out of the sale of their personal information to third parties through a “clear and conspicuous link” on the company’s homepage. As part of the study, 543 California residents made DNS requests to 214 data brokers listed in the California Attorney General’s data broker registry. Participants reported their experiences via survey.
    • Consumer Reports found:
      • Consumers struggled to locate the required links to opt out of the sale of their information. For 42.5% of sites tested, at least one of three testers was unable to find a DNS link. All three testers failed to find a “Do Not Sell” link on 12.6% of sites, and in several other cases one or two of three testers were unable to locate a link.
        • Follow-up research focused on the sites in which all three testers did not find the link revealed that at least 24 companies on the data broker registry do not have the required DNS link on their homepage.
        • All three testers were unable to find the DNS links for five additional companies, though follow-up research revealed that the companies did have DNS links on their homepages. This also raises concerns about compliance, since companies are required to post the link in a “clear and conspicuous” manner.
      • Many data brokers’ opt-out processes are so onerous that they have substantially impaired consumers’ ability to opt out, highlighting serious flaws in the CCPA’s opt-out model.
        • Some DNS processes involved multiple, complicated steps to opt out, including downloading third-party software.
        • Some data brokers asked consumers to submit information or documents that they were reluctant to provide, such as a government ID number, a photo of their government ID, or a selfie.
        • Some data brokers confused consumers by requiring them to accept cookies just to access the site.
        • Consumers were often forced to wade through confusing and intimidating disclosures to opt out.
        • Some consumers spent an hour or more on a request.
        • At least 14% of the time, burdensome or broken DNS processes prevented consumers from exercising their rights under the CCPA.
      • At least one data broker used information provided for a DNS request to add the user to a marketing list, in violation of the CCPA.
      • At least one data broker required the user to set up an account to opt out, in violation of the CCPA.
      • Consumers often didn’t know if their opt-out request was successful. Neither the CCPA nor the CCPA regulations require companies to notify consumers when their request has been honored. About 46% of the time, consumers were left waiting or unsure about the status of their DNS request.
      • About 52% of the time, the tester was “somewhat dissatisfied” or “very dissatisfied” with the opt-out processes.
      • On the other hand, some consumers reported that it was quick and easy to opt out, showing that companies can make it easier for consumers to exercise their rights under the CCPA. About 47% of the time, the tester was “somewhat satisfied” or “very satisfied” with the opt-out process.
    • Consumer Reports recommended:
      • The Attorney General should vigorously enforce the CCPA to address noncompliance.
      • To make it easier to exercise privacy preferences, consumers should have access to browser privacy signals that allow them to opt out of all data sales in one step.
      • The AG should more clearly prohibit dark patterns, which are user interfaces that subvert consumer intent, and design a uniform opt-out button. This will make it easier for consumers to locate the DNS link on individual sites.
      • The AG should require companies to notify consumers when their opt-out requests have been completed, so that consumers can know that their information is no longer being sold.
      • The legislature or AG should clarify the CCPA’s definitions of “sale” and “service provider” to more clearly cover data broker information sharing.
      • Privacy should be protected by default. Rather than place the burden on consumers to exercise privacy rights, the law should require reasonable data minimization, which limits the collection, sharing, retention, and use to what is reasonably necessary to operate the service.
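The study’s threshold test, whether a homepage carries the required “Do Not Sell” link at all, can be approximated in code. The sketch below is a simplified, hypothetical check using only the Python standard library; it is not Consumer Reports’ actual methodology, and a real audit would also render JavaScript, check link placement, and follow the link:

```python
from html.parser import HTMLParser

class DNSLinkFinder(HTMLParser):
    """Collects the text of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self._in_anchor = False
        self._buffer = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_anchor = True
            self._buffer = []

    def handle_data(self, data):
        if self._in_anchor:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == "a":
            self.links.append("".join(self._buffer).strip())
            self._in_anchor = False

def has_dns_link(html):
    """True if any anchor text contains the CCPA 'Do Not Sell' phrasing."""
    finder = DNSLinkFinder()
    finder.feed(html)
    return any("do not sell" in text.lower() for text in finder.links)

# Hypothetical homepage footer, invented for illustration.
homepage = ('<footer><a href="/privacy">Privacy Policy</a>'
            '<a href="/ccpa">Do Not Sell My Personal Information</a></footer>')
print(has_dns_link(homepage))  # -> True
```

A check like this only measures the first failure mode the study documents (a missing or hidden link); the onerous downstream opt-out steps the testers encountered cannot be detected from the homepage alone.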
  • Two agencies of the Department of the Treasury have issued guidance regarding the advisability and legality of paying ransoms to individuals or entities under United States (U.S.) sanctions at a time when ransomware attacks are on the rise. It bears noting that a person or entity in the U.S. may face criminal and civil liability for paying a sanctioned ransomware entity even if it did not know the entity was sanctioned. One of the agencies reasoned that paying ransoms to such parties is contrary to U.S. national security policy and only encourages more ransomware attacks.
    • The Office of Foreign Assets Control (OFAC) issued an “advisory to highlight the sanctions risks associated with ransomware payments related to malicious cyber-enabled activities.” OFAC added:
      • Demand for ransomware payments has increased during the COVID-19 pandemic as cyber actors target online systems that U.S. persons rely on to continue conducting business. Companies that facilitate ransomware payments to cyber actors on behalf of victims, including financial institutions, cyber insurance firms, and companies involved in digital forensics and incident response, not only encourage future ransomware payment demands but also may risk violating OFAC regulations. This advisory describes these sanctions risks and provides information for contacting relevant U.S. government agencies, including OFAC, if there is a reason to believe the cyber actor demanding ransomware payment may be sanctioned or otherwise have a sanctions nexus.
    • Financial Crimes Enforcement Network (FinCEN) published its “advisory to alert financial institutions to predominant trends, typologies, and potential indicators of ransomware and associated money laundering activities. This advisory provides information on:
      • (1) the role of financial intermediaries in the processing of ransomware payments;
      • (2) trends and typologies of ransomware and associated payments;
      • (3) ransomware-related financial red flag indicators; and
      • (4) reporting and sharing information related to ransomware attacks.
  • The Government Accountability Office (GAO) found uneven implementation at seven federal agencies of the Office of Management and Budget’s (OMB) requirements for using the category management initiative in buying information technology (IT). This report follows a long line of assessments finding that the federal government does not get maximum value from the billions of dollars it invests in IT. The category management initiative was launched two Administrations ago to drive greater efficiency and savings in the nearly $350 billion the U.S. government spends annually on goods and services, much of which could be bought in large quantities rather than piecemeal, agency by agency, as is now the case.
    • The chair and ranking member of the House Oversight Committee and other Members had asked the GAO “to conduct a review of federal efforts to reduce IT contract duplication and/or waste” specifically “to determine the extent to which (1) selected agencies’ efforts to prevent, identify, and reduce duplicative or wasteful IT contracts were consistent with OMB’s category management initiative; and (2) these efforts were informed by spend analyses.” The GAO ended up looking at the Departments of Agriculture (USDA), Defense (DOD), Health and Human Services (HHS), Homeland Security (DHS), Justice (DOJ), State (State), and Veterans Affairs (VA).
    • The GAO found:
      • The seven agencies in our review varied in their implementation of OMB’s category management activities that contribute to identifying, preventing, and reducing duplicative IT contracts. Specifically, most of the agencies fully implemented the two activities to identify a Senior Accountable Official and develop processes and policies for implementing category management efforts, and to engage their workforces in category management training. However, only about half the agencies fully implemented the activities to reduce unaligned IT spending, including increasing the use of Best in Class contract solutions, and share prices paid, terms, and conditions for purchased IT goods and services. Agencies cited several reasons for their varied implementation, including that they were still working to define how to best integrate category management into the agency.
      • Most of the agencies used spend analyses to inform their efforts to identify and reduce duplication, and had developed and implemented strategies to address the identified duplication, which, agency officials reported, resulted in millions in actual and anticipated future savings. However, two of these agencies did not make regular use of the spend analyses.
      • Until agencies fully implement the activities in OMB’s category management initiative, and make greater use of spend analyses to inform their efforts to identify and reduce duplicative contracts, they will be at increased risk of wasteful spending. Further, agencies will miss opportunities to identify and realize savings of potentially hundreds of millions of dollars.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) provided “specific Chinese government and affiliated cyber threat actor tactics, techniques, and procedures (TTPs) and recommended mitigations to the cybersecurity community to assist in the protection of our Nation’s critical infrastructure.” CISA took this action “[i]n light of heightened tensions between the United States and China.”
    • CISA asserted:
      • According to open-source reporting, offensive cyber operations attributed to the Chinese government targeted, and continue to target, a variety of industries and organizations in the United States, including healthcare, financial services, defense industrial base, energy, government facilities, chemical, critical manufacturing (including automotive and aerospace), communications, IT, international trade, education, videogaming, faith-based organizations, and law firms.
    • CISA recommends organizations take the following actions:
      • Adopt a state of heightened awareness. Minimize gaps in personnel availability, consistently consume relevant threat intelligence, and update emergency call trees.
      • Increase organizational vigilance. Ensure security personnel monitor key internal security capabilities and can identify anomalous behavior. Flag any known Chinese indicators of compromise (IOCs) and TTPs for immediate response.
      • Confirm reporting processes. Ensure personnel know how and when to report an incident. The well-being of an organization’s workforce and cyber infrastructure depends on awareness of threat activity. Consider reporting incidents to CISA to help serve as part of CISA’s early warning system (see the Contact Information section below).
      • Exercise organizational incident response plans. Ensure personnel are familiar with the key steps they need to take during an incident. Do they have the accesses they need? Do they know the processes? Are various data sources logging as expected? Ensure personnel are positioned to act in a calm and unified manner.
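CISA’s advice to “[f]lag any known Chinese indicators of compromise (IOCs)” reduces, at its simplest, to matching observed artifacts (IP addresses, domains, file hashes) in logs against a curated indicator set. A minimal sketch of that matching step, using invented placeholder indicators rather than real threat intelligence:

```python
# Minimal IOC matcher: flag log entries containing any known
# indicator of compromise. Indicator values below are invented
# placeholders (RFC 5737 example ranges and dummy values), not
# real threat intelligence.

known_iocs = {
    "203.0.113.77",                      # placeholder IP address
    "malicious.example.net",             # placeholder domain
    "d41d8cd98f00b204e9800998ecf8427e",  # placeholder file hash
}

def flag_entries(log_lines, iocs):
    """Return (line_number, matched_ioc) pairs for immediate response."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for ioc in iocs:
            if ioc in line:
                hits.append((lineno, ioc))
    return hits

logs = [
    "GET /index.html from 198.51.100.9",
    "DNS query for malicious.example.net",
    "download hash=d41d8cd98f00b204e9800998ecf8427e",
]
print(flag_entries(logs, known_iocs))
```

In practice this substring scan would be replaced by structured matching against a threat-intelligence feed, but the workflow CISA describes, consume indicators, match them against telemetry, and escalate hits, is the same.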
  • The Supreme Court of the United States (SCOTUS) declined to hear a challenge to an Illinois revenge porn law that the Illinois Supreme Court upheld, finding it did not impinge on the defendant’s First Amendment rights. Bethany Austin was charged with a felony under an Illinois law barring the nonconsensual dissemination of private sexual images after she printed and distributed pictures of her ex-fiancé’s lover. Because SCOTUS declined to hear the case, the Illinois Supreme Court’s ruling stands and the law remains in effect.
    • The Illinois State Supreme Court explained the facts of the case:
      • Defendant (aka Bethany Austin) was engaged to be married to Matthew, after the two had dated for more than seven years. Defendant and Matthew lived together along with her three children. Defendant shared an iCloud account with Matthew, and all data sent to or from Matthew’s iPhone went to their shared iCloud account, which was connected to defendant’s iPad. As a result, all text messages sent by or to Matthew’s iPhone automatically were received on defendant’s iPad. Matthew was aware of this data sharing arrangement but took no action to disable it.
      • While Matthew and defendant were engaged and living together, text messages between Matthew and the victim, who was a neighbor, appeared on defendant’s iPad. Some of the text messages included nude photographs of the victim. Both Matthew and the victim were aware that defendant had received the pictures and text messages on her iPad. Three days later, Matthew and the victim again exchanged several text messages. The victim inquired, “Is this where you don’t want to message [because] of her?” Matthew responded, “no, I’m fine. [S]omeone wants to sit and just keep watching want [sic] I’m doing I really do not care. I don’t know why someone would wanna put themselves through that.” The victim replied by texting, “I don’t either. Soooooo baby ….”
      • Defendant and Matthew cancelled their wedding plans and subsequently broke up. Thereafter, Matthew began telling family and friends that their relationship had ended because defendant was crazy and no longer cooked or did household chores.
      • In response, defendant wrote a letter detailing her version of events. As support, she attached to the letter four of the naked pictures of the victim and copies of the text messages between the victim and Matthew. When Matthew’s cousin received the letter along with the text messages and pictures, he informed Matthew.
      • Upon learning of the letter and its enclosures, Matthew contacted the police. The victim was interviewed during the ensuing investigation and stated that the pictures were private and only intended for Matthew to see. The victim acknowledged that she was aware that Matthew had shared an iCloud account with defendant, but she thought it had been deactivated when she sent him the nude photographs.
    • In her petition for SCOTUS to hear her case, Austin asserted:
      • Petitioner Bethany Austin is being prosecuted under Illinois’ revenge porn law even though she is far from the type of person such laws were intended to punish. These laws proliferated rapidly in recent years because of certain reprehensible practices, such as ex-lovers widely posting images of their former mates to inflict pain for a bad breakup, malicious stalkers seeking to damage an innocent person’s reputation, or extortionists using intimate photos to collect ransom. Austin did none of those things, yet is facing felony charges because she tried to protect her reputation from her former fiancé’s lies about the reason their relationship ended.
      • The Illinois Supreme Court rejected Petitioner’s constitutional challenge to the state revenge porn law only because it ignored well-established First Amendment rules: It subjected the law only to intermediate, rather than strict scrutiny, because it incorrectly classified a statute that applies only to sexual images as content neutral; it applied diminished scrutiny because the speech at issue was deemed not to be a matter of public concern; and it held the law need not require a showing of malicious intent to justify criminal penalties, reasoning that such intent can be inferred from the mere fact that the specified images were shared. Each of these conclusions contradicts First Amendment principles recently articulated by this Court, and also is inconsistent with decisions of various state courts, including the Vermont Supreme Court.
    • Illinois argued in its brief to SCOTUS:
      • The nonconsensual dissemination of private sexual images exposes victims to a wide variety of serious harms that affect nearly every aspect of their lives. The physical, emotional, and economic harms associated with such conduct are well-documented: many victims are exposed to physical violence, stalking, and harassment; suffer from emotional and psychological harm; and face limited professional prospects and lowered income, among other repercussions. To address this growing problem and protect its residents from these harms, Illinois enacted section 11-23.5, 720 ILCS 5/11-23.5. Petitioner—who was charged with violating section 11-23.5 after she disseminated nude photos of her fiancé’s paramour without consent—asks this Court to review the Illinois Supreme Court’s decision rejecting her First Amendment challenge.
  • Six U.S. Agency for Global Media (USAGM) whistleblowers have filed a complaint concerning “retaliatory actions” with the Office of the Inspector General (OIG) at the Department of State and the Office of Special Counsel, arguing the newly installed head of USAGM punished them for making complaints through proper channels about his actions. This is the latest development at the agency; previously, the United States Court of Appeals for the District of Columbia enjoined USAGM from “taking any action to remove or replace any officers or directors of the OTF,” pending the outcome of the suit, which is being expedited.
  • Additionally, USAGM CEO and Chair of the Board Michael Pack is being accused in two different letters of seeking to compromise the integrity and independence of two organizations he oversees. There have been media accounts of the Trump Administration’s remaking of USAGM in ways critics contend are threatening the mission and effectiveness of the Open Technology Fund (OTF), a U.S. government non-profit designed to help dissidents and endangered populations throughout the world. The head of the OTF has been removed, evoking the ire of Members of Congress, and other changes have been implemented that are counter to the organization’s mission. Likewise, there are allegations that politically-motivated policy changes seek to remake the Voice of America (VOA) into a less independent entity.
  • The whistleblowers claimed in their complaint:
    • Each of the Complainants made protected disclosures – whether in the form of OIG complaints, communications with USAGM leadership, and/or communications with appropriate Congressional committees – regarding their concerns about official actions primarily taken by Michael Pack, who has been serving as the Chief Executive Officer for USAGM since June 4, 2020. The Complainants’ concerns involve allegations that Mr. Pack has engaged in conduct that violates federal law and/or USAGM regulations, and that constitutes an abuse of authority and gross mismanagement. Moreover, each of the Complainants was targeted for retaliatory action by Mr. Pack because of his belief that they held political views opposed to his, which is a violation of the Hatch Act.
    • Each of the Complainants was informed by letter, dated August 12, 2020, that their respective accesses to classified information had been suspended pending further investigation. Moreover, they were all concurrently placed on administrative leave. In each of the letters to the Complainants, USAGM claimed that the Complainants had been improperly granted security clearances, and that the Complainants failed to take remedial actions to address personnel and security concerns prior to permitting other USAGM employees to receive security clearances. In addition, many or all of the Complainants were earlier subject to retaliatory adverse personnel actions in the form of substantial limitations on their ability to carry out their work responsibilities (i.e., a significant change in duties and responsibilities), which limitations were imposed without following appropriate personnel procedures.

Further Reading

  • “Big Tech Was Their Enemy, Until Partisanship Fractured the Battle Plans” By Cecilia Kang and David McCabe — The New York Times. There’s a bit of court intrigue in this piece about how Republicans declined to join Democrats in the antitrust report released this week, sapping power from its recommendations on how to address Big Tech.
  • “Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist” By Bobby Allyn — NPR. Still no evidence of an anti-conservative bias at Facebook, according to experts, and the incomplete data available seem to indicate conservative content may be more favored by users than liberal content. Facebook does not release data that would settle the question, however, and there are all sorts of definitional questions that need answers before this issue could be definitively settled. Some food for thought: a significant percentage of link sharing may be driven by bots rather than humans.
  • “News Corp. changes its tune on Big Tech” By Sara Fischer — Axios. After beating the drum for years about the effect of Big Tech on journalism, the parent company of the Wall Street Journal and other media outlets is much more conciliatory these days. It may have something to do with all the cash the Googles and Facebooks of the world are proposing to throw at some media outlets for their content. It remains to be seen how this change in tune will affect the Australian Competition and Consumer Commission’s (ACCC) proposal to ensure that media companies are compensated for articles and content online platforms use. In late July the ACCC released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.”
  • “Silicon Valley Opens Its Wallet for Joe Biden” By Daniel Oberhaus — WIRED. In what will undoubtedly be adduced as evidence that Silicon Valley is a liberal haven, this article reports that, according to federal election data for this election cycle, Alphabet, Amazon, Apple, Facebook, Microsoft, and Oracle employees have contributed $4,787,752 to former Vice President Joe Biden and $239,527 to President Donald Trump. These figures cover only contributions of $200 and higher, so the data are likely incomplete.
  • “Facebook bans QAnon across its platforms” By Ben Collins and Brandy Zadrozny — NBC News. The social media giant has escalated and will remove all content related to the conspiracy group and theory known as QAnon. However, believers have been adaptable and agile in dropping certain terms and using methods to evade detection. Some experts say Facebook’s actions are too little, too late as these beliefs are widespread and are fueling a significant amount of violence and unrest in the real world.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Katie White from Pixabay

Further Reading, Other Developments, and Coming Events (18 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” The agency has released its agenda and explained:
    • The workshop will also feature four panel discussions that will focus on: case studies on data portability rights in the European Union, India, and California; case studies on financial and health portability regimes; reconciling the benefits and risks of data portability; and the material challenges and solutions to realizing data portability’s potential.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing on 23 September titled “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division, and Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency, Department of Homeland Security
  • On 23 September, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “Revisiting the Need for Federal Data Privacy Legislation,” with these witnesses:
    • The Honorable Julie Brill, Former Commissioner, Federal Trade Commission
    • The Honorable William Kovacic, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Jon Leibowitz, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Maureen Ohlhausen, Former Commissioner and Acting Chairman, Federal Trade Commission
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • Former Principal Deputy Under Secretary in the Office of Intelligence and Analysis Brian Murphy has filed a whistleblower reprisal complaint against the United States Department of Homeland Security (DHS), alleging he was punished for providing intelligence analysis the Trump White House and DHS did not want, mainly for political reasons, and for refusing to make alterations to fit the Administration’s chosen narrative on issues, especially the Russian Federation’s interference in the 2020 Election. Murphy alleges “he was retaliatorily demoted to the role of Assistant to the Deputy Under Secretary for the DHS Management Division” because he refused to comply with orders from acting Secretary of Homeland Security Chad Wolf. Specifically, he claims:
    • In mid-May 2020, Mr. Wolf instructed Mr. Murphy to cease providing intelligence assessments on the threat of Russian interference in the United States, and instead start reporting on interference activities by China and Iran. Mr. Wolf stated that these instructions specifically originated from White House National Security Advisor Robert O’Brien. Mr. Murphy informed Mr. Wolf he would not comply with these instructions, as doing so would put the country in substantial and specific danger.
  • The National Security Agency (NSA) Office of the Inspector General (OIG) issued an unclassified version of its Semiannual Report to Congress consisting of “the audits, evaluations, inspections, and investigations that were completed and ongoing” from 1 October 2019 to 31 March 2020.
    • The OIG found ongoing problems with how the NSA is administering surveillance of United States persons overseas (i.e., Sections 704 and 705 of the Foreign Intelligence Surveillance Act), something that has been a long-running problem at the agency. The OIG found:
      • NSA does not have adequate and complete documentation of scenario-based data tagging rules for accurately assigning data labels to restrict access to data in accordance with legal and policy requirements, and consistently assessing data labeling errors;
      • NSA has not designated a standardized field in NSA data tags to efficiently store and identify data needed to verify the accuracy of data label assignments;
      • NSA does not document in its targeting tool a majority of a certain type of targeting request; and
      • NSA controls do not adequately and completely verify the accuracy of data labels assigned to data prior to ingest into NSA repositories.
      • As a result of these findings, the OIG made seven recommendations, six to assist NSA in strengthening its corporate data tagging controls and governance, and a seventh to help ensure that NSA’s FISA §§704 and 705(b) data tagging legal and policy determinations are consistent with NSA representations made to the FISC and other external overseers regarding how NSA handles such data, and that these tagging requirements are fully documented and promulgated to the NSA workforce.
    • The OIG noted the middling progress the NSA has made in securing its information technology, a weakness that could well be used by adversaries to penetrate the agency’s networks:
      • In accordance with U.S. Office of Management and Budget guidance, the OIG is required annually to assess the effectiveness of information security programs on a maturity model spectrum, which ranges from Level 1 (ad hoc) to Level 5 (optimized). Our assessment of eight IT security areas revealed that while progress was made in some areas from FY2018 to FY2019, there continues to be room for improvement in all eight IT security areas.
      • For the second consecutive year, Identity and Access Management was deemed the strongest security area with an overall maturity level of 3, consistently implemented. The Agency’s challenges in Security Training dropped the maturity level from 3, consistently implemented, to 2, defined. For the second consecutive year, Contingency Planning was assessed at an overall maturity level of ad hoc; although the Agency has made some improvements to the program, additional improvements need to be made.
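For readers unfamiliar with the mechanism the OIG is auditing, "data tagging" is, at bottom, label-based access control: each record carries labels reflecting the legal authority under which it was collected, and access is denied unless the requester holds authority for every label on the record. The following is a minimal, purely illustrative sketch of that principle (all names and labels here are hypothetical and not drawn from any NSA system or the report):

```python
# Illustrative only: label-based access control, the concept behind
# "data tagging." A record is accessible only if the requester's
# authorities cover every label attached to the record.
from dataclasses import dataclass, field


@dataclass
class Record:
    content: str
    # Hypothetical labels, e.g. the authority under which data was collected
    labels: set = field(default_factory=set)


def can_access(user_authorities: set, record: Record) -> bool:
    # Deny unless every label on the record is among the user's authorities
    return record.labels.issubset(user_authorities)


analyst = {"FISA-704"}
rec = Record("intercept", labels={"FISA-704", "US-PERSON"})
print(can_access(analyst, rec))                   # False: lacks "US-PERSON"
print(can_access(analyst | {"US-PERSON"}, rec))   # True
```

The OIG's findings map onto this sketch directly: if labels are assigned inaccurately or inconsistently (the first and fourth findings), the `can_access` check becomes unreliable no matter how sound the rule itself is.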
  • The Office of the Director of National Intelligence (ODNI) released a June 2020 Foreign Intelligence Surveillance Court (FISC) opinion that sets the limits on using information gained from the electronic surveillance of former Trump campaign adviser Carter Page.
    • FISC noted:
      • The government has acknowledged that at least some of its collection under color of those FISC orders was unlawful. It nevertheless now contends that it must temporarily retain, and potentially use and disclose, the information collected, largely in the context of ongoing or anticipated litigation. The Court hereby sets parameters for such use or disclosure.
    • The FISC ordered:
      • (1) With regard to the third-party FOIA litigation, see supra pp. 9-10, and the pending litigation with Page, see supra p. 12, the government may use or disclose Page FISA information insofar as necessary for the good-faith conduct of that litigation;
      • (2) With regard to any future claims brought by Page seeking redress for unlawful electronic surveillance or physical search or for disclosure of the results of such surveillance or search, the government may use or disclose Page FISA information insofar as necessary to the good-faith conduct of the litigation of such claims;
      • (3) Further use or disclosure of Page FISA information is permitted insofar as necessary to effective performance or disciplinary reviews of government personnel, provided that any such use or disclosure of raw information is permitted only insofar as a particular need to use or disclose the specific information at issue has been demonstrated. This paragraph applies, but is not limited to, use by, and disclosure by or to, the FBI’s INSD or OPR;
      • (4) Further use or disclosure of Page FISA information by DOJ OIG is permitted only insofar as necessary to assess the implementation of Recommendation 9 of the OIG Report;
      • (5) Further use or disclosure of Page FISA information is permitted only insofar as necessary to investigate or prosecute potential crimes relating to the conduct of the Page or Crossfire Hurricane investigations, provided that any such use or disclosure of raw information is permitted only insofar as a particular need to use or disclose the specific information at issue has been demonstrated. This paragraph applies, but is not limited to, use by, and disclosure by or to, personnel engaged in the review being led by United States Attorney Durham. See supra p. 17; and
      • (6) By January 29, 2021, and at intervals of no more than six months thereafter, the government shall submit under oath a written report on the retention, and any use or disclosure, of Page FISA information
  • Portland, Oregon has passed bans on the use of facial recognition technology (FRT) by its government and private entities that are being characterized as the most stringent in the United States. Effective immediately, no city agency may use FRT, and come 1 January 2021 no private companies may do so. In contrast, FRT bans in Boston, San Francisco, and Oakland only bar government entities from using the technology. Portland residents, however, may still use FRT; for example, using FRT to unlock one’s phone would remain legal. The legislation explains:
    • The purpose of this Chapter is to prohibit the use of Face Recognition Technologies in Places of Public Accommodation by Private Entities within the boundaries of the City of Portland.
    • Face Recognition Technologies have been shown to falsely identify women and People of Color on a routine basis. While progress continues to be made in improving Face Recognition Technologies, wide ranges in accuracy and error rates that differ by race and gender have been found in vendor testing.
    • Community members have raised concerns on the impacts of Face Recognition Technologies on civil liberties and civil rights. In addition, the collection, trade, and use of face biometric information may compromise the privacy of individuals even in their private setting. While these claims are being assessed, the City is creating safeguards aiming to protect Portlanders’ sensitive information until better infrastructure and policies are in place.
    • Portland’s commitment to equity means that we prioritize the safety and well-being of communities of color and other marginalized and vulnerable community members.
    • However, the ban does not apply:
      • To the extent necessary for a Private Entity to comply with federal, state, or local laws;
      • For user verification purposes by an individual to access the individual’s own personal or employer issued communication and electronic devices; or
      • In automatic face detection services in social media applications.
  • President Donald Trump has nominated Nathan Simington to replace Federal Communications Commission (FCC) Commissioner Michael O’Rielly. Reports indicate Trump was displeased that O’Rielly was not receptive to Executive Order (EO) 13925, “Preventing Online Censorship,” and so declined to renominate O’Rielly for another term. Simington is currently serving as Senior Advisor in the National Telecommunications and Information Administration (NTIA) and is reported to have been deeply involved in the drafting of the EO. A White House press release provided this biography:
    • Among his many responsibilities across the telecommunications industry, he works on 5G security and secure supply chains, the American Broadband Initiative, and is NTIA’s representative to the Government Advisory Committee of the Internet Corporation for Assigned Names and Numbers.
    • Prior to his appointment at NTIA, Mr. Simington was Senior Counsel to Brightstar Corporation, a leading international company in the wireless industry.  In this role, he negotiated deals with companies across the spectrum of the telecommunications and internet industry, including most of the world’s leading wireless carriers. As the head lawyer on the advanced mobility products team, he spearheaded numerous international transactions in the devices, towers and services fields and forged strong relationships with leading telecom equipment manufacturers.  Prior to his career with Brightstar, Mr. Simington was an attorney in private practice with prominent national and international law firms.
    • Following the directive in the EO, on 27 July, the NTIA filed a petition with the FCC, asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself.
    • In early August, the FCC asked for comments on the NTIA petition, and comments were due by 2 September. Over 2,500 comments have been filed, and a cursory search turned up numerous form-letter comments drafted by a conservative organization that were then submitted by members and followers.

Further Reading

  • “‘I Have Blood on My Hands’: A Whistleblower Says Facebook Ignored Global Political Manipulation” By Craig Silverman, Ryan Mac, and Pranav Dixit — BuzzFeed News. In a blistering memo on her way out the door, a Facebook engineer responsible for policing fake content around the world charged that the company is unconcerned about how the manipulation of its platform benefits regimes throughout the world. There is also the implication the company is much more focused on content moderation in the United States (U.S.) and western Europe, possibly because of political pressure from those nations. Worse than allowing repressive and anti-democratic governments to target news organizations and opposition figures, the company was slow to respond when human rights advocates’ accounts were falsely flagged as violating terms of service. The engineer finally quit after sleepless nights of worrying that her efforts were falling short of protecting people in many nations. She further claimed “[i]t’s an open secret within the civic integrity space that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention.”
  • “Online learning’s toll on kids’ privacy” By Ashley Gold — Axios. With the shift to online education for many students in the United States, the privacy and data security practices of companies in this space are starting to be examined. But schools and parents may be woefully underinformed about, or lack the power to curb, some data collection and usage practices. The Federal Trade Commission (FTC) enforces the Children’s Online Privacy Protection Act (COPPA), which critics claim is not strong enough, and to the extent the FTC enforces the law, enforcement is “woefully insufficient.” Moreover, the differences between richer and poorer schools play out with respect to privacy and data security, as the latter likely cannot afford to vet and use the best companies.
  • “Unlimited Information Is Transforming Society” By Naomi Oreskes and Erik M. Conway — Scientific American. This comprehensive article traces the field of information alongside other technological advances like electricity, nuclear power, and space travel. The authors posit that we are at a new point with information in that creation and transmission of it now flows in two directions whereas for much of history it flowed one way, often from the elites to everyone else.
  • “First death reported following a ransomware attack on a German hospital” By Catalin Cimpanu — ZDNet. The first fatality associated with a ransomware attack happened in Germany when a patient in an ambulance was diverted from a hospital struggling with ransomware. Apparently, the hackers did not even mean to target the hospital in Dusseldorf and instead were aiming to infect and extort a university hospital nearby. Nonetheless, Germany’s Bundesamt für Sicherheit in der Informationstechnik thereafter issued a warning advising entities to patch the CVE-2019-19871 vulnerability on Citrix network gateways.


Image by Peggy und Marco Lachmann-Anke from Pixabay

Pending Legislation In U.S. Congress, Part VI

At this point, Congress is just looking to organize U.S. AI efforts, maximize resources, and better understand the field.

Today, let us survey bills on artificial intelligence (AI), an area of growing interest and concern among Democratic and Republican Members. Lawmakers and staff have been grappling with this new technology and, at this point, are looking to study and foster its development, particularly to maintain the technological dominance of the United States (U.S.). There are some bills that may be enacted this year. However, any legislative action would play out against extensive executive branch AI efforts. In any event, Congress does not seem close to passing legislation that would regulate the technology and is looking to rely on existing statutes and regulators (e.g., the Federal Trade Commission’s power to police unfair and deceptive practices).

The bill with the best chances of enactment at present is the “National Artificial Intelligence Initiative Act of 2020” (H.R.6216), which was added to the “William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395), a bill that has other mostly defense related AI provisions.

Big picture, H.R. 6216 would require better coordination of federal AI initiatives, research, and funding, and more involvement in the development of voluntary, consensus-based standards for AI. Much of this would happen through the standing up of a new “National Artificial Intelligence Initiative Office” by the Office of Science and Technology Policy (OSTP) in the White House. This new entity would be the locus of AI activities and programs in the United States’ (U.S.) government with the ultimate goal of ensuring the nation is the world’s foremost developer and user of the new technology.

Moreover, OSTP would “acting through the National Science and Technology Council…establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative.” This body would “provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities, development of voluntary consensus standards and guidelines for research, development, testing, and adoption of ethically developed, safe, and trustworthy artificial intelligence systems, and education and training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative.” The committee would need to “develop a strategic plan for AI” within two years and update it every three years thereafter. Moreover, the committee would need to “propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget (OMB) that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative.” However, OMB would be under no obligation to take notice of this proposal save for pressure from AI stakeholders in Congress or AI champions in any given Administration. The Secretary of Energy would create a ‘‘National Artificial Intelligence Advisory Committee” to advise the President and National Artificial Intelligence Initiative Office on a range of AI policy matters.

Federal agencies would be permitted to award funds to new Artificial Intelligence Research Institutes to pioneer research in any number of AI fields or considerations. The bill does not authorize any set amount of money for this program and instead leaves any funding decisions to the Appropriations Committees. The National Institute of Standards and Technology (NIST) must “support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems” among other duties. NIST “shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for the trustworthiness of artificial intelligence systems.” NIST would also “develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies.”

The National Science Foundation (NSF) would need to “fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible non-profit organizations (or consortia thereof).” The Department of Energy must “carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department.” This department would also be tasked with advancing “expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations.”

According to a fact sheet issued by the House Science, Space, and Technology Committee, the legislation will:

  • Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
  • Create an advisory committee to better inform the Coordination Committee’s strategic plan, track the state of the science around artificial intelligence, and ensure the Initiative is meeting its goals.
  • Create a network of AI institutes, coordinated through the National Science Foundation, that any Federal department or agency could fund to create partnerships between academia and the public and private sectors to accelerate AI research focused on an economic sector, social sector, or on a cross-cutting AI challenge.
  • Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST) and require NIST to create a framework for managing risks associated with AI systems and best practices for sharing data to advance trustworthy AI systems.
  • Support research at the National Science Foundation (NSF) across a wide variety of AI related research areas to both improve AI systems and use those systems to advance other areas of science. This section requires NSF to include an obligation for an ethics statement for all research proposals to ensure researchers are considering, and as appropriate, mitigating potential societal risks in carrying out their research.
  • Support education and workforce development in AI and related fields, including through scholarships and traineeships at NSF.
  • Support AI research and development efforts at the Department of Energy (DOE), utilize DOE computing infrastructure for AI challenges, promote technology transfer, data sharing, and coordination with other Federal agencies, and require an ethics statement for DOE funded research as required at NSF.
  • Require studies to better understand workforce impacts and opportunities created by AI, and identify the computing resources necessary to ensure the United States remains competitive in AI.

As mentioned, the House’s FY 2021 NDAA has a number of other AI provisions, including:

  • Section 217–Modification of Joint Artificial Intelligence Research, Development, and Transition Activities. This section would amend section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (Public Law 115-232) by assigning responsibility for the Joint Artificial Intelligence Center (JAIC) to the Deputy Secretary of Defense and ensure data access and visibility for the JAIC.
  • Section 224–Board of Directors for the Joint Artificial Intelligence Center. This section would direct the Secretary of Defense to create and resource a Board of Directors for the Joint Artificial Intelligence Center (JAIC), comprised of senior Department of Defense officials, as well as civilian directors not employed by the Department of Defense. The objective would be to have a standing body over the JAIC that can bring governmental and non-governmental experts together for the purpose of assisting the Department of Defense in correctly integrating and operationalizing artificial intelligence technologies.
  • Section 242–Training for Human Resources Personnel in Artificial Intelligence and Related Topics. This section would direct the Secretary of Defense to develop and implement a program to provide human resources personnel with training in the fields of software development, data science, and artificial intelligence, as such fields relate to the duties of such personnel, not later than 1 year after the date of the enactment of this Act.
  • Section 248–Acquisition of Ethically and Responsibly Developed Artificial Intelligence Technology. This section would direct the Secretary of Defense, acting through the Board of Directors of the Joint Artificial Intelligence Center, to conduct an assessment to determine whether the Department of Defense has the ability to ensure that any artificial intelligence technology acquired by the Department is ethically and responsibly developed.
  • Section 805–Acquisition Authority of the Director of the Joint Artificial Intelligence Center. This section would vest the Director of the Joint Artificial Intelligence Center with responsibility for the development, acquisition, and sustainment of artificial intelligence technologies, services, and capabilities through fiscal year 2025.

The “FUTURE of Artificial Intelligence Act of 2020” (S.3771) was marked up and reported out of the Senate Commerce, Science, and Transportation Committee in July 2020. This bill would generally “require the Secretary of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence” to advise the department on a range of AI related matters, including competitiveness, workforce, education, ethics training and development, the open sharing of data and research, international cooperation, legal and civil rights, government efficiency, and others. Additionally, a subcommittee would be empaneled to focus on the intersection of AI and law enforcement and national security issues. Within 18 months of enactment, this committee would need to submit its findings in a report to Congress and the Department of Commerce. A bill with the same title has been introduced in the House (H.R.7559) but has not been acted upon. This bill would “require the Director of the National Science Foundation, in consultation with the Director of the Office of Science and Technology Policy, to establish an advisory committee to advise the President on matters relating to the development of artificial intelligence.”

The same day S.3771 was marked up, the committee took up another AI bill: the “Advancing Artificial Intelligence Research Act of 2020” (S.3891) that would “require the Director of the National Institute of Standards and Technology (NIST) to advance the development of technical standards for artificial intelligence, to establish the National Program to Advance Artificial Intelligence Research, to promote research on artificial intelligence at the National Science Foundation” (NSF). $250 million a year would be authorized for NIST to distribute for AI research. NIST would also need to establish at least six AI research institutes. The NSF would “establish a pilot program to assess the feasibility and advisability of awarding grants for the conduct of research in rapidly evolving, high priority topics.”

In early November 2019, the Senate Homeland Security & Governmental Affairs Committee marked up the “AI in Government Act of 2019” (S.1363) that would establish an AI Center of Excellence in the General Services Administration (GSA) to:

  • promote the efforts of the Federal Government in developing innovative uses of and acquiring artificial intelligence technologies by the Federal Government;
  • improve cohesion and competency in the adoption and use of artificial intelligence within the Federal Government.

The bill stipulates that both of these goals would be pursued “for the purposes of benefitting the public and enhancing the productivity and efficiency of Federal Government operations.”

The Office of Management and Budget (OMB) must “issue a memorandum to the head of each agency that shall—

  • inform the development of policies regarding Federal acquisition and use by agencies regarding technologies that are empowered or enabled by artificial intelligence;
  • recommend approaches to remove barriers for use by agencies of artificial intelligence technologies in order to promote the innovative application of those technologies while protecting civil liberties, privacy, civil rights, and economic and national security; and
  • identify best practices for identifying, assessing, and mitigating any discriminatory impact or bias on the basis of any classification protected under Federal nondiscrimination laws, or any unintended consequence of the use of artificial intelligence by the Federal Government.”

OMB is required to coordinate the drafting of this memo with the Office of Science and Technology Policy, GSA, other relevant agencies, and other key stakeholders.

In September 2020, the House passed its version of S.1363, the “AI in Government Act of 2019” (H.R.2575), by voice vote, sending it over to the Senate.

In September 2019, the House sent another AI bill to the Senate where it has not been taken up. The “Advancing Innovation to Assist Law Enforcement Act” (H.R.2613) would task the Financial Crimes Enforcement Network (FinCEN) with studying

  • the status of implementation and internal use of emerging technologies, including AI, digital identity technologies, blockchain technologies, and other innovative technologies within FinCEN;
  • whether AI, digital identity technologies, blockchain technologies, and other innovative technologies can be further leveraged to make FinCEN’s data analysis more efficient and effective; and
  • how FinCEN could better utilize AI, digital identity technologies, blockchain technologies, and other innovative technologies to more actively analyze and disseminate the information it collects and stores to provide investigative leads to Federal, State, Tribal, and local law enforcement, and other Federal agencies…and better support its ongoing investigations when referring a case to the Agencies.

All of these bills are being considered against a backdrop of significant Trump Administration action on AI, using existing authority to manage government operations. The Administration sees AI as playing a key role in ensuring and maintaining U.S. dominance in military affairs and in other realms.

Most recently, OMB and the Office of Science and Technology Policy (OSTP) released their annual guidance to United States departments and agencies to direct their budget requests for FY 2022 with respect to research and development (R&D). OMB and OSTP explained:

For FY2022, the five R&D budgetary priorities in this memorandum ensure that America remains at the global forefront of science and technology (S&T) discovery and innovation. The Industries of the Future (IotF) - artificial intelligence (AI), quantum information sciences (QIS), advanced communication networks/5G, advanced manufacturing, and biotechnology - remain the Administration’s top R&D priority.

Specifically, regarding AI, OMB and OSTP stated:

Artificial Intelligence: Departments and agencies should prioritize research investments consistent with the Executive Order (EO) 13859 on Maintaining American Leadership in Artificial Intelligence and the 2019 update of the National Artificial Intelligence Research and Development Strategic Plan. Transformative basic research priorities include research on ethical issues of AI, data-efficient and high performance machine learning (ML) techniques, cognitive AI, secure and trustworthy AI, scalable and robust AI, integrated and interactive AI, and novel AI hardware. The current pandemic highlights the importance of use-inspired AI research for healthcare, including AI for discovery of therapeutics and vaccines; AI-based search of publications and patents for scientific insights; and AI for improved imaging, diagnosis, and data analysis. Beyond healthcare, use-inspired AI research for scientific and engineering discovery across many domains can help the Nation address future crises. AI infrastructure investments are prioritized, including national institutes and testbeds for AI development, testing, and evaluation; data and model resources for AI R&D; and open knowledge networks. Research is also prioritized for the development of AI measures, evaluation methodologies, and standards, including quantification of trustworthy AI in dimensions of accuracy, fairness, robustness, explainability, and transparency.

In February 2020, OSTP published the “American Artificial Intelligence Initiative: Year One Annual Report” in which the agency claimed “the Trump Administration has made critical progress in carrying out this national strategy and continues to make United States leadership in [artificial intelligence] (AI) a top priority.” OSTP asserted that “[s]ince the signing of the EO, the United States has made significant progress on achieving the objectives of this national strategy…[and] [t]his document provides both a summary of progress and a continued long-term vision for the American AI Initiative.” Some agencies were already working on AI-related initiatives independently of the EO, but the White House has folded those into the larger AI strategy it is pursuing. Much of the document recites already announced developments and steps.

However, OSTP seems to reference a national AI strategy that differs a bit from the one laid out in EO 13859 and appears to represent the Administration’s evolved thinking on how to address AI across a number of dimensions in the form of “key policies and practices:”

1)  Invest in AI research and development: The United States must promote Federal investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI. President Trump called for a 2-year doubling of non-defense AI R&D in his fiscal year (FY) 2021 budget proposal, and in 2019 the Administration updated its AI R&D strategic plan, developed the first progress report describing the impact of Federal R&D investments, and published the first-ever reporting of government-wide non-defense AI R&D spending.

2)  Unleash AI resources: The United States must enhance access to high-quality Federal data, models, and computing resources to increase their value for AI R&D, while maintaining and extending safety, security, privacy, and confidentiality protections. The American AI Initiative called on Federal agencies to identify new opportunities to increase access to and use of Federal data and models. In 2019, the White House Office of Management and Budget established the Federal Data Strategy as a framework for operational principles and best practices around how Federal agencies use and manage data. 

3) Remove barriers to AI innovation: The United States must reduce barriers to the safe development, testing, deployment, and adoption of AI technologies by providing guidance for the governance of AI consistent with our Nation’s values and by driving the development of appropriate AI technical standards. As part of the American AI Initiative, The White House published for comment the proposed United States AI Regulatory Principles, the first AI regulatory policy that advances innovation underpinned by American values and good regulatory practices. In addition, the National Institute of Standards and Technology (NIST) issued the first-ever strategy for Federal engagement in the development of AI technical standards. 

4) Train an AI-ready workforce: The United States must empower current and future generations of American workers through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI. President Trump directed all Federal agencies to prioritize AI-related apprenticeship and job training programs and opportunities. In addition to its R&D focus, the National Science Foundation’s new National AI Research Institutes program will also contribute to workforce development, particularly of AI researchers. 

5) Promote an international environment supportive of American AI innovation: The United States must engage internationally to promote a global environment that supports American AI research and innovation and opens markets for American AI industries while also protecting our technological advantage in AI. Last year, the United States led historic efforts at the Organisation for Economic Cooperation and Development (OECD) to develop the first international consensus agreements on fundamental principles for the stewardship of trustworthy AI. The United States also worked with its international partners in the G7 and G20 to adopt similar AI principles. 

6) Embrace trustworthy AI for government services and missions: The United States must embrace technology such as artificial intelligence to improve the provision and efficiency of government services to the American people and ensure its application shows due respect for our Nation’s values, including privacy, civil rights, and civil liberties. The General Services Administration established an AI Center of Excellence to enable Federal agencies to determine best practices for incorporating AI into their organizations. 

Also in February 2020, the Department of Defense (DOD) announced in a press release that it “officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October.” The DOD claimed “[t]he adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems.” The Pentagon added “[t]he DOD’s AI ethical principles will build on the U.S. military’s existing ethics framework based on the U.S. Constitution, Title 10 of the U.S. Code, Law of War, existing international treaties and longstanding norms and values.” The DOD stated “[t]he DOD Joint Artificial Intelligence Center (JAIC) will be the focal point for coordinating implementation of AI ethical principles for the department.”

The DOD explained that “[t]hese principles will apply to both combat and non-combat functions and assist the U.S. military in upholding legal, ethical and policy commitments in the field of AI…[and] encompass five major areas:

  • Responsible. DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  • Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

It bears note that the DOD’s recitation of these five AI ethical principles differs from those drafted by the Defense Innovation Board. Notably, in “Equitable,” the Defense Innovation Board also included that the “DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons” (emphasis added). Likewise, in “Governable,” the Board recommended that “DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior” (emphasis added).

Additionally, the DOD has declined, at least at this time, to adopt the recommendations made by the Board regarding the use of AI:

1. Formalize these principles via official DOD channels. 

2. Establish a DOD-wide AI Steering Committee. 

3. Cultivate and grow the field of AI engineering. 

4. Enhance DOD training and workforce programs. 

5. Invest in research on novel security aspects of AI. 

6. Invest in research to bolster reproducibility. 

7. Define reliability benchmarks. 

8. Strengthen AI test and evaluation techniques. 

9. Develop a risk management methodology. 

10. Ensure proper implementation of AI ethics principles. 

11. Expand research into understanding how to implement AI ethics principles.

12. Convene an annual conference on AI safety, security, and robustness. 

In January 2020, OMB and OSTP requested comments on a draft “Guidance for Regulation of Artificial Intelligence Applications” that would be issued to federal agencies as directed by EO 13859. OMB listed the ten AI principles agencies must consider in regulating AI in the private sector, some of which overlap with the DOD’s ethical principles:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination

OSTP explained how the ten AI principles should be used:

Consistent with law, agencies should take into consideration the following principles when formulating regulatory and non-regulatory approaches to the design, development, deployment, and operation of AI applications, both general and sector-specific. These principles, many of which are interrelated, reflect the goals and principles in Executive Order 13859. Agencies should calibrate approaches concerning these principles and consider case-specific factors to optimize net benefits. Given that many AI applications do not necessarily raise novel issues, these considerations also reflect longstanding Federal regulatory principles and practices that are relevant to promoting the innovative use of AI. Promoting innovation and growth of AI is a high priority of the United States government. Fostering innovation and growth through forbearing from new regulations may be appropriate. Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.

In November 2019, the National Security Commission on Artificial Intelligence (NSCAI) released its interim report and explained that “[b]etween now and the publication of our final report, the Commission will pursue answers to hard problems, develop concrete recommendations on “methods and means” to integrate AI into national security missions, and make itself available to Congress and the executive branch to inform evidence-based decisions about resources, policy, and strategy.” The Commission released its initial report in July that laid out its work plan.

In July 2020, NSCAI published its Second Quarter Recommendations, a compilation of the policy proposals it made during the quarter. NSCAI said it is still on track to release its final recommendations in March 2021. The NSCAI asserted:

The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.

In August 2019, NIST published “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” as required by EO 13859. The EO directed the Secretary of Commerce, through NIST, to issue “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies” that must include:

(A) Federal priority needs for standardization of AI systems development and deployment;

(B) identification of standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles; and

(C) opportunities for and challenges to United States leadership in standardization related to AI technologies.

NIST’s AI plan meets those requirements in the broadest of strokes and will require much from the Administration and agencies to be realized, including further steps required by the EO.

Finally, all of these Trump Administration efforts are playing out at the same time as global processes. In late May 2019, the Organization for Economic Cooperation and Development (OECD) adopted recommendations from the OECD Council on Artificial Intelligence (AI), and non-OECD members Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania also pledged to adhere to the recommendations. Of course, OECD recommendations have no legally binding force on any nation, but standards articulated by the OECD are highly respected and sometimes do form the basis for nations’ approaches on an issue, as the 1980 OECD recommendations on privacy did. Moreover, the National Telecommunications and Information Administration (NTIA) signaled the Trump Administration’s endorsement of the OECD effort. In February 2020, the European Commission (EC) released its latest policy pronouncement on artificial intelligence, “On Artificial Intelligence – A European approach to excellence and trust,” in which the Commission articulates its support for “a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology.” The EC stated that “[t]he purpose of this White Paper is to set out policy options on how to achieve these objectives…[but] does not address the development and use of AI for military purposes.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay