Further Reading, Other Developments, and Coming Events (19 January 2021)

Further Reading

  • “Hong Kong telecoms provider blocks website for first time, citing security law” — Reuters; “A Hong Kong Website Gets Blocked, Raising Censorship Fears” By Paul Mozur and Aaron Krolik — The New York Times. The Hong Kong Broadband Network (HKBN) blocked access to a website about the 2019 protests against the People’s Republic of China (PRC) (called HKChronicles) under a recently enacted security law critics had warned would lead to exactly this sort of outcome. Allegedly, the Hong Kong police had invoked the National Security Law for the first time, and other telecommunications companies have followed suit.
  • “Biden to counter China tech by urging investment in US: adviser” By Yifan Yu — Nikkei Asia. President-elect Joe Biden’s head of the National Economic Council said at a public event that the Biden Administration would focus less on tariffs and other similar instruments to counter the People’s Republic of China (PRC). Instead, the incoming President would try to foster investment in United States companies and technologies to fend off the PRC’s growing strength in a number of crucial fields. Also, a Biden Administration would work more with traditional U.S. allies to contest policies from Beijing.
  • “Revealed: walkie-talkie app Zello hosted far-right groups who stormed Capitol” By Micah Loewinger and Hampton Stall — The Guardian. Some of the rioters and insurrectionists who attacked the United States Capitol on 6 January were using another, lesser known communications app, Zello, to coordinate their actions. The app has since taken down a number of right-wing and extremist groups that had flourished for months if not years on the platform. It remains to be seen how smaller platforms will be scrutinized under a Biden Presidency. Zello has reportedly been aware that these groups were using its platform and opted not to police their conduct.
  • “They Used to Post Selfies. Now They’re Trying to Reverse the Election.” By Stuart A. Thompson and Charlie Warzel — The New York Times. Each of the three people profiled, all of whom amassed considerable extremist followings, seems to be part believer and part opportunist. A fascinating series of profiles.
  • “Telegram tries, and fails, to remove extremist content” By Mark Scott — Politico. Platforms other than Facebook and Twitter are struggling to moderate right-wing and extremist content that violates their policies and terms of service.

Other Developments

  • The Biden-Harris transition team announced that a statutorily established science advisor will now be a member of the Cabinet and named its nominee for this and other positions. The Office of Science and Technology Policy (OSTP) was created by executive order in the Ford Administration and then codified by Congress. However, the OSTP Director has not been a member of the Cabinet alongside the Senate-confirmed Secretaries and others. President-elect Joe Biden has decided to elevate the OSTP Director to the Cabinet, likely to signal the importance of science and technology in his Administration. The current OSTP has exercised unusual influence in the Trump Administration under the leadership of OSTP Associate Director Michael Kratsios and shaped policy in a number of realms, including artificial intelligence and national security.
    • In the press release, the transition team explained:
      • Dr. Eric Lander will be nominated as Director of the OSTP and serve as the Presidential Science Advisor. The president-elect is elevating the role of science within the White House, including by designating the Presidential Science Advisor as a member of the Cabinet for the first time in history. One of the country’s leading scientists, Dr. Lander was a principal leader of the Human Genome Project and has been a pioneer in the field of genomic medicine. He is the founding director of the Broad Institute of MIT and Harvard, one of the nation’s leading research institutes. During the Obama-Biden administration, he served as external Co-Chair of the President’s Council of Advisors on Science and Technology. Dr. Lander will be the first life scientist to serve as Presidential Science Advisor.
      • Dr. Alondra Nelson will serve as OSTP Deputy Director for Science and Society. A distinguished scholar of science, technology, social inequality, and race, Dr. Nelson is president of the Social Science Research Council, an independent, nonprofit organization linking social science research to practice and policy. She is also a professor at the Institute for Advanced Study, one of the nation’s most distinguished research institutes, located in Princeton, NJ.
      • Dr. Frances H. Arnold and Dr. Maria Zuber will serve as the external Co-Chairs of the President’s Council of Advisors on Science and Technology (PCAST). An expert in protein engineering, Dr. Arnold is the first American woman to win the Nobel Prize in Chemistry. Dr. Zuber, an expert in geophysics and planetary science, is the first woman to lead a NASA spacecraft mission and has chaired the National Science Board. They are the first women to serve as co-chairs of PCAST.
      • Dr. Francis Collins will continue serving in his role as Director of the National Institutes of Health.
      • Kei Koizumi will serve as OSTP Chief of Staff and is one of the nation’s leading experts on the federal science budget.
      • Narda Jones, who will serve as OSTP Legislative Affairs Director, was Senior Technology Policy Advisor and Counsel for the Democratic staff of the U.S. Senate Committee on Commerce, Science and Transportation.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) issued a report on supply chain security by a public-private sector advisory body, which represents one of the lines of effort of the U.S. government to better secure technology and electronics that emanate from the People’s Republic of China (PRC). CISA’s National Risk Management Center co-chairs the Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force along with the Information Technology Sector Coordinating Council and the Communications Sector Coordinating Council. The ICT SCRM Task Force published its Year 2 Report that “builds upon” its Interim Report and asserted:
    • Over the past year, the Task Force has expanded upon its first-year progress to advance meaningful partnership around supply chain risk management. Specifically, the Task Force:
      • Developed reference material to support overcoming legal obstacles to information sharing
      • Updated the Threat Evaluation Report, which evaluates threats to suppliers, with additional scenarios and mitigation measures for the corresponding threat scenarios
      • Produced a report and case studies providing in-depth descriptions of control categories and information regarding when and how to use a Qualified List to manage supply chain risks
      • Developed a template for SCRM compliance assessments and internal evaluations of alignment to industry standards
      • Analyzed the current and potential impacts from the COVID-19 pandemic, and developed a system map to visualize ICT supply chain routes and identify chokepoints
      • Surveyed supply chain related programs and initiatives that provide opportunities for potential Task Force engagement
    • Congress established an entity to address and help police supply chain risk at the end of 2018 in the “Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure Technology Act” (SECURE Act) (P.L. 115-390). The Federal Acquisition Security Council (FASC) has a number of responsibilities, including:
      • developing an information sharing process for agencies to circulate decisions throughout the federal government made to exclude entities determined to be IT supply chain risks
      • establishing a process by which entities determined to be IT supply chain risks may be excluded from procurement government-wide (exclusion orders) or suspect IT must be removed from government systems (removal orders)
      • creating an exception process under which IT from an entity subject to a removal or exclusion order may be used if warranted by national interest or national security
      • issuing recommendations for agencies on excluding entities and IT from the IT supply chain and “consent for a contractor to subcontract” and mitigation steps entities would need to take in order for the Council to rescind a removal or exclusion order
      • In September 2020, the FASC released an interim regulation that took effect upon being published that “implement[s] the requirements of the laws that govern the operation of the FASC, the sharing of supply chain risk information, and the exercise of its authorities to recommend issuance of removal and exclusion orders to address supply chain security risks…”
  • The Australian government has released its bill to remake how platforms like Facebook, Google, and others may use the content of news media, including provision for payment. The “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020” “establishes a mandatory code of conduct to help support the sustainability of the Australian news media sector by addressing bargaining power imbalances between digital platforms and Australian news businesses.” The agency charged with developing legislation, the Australian Competition and Consumer Commission (ACCC), has tussled with Google in particular over what this law would look like, with the technology giant threatening to withdraw from Australia altogether. The ACCC had determined in its July 2019 Digital Platform Inquiry:
    • that there is a bargaining power imbalance between digital platforms and news media businesses so that news media businesses are not able to negotiate for a share of the revenue generated by the digital platforms and to which the news content created by the news media businesses contributes. Government intervention is necessary because of the public benefit provided by the production and dissemination of news, and the importance of a strong independent media in a well-functioning democracy.
    • In an Explanatory Memorandum, it is explained:
      • The Bill establishes a mandatory code of conduct to address bargaining power imbalances between digital platform services and Australian news businesses…by setting out six main elements:
        • bargaining–which require the responsible digital platform corporations and registered news business corporations that have indicated an intention to bargain, to do so in good faith;
        • compulsory arbitration–where parties cannot come to a negotiated agreement about remuneration relating to the making available of covered news content on designated digital platform services, an arbitral panel will select between two final offers made by the bargaining parties;
        • general requirements–which, among other things, require responsible digital platform corporations to provide registered news business corporations with advance notification of planned changes to an algorithm or internal practice that will have a significant effect on covered news content;
        • non-differentiation requirements–responsible digital platform corporations must not differentiate between the news businesses participating in the Code, or between participants and non-participants, because of matters that arise in relation to their participation or non-participation in the Code;
        • contracting out–the Bill recognises that a digital platform corporation may reach a commercial bargain with a news business outside the Code about remuneration or other matters. It provides that parties who notify the ACCC of such agreements would not need to comply with the general requirements, bargaining and compulsory arbitration rules (as set out in the agreement); and
        • standard offers–digital platform corporations may make standard offers to news businesses, which are intended to reduce the time and cost associated with negotiations, particularly for smaller news businesses. If the parties notify the ACCC of an agreed standard offer, those parties do not need to comply with bargaining and compulsory arbitration (as set out in the agreement);
  • The Federal Trade Commission (FTC) has reached a settlement with a mobile advertising company over “allegations that it failed to provide in-game rewards users were promised for completing advertising offers.” The FTC unanimously agreed to the proposed settlement with Tapjoy, Inc. that bars the company “from misleading users about the rewards they can earn and must monitor its third-party advertiser partners to ensure they do what is necessary to enable Tapjoy to deliver promised rewards to consumers.” The FTC drafted a 20-year settlement that will obligate Tapjoy, Inc. to refrain from certain practices that violate the FTC Act; in this case that includes not making false claims about the rewards people can get if they take or do not take some action in an online game. Tapjoy, Inc. will also need to submit compliance reports, keep records, and make materials available to the FTC upon demand. Any failure to meet the terms of the settlement could prompt the FTC to seek redress in federal court, including more than $43,000 per violation.
    • In the complaint, the FTC outlined Tapjoy, Inc.’s illegal conduct:
      • Tapjoy operates an advertising platform within mobile gaming applications (“apps”). On the platform, Tapjoy promotes offers of in-app rewards (e.g., virtual currency) to consumers who complete an action, such as taking a survey or otherwise engaging with third-party advertising. Often, these consumers must divulge personal information or spend money. In many instances, Tapjoy never issues the promised reward to consumers who complete an action as instructed, or only issues the currency after a substantial delay. Consumers who attempt to contact Tapjoy to complain about missing rewards find it difficult to do so, and many consumers who complete an action as instructed and are able to submit a complaint nevertheless do not receive the promised reward.  Tapjoy has received hundreds of thousands of complaints concerning its failure to issue promised rewards to consumers. Tapjoy nevertheless has withheld rewards from consumers who have completed all required actions.
    • In its press release, the FTC highlighted the salient terms of the settlement:
      • As part of the proposed settlement, Tapjoy is prohibited from misrepresenting the rewards it offers consumers and the terms under which they are offered. In addition, the company must clearly and conspicuously display the terms under which consumers can receive such rewards and must specify that the third-party advertisers it works with determine if a reward should be issued. Tapjoy also will be required to monitor its advertisers to ensure they are following through on promised rewards, investigate complaints from consumers who say they did not receive their rewards, and discipline advertisers who deceive consumers.
    • FTC Commissioners Rohit Chopra and Rebecca Kelly Slaughter issued a joint statement, and in their summary section, they asserted:
      • The explosive growth of mobile gaming has led to mounting concerns about harmful practices, including unlawful surveillance, dark patterns, and facilitation of fraud.
      • Tapjoy’s failure to properly police its mobile gaming advertising platform cheated developers and gamers out of promised compensation and rewards.
      • The Commission must closely scrutinize today’s gaming gatekeepers, including app stores and advertising middlemen, to prevent harm to developers and gamers.
    • On the last point, Chopra and Kelly Slaughter argued:
      • We should all be concerned that gatekeepers can harm developers and squelch innovation. The clearest example is rent extraction: Apple and Google charge mobile app developers on their platforms up to 30 percent of sales, and even bar developers from trying to avoid this tax through offering alternative payment systems. While larger gaming companies are pursuing legal action against these practices, developers and small businesses risk severe retaliation for speaking up, including outright suspension from app stores – an effective death sentence.
      • This market structure also has cascading effects on gamers and consumers. Under heavy taxation by Apple and Google, developers have been forced to adopt alternative monetization models that rely on surveillance, manipulation, and other harmful practices.
  • The United Kingdom’s (UK) High Court ruled against the use of general warrants for online surveillance by the UK’s security agencies (MI5, MI6, and the Government Communications Headquarters (GCHQ)). Privacy International (PI), a British advocacy organization, had brought the suit after Edward Snowden revealed the scope of the United States National Security Agency’s (NSA) surveillance activities, including bulk collection of information, a significant portion of which required hacking. PI sued in a special tribunal formed to resolve claims against British security agencies, where the government asserted general warrants would suffice for purposes of mass hacking. PI disagreed and argued this was counter to 250 years of established law in the UK that warrants must be based on reasonable suspicion, specific in what is being sought, and proportionate. The High Court agreed with PI.
    • In its statement after the ruling, PI asserted:
      • Because general warrants are by definition not targeted (and could therefore apply to hundreds, thousands or even millions of people) they violate individuals’ right not to have their property searched without lawful authority, and are therefore illegal.
      • The adaptation of these 250-year-old principles to modern government hacking and property interference is of great significance. The Court signals that fundamental constitutional principles still need to be applied in the context of surveillance and that the government cannot circumvent traditional protections afforded by the common law.
  • In Indiana, the attorney general is calling on the governor “to adopt a safe harbor rule I proposed that would incentivize companies to take strong data protection measures, which will reduce the scale and frequency of cyberattacks in Indiana.” Attorney General Curtis Hill urged Governor Eric J. Holcomb to allow a change in the state’s data security regulations to be made effective.
    • The proposed rule provides:
      • Procedures adopted under IC 24-4.9-3-3.5(c) are presumed reasonable if the procedures comply with this section, including one (1) of the following applicable standards:
        • (1) A covered entity implements and maintains a cybersecurity program that complies with the National Institute of Standards and Technology (NIST) cybersecurity framework and follows the most recent version of one (1) of the following standards:
          • (A) NIST Special Publication 800-171.
          • (B) NIST SP 800-53.
          • (C) The Federal Risk and Authorization Management Program (FedRAMP) security assessment framework.
          • (D) International Organization for Standardization/International Electrotechnical Commission 27000 family – information security management systems.
        • (2) A covered entity is regulated by the federal or state government and complies with one (1) of the following standards as it applies to the covered entity:
          • (A) The federal USA Patriot Act (P.L. 107-56).
          • (B) Executive Order 13224.
          • (C) The federal Driver’s Privacy Protection Act (18 U.S.C. 2721 et seq.).
          • (D) The federal Fair Credit Reporting Act (15 U.S.C. 1681 et seq.).
          • (E) The federal Health Insurance Portability and Accountability Act (HIPAA) (P.L. 104-191).
        • (3) A covered entity complies with the current version of the payment card industry data security standard in place at the time of the breach of security of data, as published by the Payment Card Industry Security Standard Council.
      • The regulations further provide that if a data base owner can show “its data security plan was reasonably designed, implemented, and executed to prevent the breach of security of data” then it “will not be subject to a civil action from the office of the attorney general arising from the breach of security of data.”
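    • The rule’s structure, under which procedures are “presumed reasonable” if they satisfy at least one of the three enumerated paths (a recognized cybersecurity framework, a qualifying federal regulatory regime, or current PCI DSS compliance), can be sketched as a simple check. The sketch below is purely illustrative; all identifiers are hypothetical, and an actual safe-harbor determination is a legal question, not a boolean test:

```python
# Illustrative model of the proposed Indiana safe-harbor rule: procedures
# are presumed reasonable if they meet at least ONE enumerated path.
# All names below are hypothetical shorthand, not legal categories.

RECOGNIZED_FRAMEWORKS = {
    "NIST SP 800-171",   # path (1)(A)
    "NIST SP 800-53",    # path (1)(B)
    "FedRAMP",           # path (1)(C)
    "ISO/IEC 27000",     # path (1)(D)
}

REGULATED_REGIMES = {
    "USA PATRIOT Act",                     # path (2)(A)
    "Executive Order 13224",               # path (2)(B)
    "Driver's Privacy Protection Act",     # path (2)(C)
    "Fair Credit Reporting Act",           # path (2)(D)
    "HIPAA",                               # path (2)(E)
}

def presumed_reasonable(frameworks, regimes, pci_dss_current=False):
    """Return True if at least one qualifying path under the rule is met."""
    # Path (1): NIST Cybersecurity Framework plus one listed standard.
    follows_framework = any(f in RECOGNIZED_FRAMEWORKS for f in frameworks)
    # Path (2): regulated entity complying with an applicable federal regime.
    regulated_compliance = any(r in REGULATED_REGIMES for r in regimes)
    # Path (3): current PCI DSS compliance at the time of the breach.
    return follows_framework or regulated_compliance or pci_dss_current
```

Under this toy model, `presumed_reasonable(["NIST SP 800-53"], [])` is true via path (1), while an entity satisfying none of the paths gets no presumption.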
  • The Tech Transparency Project (TTP) is claiming that Apple “has removed apps in China at the government’s request” the majority of which “involve activities like illegal gambling and porn.” However, TTP is asserting that its analysis “suggests Apple is proactively blocking scores of other apps that are politically sensitive for Beijing.”

Coming Events

  • On 19 January, the Senate Intelligence Committee will hold a hearing on the nomination of Avril Haines to be the Director of National Intelligence.
  • The Senate Homeland Security and Governmental Affairs Committee will hold a hearing on the nomination of Alejandro N. Mayorkas to be Secretary of Homeland Security on 19 January.
  • On 19 January, the Senate Armed Services Committee will hold a hearing on former General Lloyd Austin III to be Secretary of Defense.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (11 January 2021)

Further Reading

  • “Why the Russian hack is so significant, and why it’s close to a worst-case scenario” By Kevin Collier — NBC News. This article quotes experts who paint a very ugly picture for the United States (U.S.) in trying to recover from the Russian Federation’s hack. Firstly, the Russians are very good at what they do and likely built multiple backdoors in systems they would want to ensure they have access to after using SolarWinds’ update system to gain initial entry. Secondly, broadly speaking, at present, U.S. agencies and companies have two very unpalatable options: spend months hunting through their systems for any such backdoors or other issues or rebuild their systems from scratch. The ramifications of this hack will continue to be felt well into the Biden Administration.
  • “The storming of Capitol Hill was organized on social media.” By Sheera Frenkel — The New York Times. As the repercussions of the riot and apparently attempted insurrection continue to be felt, one aspect that has received attention and will continue to receive attention is the role social media platforms played. Platforms used predominantly by right wing and extremist groups like Gab and Parler were used extensively to plan and execute the attack. This fact and the ongoing content moderation issues at larger platforms will surely inform the Section 230 and privacy legislation debates expected to occur this year and into the future.
  • “Comcast data cap blasted by lawmakers as it expands into 12 more states” By Jon Brodkin — Ars Technica. Comcast has extended to other states its 1.2TB cap on household broadband usage, and lawmakers in Massachusetts have written the company, claiming this will hurt low-income families working and schooling children at home. Comcast claims this affects only a small class of subscribers, so-called “super users.” In retrospect, such a move seemed inevitable, as data is now among the most valuable commodities.
  • “Finnish lawmakers’ emails hacked in suspected espionage incident” By Shannon Vavra — CyberScoop. Another legislature of a democratic nation has been hacked, and given the recent hacks of Norway’s Parliament and Germany’s Bundestag by the Russians, it may well turn out they were behind this hack that “obtain[ed] information either to benefit a foreign state or to harm Finland,” according to Finland’s National Bureau of Investigation.
  • “Facebook Forced Its Employees To Stop Discussing Trump’s Coup Attempt” By Ryan Mac — BuzzFeed News. Reportedly, Facebook shut down internal dialogue about the misgivings voiced by employees about its response to the lies in President Donald Trump’s video and the platform’s role in creating the conditions that caused Trump supporters to storm the United States (U.S.) Capitol. Internally and externally, Facebook equivocated on whether it would go so far as Twitter in taking down Trump’s video and content.
  • “WhatsApp gives users an ultimatum: Share data with Facebook or stop using the app” By Dan Goodin — Ars Technica. Very likely in response to coming changes to the Apple iOS that will allow for greater control of privacy, Facebook is giving WhatsApp users a choice: accept our new terms of service that allow personal data to be shared with and used by Facebook or have your account permanently deleted.
  • “Insecure wheels: Police turn to car data to destroy suspects’ alibis” By Olivia Solon — NBC News. Like any other computerized, connected device, cars are increasingly a source that law enforcement (and likely intelligence agencies) use to investigate crimes. If you sync your phone via USB or Bluetooth, most modern cars will access your phone and store all sorts of personal data that can later be accessed. Other systems in cars can tell investigators where the car was, how heavy it was (i.e., how many people were inside), when doors opened, and more. And there are no specific federal or state laws in the United States mandating protection of these data.

Other Developments

  • The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Office of the Director of National Intelligence (ODNI), and the National Security Agency (NSA) issued a joint statement, finally naming the Russian Federation as the likely perpetrator of the massive SolarWinds hack. However, the agencies qualified the language, claiming:
    • This work indicates that an Advanced Persistent Threat (APT) actor, likely Russian in origin, is responsible for most or all of the recently discovered, ongoing cyber compromises of both government and non-governmental networks. At this time, we believe this was, and continues to be, an intelligence gathering effort.
      • Why the language is not more definitive is not clear. Perhaps the agencies are merely exercising caution about who is blamed for the attack. Perhaps the agencies do not want to anger a White House and President averse to reports of Russian hacking for fear it will be associated with the hacking during the 2016 election that aided the Trump Campaign.
      • However, it is noteworthy the agencies are stating their belief the hacking was related to “intelligence gathering,” suggesting the purpose of the incursions was not to destroy data or launch an attack. Presumably, such an assertion is meant to allay concerns that the Russian Federation intends to attack the United States (U.S.) like it did in Ukraine and Georgia in the last decade.
    • The Cyber Unified Coordination Group (UCG) convened per Presidential Policy Directive (PPD) 41 (which technically is the FBI, CISA, and the ODNI but not the NSA) asserted its belief that
      • of the approximately 18,000 affected public and private sector customers of SolarWinds’ Orion products, a much smaller number has been compromised by follow-on activity on their systems. We have so far identified fewer than 10 U.S. government agencies that fall into this category, and are working to identify the nongovernment entities who also may be impacted.
      • These findings are, of course, preliminary, and there may be incentives for the agencies to be less than forthcoming about what they know of the scope and impact of the hacking.
  • Federal Communications Commission (FCC) Chair Ajit Pai has said he will not proceed with a rulemaking to curtail 47 USC 230 (Section 230) in response to a petition the National Telecommunications and Information Administration (NTIA) filed at the direction of President Donald Trump. Pai remarked “I do not intend to move forward with the notice of proposed rule-making at the FCC” because “in part, because given the results of the election, there’s simply not sufficient time to complete the administrative steps necessary in order to resolve the rule-making.” Pai cautioned Congress and the Biden Administration “to study and deliberate on [reforming Section 230] very seriously,” especially “the immunity provision.”  
    • In October, Pai had announced the FCC would proceed with a notice and comment rulemaking based on the NTIA’s petition asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself. The NTIA was acting per direction in an executive order allegedly aiming to correct online censorship. Executive Order 13925, “Preventing Online Censorship” was issued in late May after Twitter fact-checked two of President Donald Trump’s Tweets regarding false claims made about mail voting in California in response to the COVID-19 pandemic.
  • A House committee released its most recent assessment of federal cybersecurity and information technology (IT). The House Oversight Committee’s Government Operations Subcommittee released its 11th biannual scorecard under the “Federal Information Technology Acquisition Reform Act” (FITARA). The subcommittee stressed this “marks the first time in the Scorecard’s history that all 24 agencies included in the law have received A’s in a single category” and noted it is “the first time that a category will be retired.” Even though this assessment is labeled the FITARA Scorecard, it is actually a compilation of different metrics borne of other pieces of legislation and executive branch programs.
    • Additionally, 19 of the 24 agencies reviewed received A’s on the Data Center Optimization Initiative (DCOI).
    • However, four agencies received F’s on Agency Chief Information Officer (CIO) authority enhancements, measures aiming to fulfill one of the main purposes of FITARA: empowering agency CIOs as a means of controlling and managing better IT acquisition and usage. It has been an ongoing struggle to get agency compliance with the letter and spirit of federal law and directives to do just this.
    • Five agencies got F’s and two agencies got D’s for failing to hit the schedule for transitioning off of the “the expiring Networx, Washington Interagency Telecommunications System (WITS) 3, and Regional Local Service Agreement (LSA) contracts” to the General Services Administration’s $50 billion Enterprise Infrastructure Solutions (EIS). The GSA explained this program in a recent letter:
      • After March 31, 2020, GSA will disconnect agencies, in phases, to meet the September 30, 2022 milestone for 100% completion of transition. The first phase will include agencies that have been “non-responsive” to transition outreach from GSA. Future phases will be based on each agency’s status at that time and the individual circumstances impacting that agency’s transition progress, such as protests or pending contract modifications. The Agency Transition Sponsor will receive a notification before any services are disconnected, and there will be an opportunity for appeal.
  • A bipartisan quartet of United States Senators urged the Trump Administration in a letter to omit language in a trade agreement with the United Kingdom (UK) that mirrors the liability protection in 47 U.S.C. 230 (Section 230). Senators Rob Portman (R-OH), Mark R. Warner (D-VA), Richard Blumenthal (D-CT), and Charles E. Grassley (R-IA) argued to U.S. Trade Representative Ambassador Robert Lighthizer that a “safe harbor” like the one provided to technology companies for hosting or moderating third party content is outdated, not needed in a free trade agreement, contrary to the will of both the Congress and UK Parliament, and likely to be changed legislatively in the near future. However, left unsaid in the letter is the fact that Democrats and Republicans generally do not agree on how precisely to change Section 230. There may be consensus that change is needed, but what that change looks like is still a matter much in dispute.
    • Stakeholders in Congress were upset that the Trump Administration included language modeled on Section 230 in the United States-Mexico-Canada Agreement (USMCA), the modification of the North American Free Trade Agreement (NAFTA). For example, House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and then Ranking Member Greg Walden (R-OR) wrote Lighthizer, calling it “inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing” in Congress.
  • The Trump White House issued a new United States (U.S.) government strategy for advanced computing to replace the 2019 strategy. The “PIONEERING THE FUTURE ADVANCED COMPUTING ECOSYSTEM: A STRATEGIC PLAN” “envisions a future advanced computing ecosystem that provides the foundation for continuing American leadership in science and engineering, economic competitiveness, and national security.” The Administration asserted:
    • It develops a whole-of-nation approach based on input from government, academia, nonprofits, and industry sectors, and builds on the objectives and recommendations of the 2019 National Strategic Computing Initiative Update: Pioneering the Future of Computing. This strategic plan also identifies agency roles and responsibilities and describes essential operational and coordination structures necessary to support and implement its objectives. The plan outlines the following strategic objectives:
      • Utilize the future advanced computing ecosystem as a strategic resource spanning government, academia, nonprofits, and industry.
      • Establish an innovative, trusted, verified, usable, and sustainable software and data ecosystem.
      • Support foundational, applied, and translational research and development to drive the future of advanced computing and its applications.
      • Expand the diverse, capable, and flexible workforce that is critically needed to build and sustain the advanced computing ecosystem.
  • A federal court threw out a significant portion of a suit Apple brought against a security company, Corellium, that offers technology allowing security researchers to virtualize the iOS in order to undertake research. The United States District Court for the Southern District of Florida summarized the case:
    • On August 15, 2019, Apple filed this lawsuit alleging that Corellium infringed Apple’s copyrights in iOS and circumvented its security measures in violation of the federal Digital Millennium Copyright Act (“DMCA”). Corellium denies that it has violated the DMCA or Apple’s copyrights. Corellium further argues that even if it used Apple’s copyrighted work, such use constitutes “fair use” and, therefore, is legally permissible.
    • The court found “that Corellium’s use of iOS constitutes fair use” but did not dismiss the DMCA claim, thus allowing Apple to proceed with that portion of the suit.
  • The Trump Administration issued a plan on how cloud computing could be marshalled to help federally funded artificial intelligence (AI) research and development (R&D). A select committee made four key recommendations: “(1) launch and support pilot projects to identify and explore the advantages and challenges associated with the use of commercial clouds in conducting federally funded AI research; (2) improve education and training opportunities to help researchers better leverage cloud resources for AI R&D; (3) catalog best practices in identity management and single-sign-on strategies to enable more effective use of the variety of commercial cloud resources for AI R&D; and (4) establish and publish best practices for the seamless use of different cloud platforms for AI R&D. Each recommendation, if adopted, should accelerate the use of cloud resources for AI R&D.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

FY 2021 Omnibus and COVID Stimulus Become Law

The end-of-the-year funding package for FY 2021 is stuffed with technology policy changes.

At the tail end of the calendar year 2020, Congress and the White House finally agreed on FY 2021 appropriations and further COVID-19 relief funding and policies, much of which implicated or involved technology policy. As is often the practice, Congressional stakeholders used the opportunity of must-pass legislation as the vehicle for other legislation that perhaps could not get through a chamber of Congress or surmount the now customary filibuster in the Senate.

Congress cleared the “Consolidated Appropriations Act, 2021” (H.R.133) on 21 December 2020, but President Donald Trump equivocated on whether to sign the package, in part, because it did not provide for $2,000 in aid to every American, a new demand at odds with the one his negotiators worked out with House Democrats and Senate Republicans. Given this disparity, it seems more likely Trump made an issue of the $2,000 assistance to draw attention from a spate of controversial pardons issued to Trump allies and friends. Nonetheless, Trump ultimately signed the package on 27 December.

As one of the only bills or sets of bills to pass Congress annually, appropriations acts are often the means by which policy and programmatic changes are made at federal agencies through the legislative branch’s ability to condition the use of the funds it provides. This year’s package is different only in that it contains much more ride-along legislation than the average omnibus. In fact, there are hundreds, perhaps even more than 1,000, pages of non-appropriations legislation, some of which pertains to technology policy. Moreover, an additional supplemental bill attached to the FY 2021 omnibus carries significant technology funding and programming.

First, we will review FY 2021 funding and policy for key U.S. agencies, then discuss COVID-19 related legislation, and then finally all the additional legislation Congress packed into the omnibus.

The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) would receive $2.025 billion, a bare $9 million increase above FY 2020 with significant reordering of how the agency may spend its funds:

  • The agreement includes a net increase of $224,178,000 above the budget request. This includes $226,256,000 above the request to maintain current services, and $54,516,000 in enhancements that are described in more detail below. Assumed in the current services level of funding are several rejections of proposed reductions to prior year initiatives and the inclusion of necessary annualizations to sustain them, such as: $35,606,000 for threat analysis and response; $5,507,000 for soft targets and crowded places security, including school safety and best practices; $6,852,000 for bombing prevention activities, including the train-the-trainer programs; and $67,371,000 to fully fund the Chemical Facility Anti-Terrorism Standards program. The agreement includes the following reductions below the budget request: $6,937,000 for personnel cost adjustments; $2,500,000 of proposed increases to the CyberSentry program; $11,354,000 of proposed increases for the Vulnerability Management program; $2,000,000 of proposed increases to the Cybersecurity Quality Service Management Office (QSMO); $6,500,000 of proposed increases for cybersecurity advisors; and $27,303,000 for the requested increase for protective security advisors. Of the total amount provided for this account, $22,793,000 is available until September 30, 2022, for the National Infrastructure Simulation Analysis Center.

The FY 2021 omnibus requires of CISA the following:

  • Financial Transparency and Accountability.-The Cybersecurity and Infrastructure Security Agency (CISA) is directed to submit the fiscal year 2022 budget request at the same level of PPA detail provided in the table at the end of this report with no further adjustments to the PPA structure. Further, CISA shall brief the Committees not later than 45 days after the date of enactment of this Act and quarterly thereafter on: a spend plan; detailed hiring plans with a delineation of each mission critical occupation (MCO); procurement plans for all major investments to include projected spending and program schedules and milestones; and an execution strategy for each major initiative. The hiring plan shall include an update on CISA’s hiring strategy efforts and shall include the following for each MCO: the number of funded positions and FTE within each PPA; the projected and obligated funding; the number of actual onboard personnel as of the date of the plan; and the hiring and attrition projections for the fiscal year.
  • Cyber Defense Education and Training (CDET).-The agreement includes $29,457,000 for CISA’s CDET programs, an increase of $20,607,000 above the request that is described in further detail below. Efforts are underway to address the shortage of qualified national cybersecurity professionals in the current and future cybersecurity workforce. In order to move forward with a comprehensive plan for a cybersecurity workforce development effort, the agreement includes $10,000,000 above the request to enhance cybersecurity education and training and programs to address the national shortfall of cybersecurity professionals, including activities funded through the use of grants or cooperative agreements as needed in order to fully comply with congressional intent. CISA should consider building a higher education consortium of colleges and universities, led by at least one academic institution with an extensive history of education, research, policy, and outreach in computer science and engineering disciplines; existing designations as a land-grant institution with an extension role; a center of academic excellence in cyber security operations; a proven track record in hosting cyber corps programs; a record of distinction in research cybersecurity; and extensive experience in offering distance education programs and outreach with K-12 programs. The agreement also includes $4,300,000 above the request for the Cybersecurity Education and Training Assistance Program (CETAP), which was proposed for elimination, and $2,500,000 above the request to further expand and initiate cybersecurity education programs, including CETAP, which improve education delivery methods for K-12 students, teachers, counselors and post-secondary institutions and encourage students to pursue cybersecurity careers.
  • Further, the agreement includes $2,500,000 above the request to support CISA’s role with the National Institute of Standards and Technology, National Initiative for Cybersecurity Education Challenge project or for similar efforts to address shortages in the cybersecurity workforce through the development of content and curriculum for colleges, universities, and other higher education institutions.
  • Lastly, the agreement includes $800,000 above the request for a review of CISA’s program to build a national cybersecurity workforce. CISA is directed to enter into a contract for this review with the National Academy of Public Administration, or a similar non-profit organization, within 45 days of the date of enactment of this Act. The review shall assess: whether the partnership models under development by CISA are positioned to be effective and scalable to address current and anticipated needs for a highly capable cybersecurity workforce; whether other existing partnership models, including those used by other agencies and private industry, could usefully augment CISA’s strategy; and the extent to which CISA’s strategy has made progress on workforce development objectives, including excellence, scale, and diversity. A report with the findings of the review shall be provided to the Committees not later than 270 days after the date of enactment of this Act.
  • Cyber QSMO.-To help improve efforts to make strategic cybersecurity services available to federal agencies, the agreement provides $1,514,000 above the request to sustain and enhance prior year investments. As directed in the House report and within the funds provided, CISA is directed to work with the Management Directorate to conduct a crowd-sourced security testing program that uses technology platforms and ethical security researchers to test for vulnerabilities on departmental systems. In addition, not later than 90 days after the date of enactment of this Act, CISA is directed to brief the Committees on opportunities for state and local governments to leverage shared services provided through the Cyber QSMO or a similar capability and to explore the feasibility of executing a pilot program focused on this goal.
  • Cyber Threats to Critical Election Infrastructure.-The briefing required in House Report 116–458 regarding CISA’s efforts related to the 2020 elections shall be delivered not later than 60 days after the date of enactment of this Act. CISA is directed to continue working with SLTT stakeholders to implement election security measures.
  • Cybersecurity Workforce.-By not later than September 30, 2021, CISA shall provide a joint briefing, in conjunction with the Department of Commerce and other appropriate federal departments and agencies, on progress made to date on each recommendation put forth in Executive Order 13800 and the subsequent “Supporting the Growth and Sustainment of the Nation’s Cybersecurity Workforce” report.
  • Hunt and Incident Response Teams.-The agreement includes an increase of $3,000,000 above fiscal year 2020 funding levels to expand CISA’s threat hunting capabilities.
  • Joint Cyber Planning Office (JCPO).-The agreement provides an increase of $10,568,000 above the request to establish a JCPO to bring together federal and SLTT governments, industry, and international partners to strategically and operationally counter nation-state cyber threats. CISA is directed to brief the Committees not later than 60 days after the date of enactment of this Act on a plan for establishing the JCPO, including a budget and hiring plan; a description of how JCPO will complement and leverage other CISA capabilities; and a strategy for partnering with the aforementioned stakeholders.
  • Multi-State Information Sharing and Analysis Center (MS-ISAC).-The agreement provides $5,148,000 above the request for the MS-ISAC to continue enhancements to SLTT election security support, and furthers ransomware detection and response capabilities, including endpoint detection and response, threat intelligence platform integration, and malicious domain activity blocking.
  • Software Assurance Tools.-Not later than 90 days after the date of enactment of this Act, CISA, in conjunction with the Science and Technology Directorate, is directed to brief the Committees on their collaborative efforts to transition cyber-related research and development initiatives into operational tools that can be used to provide continuous software assurance. The briefing should include an explanation for any completed projects and activities that were not considered viable for practice or were considered operationally self-sufficient. Such briefing shall include software assurance projects, such as the Software Assurance Marketplace.
  • Updated Lifecycle Cost Estimates.–CISA is directed to provide a briefing, not later than 60 days after the date of enactment of this Act, regarding the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) program lifecycles. The briefing shall clearly describe the projected evolution of both programs by detailing the assumptions that have changed since the last approved program cost and schedule baseline, and by describing the plans to address such changes. In addition, the briefing shall include an analysis of alternatives for aligning vulnerability management, incident response, and NCPS capabilities. Finally, CISA is directed to provide a report not later than 120 days after the date of enactment of this Act with updated five-year program costs and schedules which is congruent with projected capability gaps across federal civilian systems and networks.
  • Vulnerability Management.-The agreement provides $9,452,000 above fiscal year 2020 levels to continue reducing the 12-month backlog in vulnerability assessments. The agreement also provides an increase of $8,000,000 above the request to address the increasing number of identified and reported vulnerabilities in the software and hardware that operates critical infrastructure. This investment will improve capabilities to identify, analyze, and share information about known vulnerabilities and common attack patterns, including through the National Vulnerability Database, and to expand the coordinated responsible disclosure of vulnerabilities.

There are a pair of provisions aimed at the People’s Republic of China (PRC) in Division B (i.e. the FY 2021 Commerce-Justice-Science Appropriations Act):

  • Section 514 prohibits funds for acquisition of certain information systems unless the acquiring department or agency has reviewed and assessed certain risks. Any acquisition of such an information system is contingent upon the development of a risk mitigation strategy and a determination that the acquisition is in the national interest. Each department or agency covered under section 514 shall submit a quarterly report to the Committees on Appropriations describing reviews and assessments of risk made pursuant to this section and any associated findings or determinations.
  • Section 526 prohibits the use of funds by National Aeronautics and Space Administration (NASA), Office of Science and Technology Policy (OSTP), or the National Space Council (NSC) to engage in bilateral activities with China or a Chinese-owned company or effectuate the hosting of official Chinese visitors at certain facilities unless the activities are authorized by subsequent legislation or NASA, OSTP, or NSC have made a certification…

The National Institute of Standards and Technology (NIST) is tasked with a number of duties, most of which relate to current or ongoing efforts in artificial intelligence (AI), cybersecurity, and the Internet of Things:

  • Artificial Intelligence (AI).-The agreement includes no less than $6,500,000 above the fiscal year 2020 level to continue NIST’s research efforts related to AI and adopts House language on Data Characterization Standards in AI. House language on Framework for Managing AI Risks is modified to direct NIST to establish a multi-stakeholder process for the development of an AI Risk Management Framework regarding the reliability, robustness, and trustworthiness of AI systems. Further, within 180 days of enactment of this Act, NIST shall establish the process by which it will engage with stakeholders throughout the multi-year framework development process.
  • Cybersecurity.-The agreement includes no less than the fiscal year 2020 enacted level for cybersecurity research, outreach, industry partnerships, and other activities at NIST, including the National Cybersecurity Center of Excellence (NCCoE) and the National Initiative for Cybersecurity Education (NICE). Within the funds provided, the agreement encourages NIST to establish additional NICE cooperative agreements with regional alliances and multi-stakeholder partnerships for cybersecurity workforce and education.
  • Cybersecurity of Genomic Data.-The agreement includes no less than $1,250,000 for NIST and NCCoE to initiate a use case, in collaboration with industry and academia, to research the cybersecurity of personally identifiable genomic data, with a particular focus on better securing deoxyribonucleic acid sequencing techniques, including clustered regularly interspaced short palindromic repeat (CRISPR) technologies, and genomic data storage architectures from cyber threats. NIST and NCCoE should look to partner with entities who have existing capability to research and develop state-of-the-art cybersecurity technologies for the unique needs of genomic and biomedical-based systems.
  • Industrial Internet of Things (IIoT).-The agreement includes no less than the fiscal year 2020 enacted amount for the continued development of an IIoT cybersecurity research initiative and to partner, as appropriate, with academic entities and industry to improve the sustainable security of IIoT devices in industrial settings.

NIST would receive a modest increase in funding, from $1.034 billion in FY 2020 to $1.0345 billion in FY 2021.

The National Telecommunications and Information Administration (NTIA) would be provided $45.5 million, and “the agreement provides (1) up to $7,500,000 for broadband mapping in coordination with the Federal Communications Commission (FCC); (2) no less than the fiscal year 2020 enacted amount for Broadband Programs; (3) $308,000 for Public Safety Communications; and (4) no less than $3,000,000 above the fiscal year 2020 enacted level for Advanced Communications Research.” The agency’s FY 2021 funding is higher than last fiscal year’s level of a bit more than $40 million but far less than the Trump Administration’s request of more than $70 million.

Regarding NTIA programmatic language, the bill provides:

  • Further, the agreement directs the additional funds for Advanced Communications Research be used to procure and maintain cutting-edge equipment for research and testing of the next generation of communications technologies, including 5G, as well as to hire staff as needed. The agreement further encourages NTIA to improve the deployment of 5G and spectrum sharing through academic partnerships to accelerate the development of low-cost sensors. For fiscal year 2021, NTIA is directed to follow prior year report language, included in Senate Report 116-127 and adopted in Public Law 116-93, on the following topics: Federal Spectrum Management, Spectrum Management for Science, and the Internet Corporation for Assigned Names and Numbers (ICANN).
  • Spectrum Management System.-The agreement encourages NTIA and the Department to consider alternative proposals to fully fund the needed upgrades to its spectrum management system, including options outside of direct appropriations, and is directed to brief the Committees regarding possible alternative options no later than 90 days after enactment of this Act.
  • Next Generation Broadband in Rural Areas.-NTIA is encouraged to ensure that deployment of last-mile broadband infrastructure is targeted to areas that are currently unserved or underserved, and to utilize public-private partnerships and projects where Federal funding will not exceed 50 percent of a project’s total cost where practicable.
  • National Broadband Map Augmentation.-NTIA is directed to engage with rural and Tribal communities to further enhance the accuracy of the national broadband availability map. NTIA should include in its fiscal year 2022 budget request an update on rural-and Tribal-related broadband availability and access trends, challenges, and Federal actions to achieve equitable access to broadband services in currently underserved communities throughout the Nation. Furthermore, NTIA is encouraged, in coordination with the FCC, to develop and promulgate a standardized process for collecting data from State and local partners.
  • Domain Name Registration.-NTIA is directed, through its position within the Governmental Advisory Committee to work with ICANN to expedite the establishment of a global access model that provides law enforcement, intellectual property rights holders, and third parties with timely access to accurate domain name registration information for legitimate purposes. NTIA is encouraged, as appropriate, to require registrars and registries based in the United States to collect and make public accurate domain name registration information.

The Federal Trade Commission (FTC) would receive $351 million, an increase of $20 million over FY 2020. The final bill includes this policy provision for the FTC to heed:

  • Resources for Data Privacy and Security. -The agreement urges the FTC to conduct a comprehensive internal assessment measuring the agency’s current efforts related to data privacy and security while separately identifying all resource-based needs of the FTC to improve in these areas. The agreement also urges the FTC to provide a report describing the assessment’s findings to the Committees within 180 days of enactment of this Act.

The Federal Communications Commission (FCC) would see a larger increase in funding for agency operations than the FTC, going from $339 million in FY 2020 to $374 million in FY 2021. However, $33 million of the increase is earmarked for implementing the “Broadband DATA Act” (P.L.116-130) along with the $65 million in COVID-19 supplemental funding for the same purpose. The FY 2021 omnibus directs the FCC on a range of policy issues:

  • Broadband Maps.-In addition to adopting the House report language on Broadband Maps, the agreement provides substantial dedicated resources for the FCC to implement the Broadband DATA Act. The FCC is directed to submit a report to the Committees on Appropriations within 90 days of enactment of this Act providing a detailed spending plan for these resources. In addition, the FCC, in coordination with the NTIA, shall outline the specific roles and responsibilities of each agency as it relates to the National Broadband Map and implementation of the Broadband DATA Act. The FCC is directed to report in writing to the Committees every 30 days on the date, amount, and purpose of any new obligation made for broadband mapping and any updates to the broadband mapping spending plan.
  • Lifeline Service. In lieu of the House report language on Lifeline Service, the agreement notes recent action by the FCC to partially waive its rules updating the Lifeline program’s minimum service standard for mobile broadband usage in light of the large increase to the standard that would have gone into effect on Dec. 1, 2020, and the increased reliance by Americans on mobile broadband as a result of the pandemic. The FCC is urged to continue to balance the Lifeline program’s goals of accessibility and affordability.
  • 5G Fund and Rural America.-The agreement remains concerned about the feasible deployment of 5G in rural America. Rural locations will likely run into geographic barriers and infrastructure issues preventing the robust deployment of 5G technology, just as they have faced with 4G. The FCC’s proposed 5G Fund fails to provide adequate details or a targeted spend plan on creating seamless coverage in the most rural parts of the Nation. Given these concerns, the FCC is directed to report in writing on: (1) its current and future plans for prioritizing deployment of 4G coverage in rural areas, (2) its plans for 5G deployment in rural areas, and (3) its plan for improving the mapping and long-term tracking of coverage in rural areas.
  • 6 Gigahertz.-As the FCC has authorized unlicensed use of the 6 gigahertz band, the agreement expects the Commission to ensure its plan does not result in harmful interference to incumbent users or impact critical infrastructure communications systems. The agreement is particularly concerned about the potential effects on the reliability of the electric transmission and distribution system. The agreement expects the FCC to ensure any mitigation technologies are rigorously tested and found to be effective in order to protect the electric transmission system. The FCC is directed to provide a report to the Committees within 90 days of enactment of this Act on its progress in ensuring rigorous testing related to unlicensed use of the 6 gigahertz band.
  • Rural Broadband.-The agreement remains concerned that far too many Americans living in rural and economically disadvantaged areas lack access to broadband at speeds necessary to fully participate in the Internet age. The agreement encourages the agency to prioritize projects in underserved areas, where the infrastructure to be installed provides access at download and upload speeds comparable to those available to Americans in urban areas. The agreement encourages the FCC to avoid efforts that could duplicate existing networks and to support deployment of last-mile broadband infrastructure to underserved areas. Further, the agreement encourages the agency to prioritize projects financed through public-private partnerships.
  • Contraband Cell Phones. -The agreement notes continued concern regarding the exploitation of contraband cell phones in prisons and jails nationwide. The agreement urges the FCC to act on the March 24, 2017 Further Notice of Proposed Rulemaking regarding combating contraband wireless devices. The FCC should consider all legally permissible options, including the creation, or use, of “quiet or no service zones,” geolocation-based denial, and beacon technologies to geographically appropriate correctional facilities. In addition, the agreement encourages the FCC to adopt a rules-based approach to cellphone disabling that would require immediate disabling by a wireless carrier upon proper identification of a contraband device. The agreement recommends that the FCC move forward with its suggestion in the Fiscal Year 2019 report to this Committee, noting that “additional field testing of jamming technology will provide a better understanding of the challenges and costs associated with the proper deployment of jamming system.” The agreement urges the FCC to use available funds to coordinate rigorous Federal testing of jamming technology and coordinate with all relevant stakeholders to effectively address this urgent problem.
  • Next-Generation Broadband Networks for Rural America.-Deployment of broadband and telecommunications services in rural areas is imperative to support economic growth and public safety. However, due to geographical challenges facing mobile connectivity and fiber providers, connectivity in certain areas remains challenging. Next generation satellite-based technology is being developed to deliver direct satellite to cellular capability. The FCC is encouraged to address potential regulatory hurdles, to promote private sector development and implementation of innovative, next generation networks such as this, and to accelerate broadband and telecommunications access to all Americans.

$635 million is provided for a Department of Agriculture rural development pilot program, and the Secretary will need to explain how he or she will use authority provided in the last farm bill to expand broadband:

  • The agreement provides $635,000,000 to support the ReConnect pilot program to increase access to broadband connectivity in unserved rural communities and directs the Department to target grants and loans to areas of the country with the largest broadband coverage gaps. These projects should utilize technology that will maximize coverage of broadband with the most benefit to taxpayers and the rural communities served. The agreement notes stakeholder concerns that the ReConnect pilot does not effectively recognize the unique challenges and opportunities that different technologies, including satellite, provide to delivering broadband in noncontiguous States or mountainous terrain and is concerned that providing preference to 100 Mbps symmetrical service unfairly disadvantages these communities by limiting the deployment of other technologies capable of providing service to these areas.
  • The Agriculture Improvement Act of 2018 (Public Law 115-334) included new authorities for rural broadband programs that garnered broad stakeholder support as well as bipartisan, bicameral agreement in Congress. Therefore, the Secretary is directed to provide a report on how the Department plans to utilize these authorities to deploy broadband connectivity to rural communities.

In Division M of the package, the “Coronavirus Response and Relief Supplemental Appropriations Act, 2021,” there are provisions related to broadband policy and funding. The bill created a $3.2 billion program to help low-income Americans pay for internet service and buy devices for telework or distance education. The “Emergency Broadband Benefit Program” is established at the FCC, “under which eligible households may receive a discount of up to $50, or up to $75 on Tribal lands, off the cost of internet service and a subsidy for low-cost devices such as computers and tablets” according to a House Appropriations Committee summary. This funding is far short of what House Democrats wanted. And yet, this program aims to help those on the wrong side of the digital divide during the pandemic.

Moreover, this legislation also establishes two grant programs at the NTIA designed to help provide broadband on tribal lands and in rural areas. $1 billion is provided for the former and $300 million for the latter, with the funds going to tribal, state, and local governments to obtain services from private sector providers. The $1 billion for tribal lands allows for greater flexibility in what the funds are ultimately spent on, with the $300 million for underserved rural areas being restricted to broadband deployment. Again, these funds are aimed at bridging the disparity in broadband service exposed and exacerbated during the pandemic.

Congress also provided funds for the FCC to reimburse smaller telecommunications providers for removing and replacing risky telecommunications equipment from the People’s Republic of China (PRC). Following the enactment of the “Secure and Trusted Communications Networks Act of 2019” (P.L.116-124), which codified and added to an FCC regulatory effort to address the risks posed by Huawei and ZTE equipment in United States (U.S.) telecommunications networks, there was pressure in Congress to provide the funds necessary to help carriers meet the requirements of the program. The FY 2021 omnibus appropriates $1.9 billion for this program. In another, largely unrelated tranche of funding, the package provides the aforementioned $65 million for the FCC to implement the “Broadband DATA Act.”

Division Q contains text similar to the “Cybersecurity and Financial System Resilience Act of 2019” (H.R.4458) that would require “the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, and National Credit Union Administration to annually report on efforts to strengthen cybersecurity by the agencies, financial institutions they regulate, and third-party service providers.”

Division U contains two bills pertaining to technology policy:

  • Title I. The AI in Government Act of 2020. This title codifies the AI Center of Excellence within the General Services Administration to advise and promote the efforts of the federal government in developing innovative uses of artificial intelligence (AI) and competency in the use of AI in the federal government. The title also requires that the Office of Personnel Management identify key skills and competencies needed for federal positions related to AI and establish an occupational series for positions related to AI.
  • Title IX. The DOTGOV Act. This title transfers the authority to manage the .gov internet domain from the General Services Administration to the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. The .gov internet domain shall be available to any Federal, State, local, or territorial government entity, or other publicly controlled entity, subject to registration requirements established by the Director of CISA and approved by the Director of the Office of Management and Budget.

Division W is the FY 2021 Intelligence Authorization Act with the following salient provisions:

  • Section 323. Report on signals intelligence priorities and requirements. Section 323 requires the Director of National Intelligence (DNI) to submit a report detailing signals intelligence priorities and requirements subject to Presidential Policy Directive-28 (PPD-28) that stipulates “why, whether, when, and how the United States conducts signals intelligence activities.” PPD-28 reformed how the National Security Agency (NSA) and other Intelligence Community (IC) agencies conducted signals intelligence, specifically collection of cellphone and internet data, after former NSA contractor Edward Snowden exposed the scope of the agency’s programs.
  • Section 501. Requirements and authorities to improve education in science, technology, engineering, arts, and mathematics. Section 501 ensures that the Director of the Central Intelligence Agency (CIA) has the legal authorities required to improve the skills in science, technology, engineering, arts, and mathematics (known as STEAM) necessary to meet long-term national security needs.
  • Section 502. Seedling investment in next-generation microelectronics in support of artificial intelligence. Section 502 requires the DNI, acting through the Director of the Intelligence Advanced Research Projects Activity, to award contracts or grants, or enter into other transactions, to encourage microelectronics research.
  • Section 601. Report on attempts by foreign adversaries to build telecommunications and cybersecurity equipment and services for, or to provide them to, certain U.S. allies and partners. Section 601 requires the CIA, NSA, and Defense Intelligence Agency (DIA) to submit a joint report that describes United States intelligence sharing and military posture in Five Eyes countries that currently have or intend to use adversary telecommunications or cybersecurity equipment, especially as provided by China or Russia, with a description of potential vulnerabilities of that information and an assessment of mitigation options.
  • Section 602. Report on foreign use of cyber intrusion and surveillance technology. Section 602 requires the DNI to submit a report on the threats posed by foreign governments and foreign entities using and appropriating commercially available cyber intrusion and other surveillance technology.
  • Section 603. Reports on recommendations of the Cyberspace Solarium Commission. Section 603 requires the ODNI and representatives of other agencies to report to Congress their assessment of the recommendations submitted by the Cyberspace Solarium Commission pursuant to Section 1652(j) of the John S. McCain National Defense Authorization Act (NDAA) for Fiscal Year 2019, and to describe actions that each agency expects to take to implement these recommendations.
  • Section 604. Assessment of critical technology trends relating to artificial intelligence, microchips, and semiconductors and related matters. Section 604 requires the DNI to complete an assessment of export controls related to artificial intelligence (AI), microchips, advanced manufacturing equipment, and other AI-enabled technologies, including the identification of opportunities for further cooperation with international partners.
  • Section 605. Combating Chinese influence operations in the United States and strengthening civil liberties protections. Section 605 provides additional requirements to annual reports on Influence Operations and Campaigns in the United States by the Chinese Communist Party (CCP) by mandating an identification of influence operations by the CCP against the science and technology sector in the United States. Section 605 also requires the FBI to create a plan to increase public awareness of influence activities by the CCP. Finally, section 605 requires the FBI, in consultation with the Assistant Attorney General for the Civil Rights and the Chief Privacy and Civil Liberties Officer of the Department of Justice, to develop recommendations to strengthen relationships with communities targeted by the CCP and to build trust with such communities through local and regional grassroots outreach.
  • Section 606. Annual report on corrupt activities of senior officials of the CCP. Section 606 requires the CIA, in coordination with the Department of Treasury’s Office of Intelligence and Analysis and the FBI, to submit to designated congressional committees annually through 2025 a report that describes and assesses the wealth and corruption of senior officials of the CCP, as well as targeted financial measures, including potential targets for sanctions designation. Section 606 further expresses the Sense of Congress that the United States should undertake every effort and pursue every opportunity to expose the corruption and illicit practices of senior officials of the CCP, including President Xi Jinping.
  • Section 607. Report on corrupt activities of Russian and other Eastern European oligarchs. Section 607 requires the CIA, in coordination with the Department of the Treasury’s Office of Intelligence and Analysis and the FBI, to submit to designated congressional committees and the Under Secretary of State for Public Diplomacy a report that describes the corruption and corrupt or illegal activities among Russian and other Eastern European oligarchs who support the Russian government and Russian President Vladimir Putin, and the impact of those activities on the economy and citizens of Russia. Section 607 further requires the CIA, in coordination with the Department of Treasury’s Office of Intelligence and Analysis, to describe potential sanctions that could be imposed for such activities.
  • Section 608. Report on biosecurity risk and disinformation by the CCP and the PRC. Section 608 requires the DNI to submit to the designated congressional committees a report identifying whether and how CCP officials and the Government of the People’s Republic of China may have sought to suppress or exploit for national advantage information regarding the novel coronavirus pandemic, including specific related assessments. Section 608 further provides that the report shall be submitted in unclassified form, but may have a classified annex.
  • Section 612. Research partnership on activities of People’s Republic of China. Section 612 requires the Director of the National Geospatial-Intelligence Agency (NGA) to seek to enter into a partnership with an academic or non-profit research institution to carry out joint unclassified geospatial intelligence analyses of the activities of the People’s Republic of China that pose national security risks to the United States, and to make publicly available unclassified products relating to such analyses.

Division Z would tweak a data center energy efficiency and energy savings program overseen by the Secretary of Energy and the Administrator of the Environmental Protection Agency that could impact the Office of Management and Budget’s (OMB) government-wide program. Specifically, “Section 1003 requires the development of a metric for data center energy efficiency, and requires the Secretary of Energy, Administrator of the Environmental Protection Agency (EPA), and Director of the Office of Management and Budget (OMB) to maintain a data center energy practitioner program and open data initiative for federally owned and operated data center energy usage.” There is also language that would require the U.S. government to buy and use more energy-efficient information technology (IT): “each Federal agency shall coordinate with the Director [of OMB], the Secretary, and the Administrator of the Environmental Protection Agency to develop an implementation strategy (including best-practices and measurement and verification techniques) for the maintenance, purchase, and use by the Federal agency of energy-efficient and energy-saving information technologies at or for facilities owned and operated by the Federal agency, taking into consideration the performance goals.”
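The statute leaves the choice of metric to the agencies. For illustration only, the most widely used data center efficiency metric today is power usage effectiveness (PUE): total facility energy divided by the energy delivered to IT equipment. A minimal sketch of the calculation (the bill itself does not prescribe PUE or any particular formula):

```python
# Power usage effectiveness (PUE): total facility energy divided by
# energy delivered to IT equipment. A PUE of 1.0 is the theoretical
# ideal; everything above it is cooling, power distribution losses,
# and other overhead. (Illustrative only; the bill does not name a
# specific metric.)

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh overall to run 1,000 kWh of IT load
# has a PUE of 1.5: a third of its energy goes to overhead.
print(pue(1500, 1000))
```

Whatever metric the Department of Energy, EPA, and OMB settle on, the open data initiative the section requires would presumably report figures of this kind for federally owned and operated data centers.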

Division FF contains telecommunications provisions:

  • Section 902. Don’t Break Up the T-Band Act of 2020. Section 902 repeals the requirement for the FCC to reallocate and auction the 470 to 512 megahertz band, commonly referred to as the T-band. In certain urban areas, the T-band is utilized by public-safety entities. It also directs the FCC to implement rules to clarify acceptable expenditures on which 9-1-1 fees can be spent, and creates a strike force to consider how the Federal Government can end 9-1-1 fee diversion.
  • Section 903. Advancing Critical Connectivity Expands Service, Small Business Resources, Opportunities, Access, and Data Based on Assessed Need and Demand (ACCESS BROADBAND) Act. Section 903 establishes the Office of Internet Connectivity and Growth (Office) at the NTIA. This Office would be tasked with performing certain responsibilities related to broadband access, adoption, and deployment, such as performing public outreach to promote access and adoption of high-speed broadband service, and streamlining and standardizing the process for applying for Federal broadband support. The Office would also track Federal broadband support funds, and coordinate Federal broadband support programs within the Executive Branch and with the FCC to ensure unserved Americans have access to connectivity and to prevent duplication of broadband deployment programs.
  • Section 904. Broadband Interagency Coordination Act. Section 904 requires the Federal Communications Commission (FCC), the National Telecommunications and Information Administration (NTIA), and the Department of Agriculture to enter into an interagency agreement to coordinate the distribution of federal funds for broadband programs, to prevent duplication of support and ensure stewardship of taxpayer dollars. The agreement must cover, among other things, the exchange of information about project areas funded under the programs and the confidentiality of such information. The FCC is required to publish and collect public comments about the agreement, including regarding its efficacy and suggested modifications.
  • Section 905. Beat CHINA for 5G Act of 2020. Section 905 directs the President, acting through the Assistant Secretary of Commerce for Communications and Information, to withdraw or modify federal spectrum assignments in the 3450 to 3550 megahertz band, and directs the FCC to begin a system of competitive bidding to permit non-Federal, flexible-use services in a portion or all of such band no later than December 31, 2021.

Section 905 would countermand the White House’s efforts to auction off an ideal part of spectrum for 5G (see here for analysis of the August 2020 announcement). A number of congressional and Trump Administration stakeholders were alarmed by what they saw as a push to bestow a windfall on a private sector company in the rollout of 5G.

Title XIV of Division FF would allow the FTC to seek civil fines of more than $43,000 per violation during the duration of the public health emergency arising from the pandemic “for unfair and deceptive practices associated with the treatment, cure, prevention, mitigation, or diagnosis of COVID–19 or a government benefit related to COVID-19.”

Finally, Division FF is the vehicle for the “American COMPETES Act” that:

directs the Department of Commerce and the FTC to conduct studies and submit reports on technologies including artificial intelligence, the Internet of Things, quantum computing, blockchain, advanced materials, unmanned delivery services, and 3-D printing. The studies include requirements to survey each industry and report recommendations to help grow the economy and safely implement the technology.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by forcal35 from Pixabay

Final NDAA Agreement, Part II

There are AI, 5G, and supply chain provisions in the national security policy bill the Armed Services Committees have agreed upon.

So, it appears I failed to include all the technology goodies to be found in the final FY 2021 National Defense Authorization Act (NDAA). And so, I will cover the provisions I missed yesterday in the conference report to accompany the “William M. “Mac” Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395). For example, there are artificial intelligence (AI), 5G, and supply chain provisions.

Notably, the final bill includes the House Science, Space, and Technology Committee’s “National Artificial Intelligence Initiative Act of 2020” (H.R.6216). In the Joint Explanatory Statement, the conferees asserted:

The conferees believe that artificial intelligence systems have the potential to transform every sector of the United States economy, boosting productivity, enhancing scientific research, and increasing U.S. competitiveness and that the United States government should use this Initiative to enable the benefits of trustworthy artificial intelligence while preventing the creation and use of artificial intelligence systems that behave in ways that cause harm. The conferees further believe that such harmful artificial intelligence systems may include high-risk systems that lack sufficient robustness to prevent adversarial attacks; high-risk systems that harm the privacy or security of users or the general public; artificial general intelligence systems that become self-aware or uncontrollable; and artificial intelligence systems that unlawfully discriminate against protected classes of persons, including on the basis of sex, race, age, disability, color, creed, national origin, or religion. Finally, the conferees believe that the United States must take a whole of government approach to leadership in trustworthy artificial intelligence, including through coordination between the Department of Defense, the Intelligence Community, and the civilian agencies.

H.R.6216 directs the President to establish the National Artificial Intelligence Initiative that would:

  • Ensure the U.S. continues to lead in AI research and development (R&D)
  • Lead efforts throughout the world to develop and use “trustworthy AI systems” in both the public and private sectors
  • Prepare to assist U.S. workers for the coming integration and use of AI throughout the U.S., and
  • Coordinate AI research, development, and demonstration activities across the federal government, including national security agencies.

The President would have a variety of means at his or her discretion in effectuating those goals, including existing authority to ask Congress for funding and to use Executive Office agencies to manage the authority and funding Congress provides.

Big picture, H.R. 6216 would require better coordination of federal AI initiatives, research, and funding, and more involvement in the development of voluntary, consensus-based standards for AI. Much of this would happen through the standing up of a new “National Artificial Intelligence Initiative Office” by the Office of Science and Technology Policy (OSTP) in the White House. This new entity would be the locus of AI activities and programs in the United States’ (U.S.) government with the ultimate goal of ensuring the nation is the world’s foremost developer and user of the new technology.

Moreover, OSTP would “acting through the National Science and Technology Council…establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative.” This body would “provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities, development of voluntary consensus standards and guidelines for research, development, testing, and adoption of ethically developed, safe, and trustworthy artificial intelligence systems, and education and training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative.” The committee would need to “develop a strategic plan for AI” within two years and update it every three years thereafter. Moreover, the committee would need to “propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget (OMB) that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative.” However, OMB would be under no obligation to take notice of this proposal save for pressure from AI stakeholders in Congress or AI champions in any given Administration. The Secretary of Commerce would create a ‘‘National Artificial Intelligence Advisory Committee” to advise the President and National Artificial Intelligence Initiative Office on a range of AI policy matters. In the bill as added to the House’s FY 2021 NDAA, it was to have been the Secretary of Energy.

Federal agencies would be permitted to award funds to new Artificial Intelligence Research Institutes to pioneer research in any number of AI fields or considerations. The bill does not authorize any set amount of money for this program and instead leaves any funding decisions to the Appropriations Committees. The National Institute of Standards and Technology (NIST) must “support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems” among other duties. NIST “shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for the trustworthiness of artificial intelligence systems.” NIST would also “develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies.”

The National Science Foundation (NSF) would need to “fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible non-profit organizations (or consortia thereof).” The Department of Energy must “carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department.” This department would also be tasked with advancing “expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations.”

According to a fact sheet issued by the House Science, Space, and Technology Committee, [t]he legislation will:

  • Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
  • Create an advisory committee to better inform the Coordination Committee’s strategic plan, track the state of the science around artificial intelligence, and ensure the Initiative is meeting its goals.
  • Create a network of AI institutes, coordinated through the National Science Foundation, that any Federal department or agency could fund to create partnerships between academia and the public and private sectors to accelerate AI research focused on an economic sector, social sector, or on a cross-cutting AI challenge.
  • Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST) and require NIST to create a framework for managing risks associated with AI systems and best practices for sharing data to advance trustworthy AI systems.
  • Support research at the National Science Foundation (NSF) across a wide variety of AI related research areas to both improve AI systems and use those systems to advance other areas of science. This section requires NSF to include an obligation for an ethics statement for all research proposals to ensure researchers are considering, and as appropriate, mitigating potential societal risks in carrying out their research.
  • Support education and workforce development in AI and related fields, including through scholarships and traineeships at NSF.
  • Support AI research and development efforts at the Department of Energy (DOE), utilize DOE computing infrastructure for AI challenges, promote technology transfer, data sharing, and coordination with other Federal agencies, and require an ethics statement for DOE funded research as required at NSF.
  • Require studies to better understand workforce impacts and opportunities created by AI, and identify the computing resources necessary to ensure the United States remains competitive in AI.

A provision would expand the scope of the biannual reports the DOD must submit to Congress on the Joint Artificial Intelligence Center (JAIC) to include the Pentagon’s efforts to develop or contribute to efforts to institute AI standards and more detailed information on uniformed DOD members who serve at the JAIC. Other language would revamp how the Under Secretary of Defense for Research and Engineering shall manage efforts and procurements between the DOD and the private sector on AI and other technology with cutting edge national security applications. The new emphasis of the program would be to buy mature AI to support DOD missions, allowing DOD components to directly use AI and machine learning to address operational problems, speeding up the development, testing, and deployment of AI technology and capabilities, and overseeing and managing any friction between DOD agencies and components over AI development and use. This section also spells out which DOD officials should be involved with this program and how the JAIC fits into the picture. This language and other provisions suggest the DOD may have trouble in coordinating AI activities and managing infighting, at least in the eyes of the Armed Services Committees.

Moreover, the JAIC would be given a new Board of Advisors to advise the Secretary of Defense and JAIC Director on a range of AI issues. However, as the Secretary shall appoint the members of the board, all of whom must be from outside the Pentagon, this organ would seem to be a means of the Office of the Secretary asserting greater control over the JAIC.

And yet, the Secretary is also directed to delegate acquisition authority to the JAIC, permitting it to operate with the same independence as a DOD agency. The JAIC Director will need to appoint an acquisition executive to manage acquisition and policy inside and outside the DOD. $75 million would be authorized a year for these activities, and the Secretary needs to draft and submit an implementation plan to Congress and conduct a demonstration before proceeding.

The DOD must identify five use cases of when AI-enabled systems have improved the functioning of the Department in handling management functions in implementing the National Defense Strategy and then create prototypes and technology pilots to utilize commercially available AI capabilities to bolster the use cases.

Within six months of enactment, the DOD must determine whether it currently has the resources, capability, and know how to ensure that any AI bought has been ethically and responsibly developed. Additionally, the DOD must assess how it can install ethical AI standards in acquisitions and supply chains.

The Secretary is provided the authority to convene a steering committee on emerging technology and national security threats comprised of senior DOD officials to decide on how the Department can best adapt to and buy new technology to ensure U.S. military superiority. This body would also investigate the new technology used by adversaries and how to address and counter any threats. For this steering committee, emerging technology is defined as:

Technology determined to be in an emerging phase of development by the Secretary, including quantum information science and technology, data analytics, artificial intelligence, autonomous technology, advanced materials, software, high performance computing, robotics, directed energy, hypersonics, biotechnology, medical technologies, and such other technology as may be identified by the Secretary.

Not surprisingly, the FY 2021 NDAA has provisions on 5G. Most notably, the Secretary of Defense must assess and mitigate any risks presented by “at-risk” 5G or 6G systems in other nations before a major weapons system or a battalion, squadron, or naval combatant can be based there. The Secretary must take into account any steps the nation is taking to address risk, those steps the U.S. is taking, any agreements in place to mitigate risks, and other steps. This provision names Huawei and ZTE as “at-risk vendors.” This language may be another means by which the U.S. can persuade other nations not to buy and install technology from these People’s Republic of China (PRC) companies.

The Under Secretary of Defense for Research and Engineering and a cross-functional team would need to develop a plan to transition the DOD to 5G throughout the Department and its components. Each military department inside the DOD would get to manage its own 5G acquisition with the caveat that the Secretary would need to establish a telecommunications security program to address 5G security risks in the DOD. The Secretary would also be tasked with conducting a demonstration project to “evaluate the maturity, performance, and cost of covered technologies to provide additional options for providers of fifth-generation wireless network services” for Open RAN (aka oRAN) and “one or more massive multiple-input, multiple-output radio arrays, provided by one or more companies based in the United States, that have the potential to compete favorably with radios produced by foreign companies in terms of cost, performance, and efficiency.”

The service departments would need to submit reports to the Secretary on how they are assessing and mitigating and reporting to the DOD on the following risks to acquisition programs:

  • Technical risks in engineering, software, manufacturing and testing.
  • Integration and interoperability risks, including complications related to systems working across multiple domains while using machine learning and artificial intelligence capabilities to continuously change and optimize system performance.
  • Operations and sustainment risks, including as mitigated by appropriate sustainment planning earlier in the lifecycle of a program, access to technical data, and intellectual property rights.
  • Workforce and training risks, including consideration of the role of contractors as part of the total workforce.
  • Supply chain risks, including cybersecurity, foreign control and ownership of key elements of supply chains, and the consequences that a fragile and weakening defense industrial base, combined with barriers to industrial cooperation with allies and partners, pose for delivering systems and technologies in a trusted and assured manner.

Moreover, “[t]he Under Secretary of Defense for Acquisition and Sustainment, in coordination with the Chief Information Officer of the Department of Defense, shall develop requirements for appropriate software security criteria to be included in solicitations for commercial and developmental solutions and the evaluation of bids submitted in response to such solicitations, including a delineation of what processes were or will be used for a secure software development life cycle.”

The Armed Services Committees are directing the Secretary to follow up on a report submitted to the President per Executive Order 13806 on strengthening Defense Industrial Base (DIB) manufacturing and supply chain resiliency. The DOD must submit “additional recommendations regarding United States industrial policies….[that] shall consist of specific executive actions, programmatic changes, regulatory changes, and legislative proposals and changes, as appropriate.”

The DOD would also need to submit an annex to an annual report to Congress on “strategic and critical materials, including the gaps and vulnerabilities in supply chains of such materials.”

There is language that would change how the DOD manages the production of microelectronics and related supply chain risk. The Pentagon would also need to investigate how to commercialize its intellectual property for microelectronic R&D. The Department of Commerce would need to “assess the capabilities of the United States industrial base to support the national defense in light of the global nature of the supply chain and significant interdependencies between the United States industrial base and the industrial bases of foreign countries with respect to the manufacture, design, and end use of microelectronics.”

There is a revision of the Secretary of Energy’s authority over supply chain risk administered by the National Nuclear Security Administration (NNSA) that would provide for a “special exclusion action” that would bar the procurement of risky technology for up to two years.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

American and Canadian Agencies Take Differing Approaches On Regulating AI

The outgoing Trump Administration tells agencies to lightly regulate AI; Canada’s privacy regulator calls for strong safeguards and limits on use of AI, including legislative changes.

The Office of Management and Budget (OMB) has issued guidance for federal agencies on how they are to regulate artificial intelligence (AI) used outside the government. This guidance seeks to align policy across agencies in how they use their existing power to regulate AI according to the Trump Administration’s policy goals. Notably, this memorandum is binding on all federal agencies (including national defense) and even independent agencies such as the Federal Trade Commission (FTC) and Federal Communications Commission (FCC). OMB worked with other stakeholder agencies on this guidance per Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence,” and issued a draft of the memorandum 11 months ago for comment.

In “Guidance for Regulation of Artificial Intelligence Applications,” OMB “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the Federal government.” OMB is directing agencies to take a light touch to regulating AI under its current statutory authorities, being careful to consider costs and benefits and keeping in mind the larger policy backdrop of taking steps to ensure United States (U.S.) dominance in AI in light of competition from the People’s Republic of China (PRC), the European Union, Japan, the United Kingdom, and others. OMB is requiring reports from agencies on how they will use and not use their authority to meet the articulated goals and requirements of this memorandum. However, given the due date for these reports will be well into the next Administration, it is very likely the Biden OMB at least pauses this initiative and probably alters it to meet new policy. It is possible that policy goals to protect privacy, combat algorithmic bias, and protect data are made more prominent in U.S. AI regulation.

As a threshold matter, it bears note that this memorandum uses a statutory definition of AI that is narrower than the way AI is popularly discussed. OMB explained that “[w]hile this Memorandum uses the definition of AI recently codified in statute, it focuses on “narrow” (also known as “weak”) AI, which goes beyond advanced conventional computing to learn and perform domain-specific or specialized tasks by extracting information from data sets, or other structured or unstructured sources of information.” Consequently, “[m]ore theoretical applications of “strong” or “general” AI—AI that may exhibit sentience or consciousness, can be applied to a wide variety of cross-domain activities and perform at the level of, or better than a human agent, or has the capacity to self-improve its general cognitive abilities similar to or beyond human capabilities—are beyond the scope of this Memorandum.”

The Trump OMB tells agencies to minimize regulation of AI and to take into account how any regulatory action may affect growth and innovation in the field before implementing it. OMB directs agencies to favor “narrowly tailored and evidence-based regulations that address specific and identifiable risks” that foster an environment where U.S. AI can flourish. Consequently, OMB bars “a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.” Of course, what constitutes “evidence-based regulation” and an “impossibly high standard” are in the eye of the beholder, so this memorandum could be read by the next OMB in ways the outgoing OMB does not agree with. Finally, OMB is pushing agencies to factor potential benefits into any risk calculation, presumably allowing for greater risk of bad outcomes if the potential reward seems high. This would seem to suggest a more hands-off approach to regulating AI.

OMB listed the 10 AI principles agencies must apply in regulating AI in the private sector:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination

OMB also tells agencies to look at existing federal or state regulation that may prove inconsistent with or duplicative of this federal policy and “may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”

OMB encouraged agencies to use “non-regulatory approaches” in the event existing regulations are sufficient or the benefits of regulation do not justify the costs. OMB counseled “[i]n these cases, the agency may consider either not taking any action or, instead, identifying non-regulatory approaches that may be appropriate to address the risk posed by certain AI applications” and provided examples of “non-regulatory approaches:”

  • Sector-Specific Policy Guidance or Frameworks
  • Pilot Programs and Experiments
  • Voluntary Consensus Standards
  • Voluntary Frameworks

As noted, the EO under which OMB is acting requires “that implementing agencies with regulatory authorities review their authorities relevant to AI applications and submit plans to OMB on achieving consistency with this Memorandum.” OMB directs:

The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities. For these collections, agencies should describe any statutory restrictions on the collection or sharing of information (e.g., confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information). The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency’s regulatory authorities. OMB also requests agencies to list and describe any planned or considered regulatory actions on AI. Appendix B provides a template for agency plans.

Earlier in 2020, the White House’s Office of Science and Technology Policy (OSTP) had released the draft of this OMB memorandum, “Guidance for Regulation of Artificial Intelligence Applications,” for comment as directed by EO 13859. The memorandum is not aimed at how federal agencies themselves use and deploy AI; rather, it “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory oversight of AI applications developed and deployed outside of the Federal government.” In short, federal agencies must adhere to the ten principles laid out in the document in regulating AI as part of their existing and future jurisdiction over the private sector. Not surprisingly, the Administration favors a light-touch approach intended to foster the growth of AI.

EO 13859 sets the AI policy of the government “to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.” The EO directed OMB and OSTP along with other Administration offices, to craft this draft memorandum for comment. OMB was to “issue a memorandum to the heads of all agencies that shall:

(i) inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy, and American values; and
(ii) consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.

A key regulator in a neighbor of the U.S. also weighed in on the proper regulation of AI from the vantage of privacy. The Office of the Privacy Commissioner of Canada (OPC) “released key recommendations…[that] are the result of a public consultation launched earlier this year.” OPC explained that it “launched a public consultation on our proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA).” OPC’s “working assumption was that legislative changes to PIPEDA are required to help reap the benefits of AI while upholding individuals’ fundamental right to privacy.” It is to be expected that a privacy regulator will see matters differently than a Republican White House, and so it is here.

In an introductory paragraph, the OPC spelled out the problems and dangers created by AI:

uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals, including whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are suspected of suspicious or unlawful behaviour. Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination. AI systems can also be used to influence, micro-target, and “nudge” individuals’ behaviour without their knowledge. Such practices can lead to troubling effects for society as a whole, particularly when used to influence democratic processes.

The OPC is focused on the potential for AI to be used more effectively than current data processing to predict, uncover, subvert, and influence the behavior of people in ways not readily apparent. There is also concern about another aspect of AI and other data processing that has long troubled privacy and human rights advocates: the potential for discriminatory treatment.

OPC asserted “an appropriate law for AI would:

  • Allow personal information to be used for new purposes towards responsible AI innovation and for societal benefits;
  • Authorize these uses within a rights based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • Create provisions specific to automated decision-making to ensure transparency, accuracy and fairness; and
  • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures through which the regulator would ensure compliance with the law.

However, the OPC does not entirely oppose the use of AI and is proposing exceptions to the general requirement under Canadian federal law that meaningful consent is required before data processing. The OPC is “recommending a series of new exceptions to consent that would allow the benefits of AI to be better achieved, but within a rights based framework.” OPC stated “[t]he intent is to allow for responsible, socially beneficial innovation, while ensuring individual rights are respected…[and] [w]e recommend exceptions to consent for the use of personal information for research and statistical purposes, compatible purposes, and legitimate commercial interests purposes.” However, the OPC is proposing a number of safeguards:

The proposed exceptions to consent must be accompanied by a number of safeguards to ensure their appropriate use. This includes a requirement to complete a privacy impact assessment (PIA), and a balancing test to ensure the protection of fundamental rights. The use of de-identified information would be required in all cases for the research and statistical purposes exception, and to the extent possible for the legitimate commercial interests exception.

Further, the OPC made the case that enshrining strong privacy rights in Canadian law would not obstruct the development of AI but would, in fact, speed its development:

  • A rights-based regime would not stand in the way of responsible innovation. In fact, it would help support responsible innovation and foster trust in the marketplace, giving individuals the confidence to fully participate in the digital age. In our 2018-2019 Annual Report to Parliament, our Office outlined a blueprint for what a rights-based approach to protecting privacy should entail. This rights-based approach runs through all of the recommendations in this paper.
  • While we propose that the law should allow for uses of AI for a number of new purposes as outlined, we have seen examples of unfair, discriminatory, and biased practices being facilitated by AI which are far removed from what is socially beneficial. Given the risks associated with AI, a rights based framework would help to ensure that it is used in a manner that upholds rights. Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.
  • Another important measure related to this human rights-based approach would be for the definition of personal information in PIPEDA to be amended to clarify that it includes inferences drawn about an individual. This is important, particularly in the age of AI, where individuals’ personal information can be used by organizations to create profiles and make predictions intended to influence their behaviour. Capturing inferred information clearly within the law is key for protecting human rights because inferences can often be drawn about an individual without their knowledge, and can be used to make decisions about them.

The OPC also called for a framework under which people could review and contest automated decisions:

we recommend that individuals be provided with two explicit rights in relation to automated decision-making. Specifically, they should have a right to a meaningful explanation of, and a right to contest, automated decision-making under PIPEDA. These rights would be exercised by individuals upon request to an organization. Organizations should be required to inform individuals of these rights through enhanced transparency practices to ensure individual awareness of the specific use of automated decision-making, as well as of their associated rights. This could include requiring notice to be provided separate from other legal terms.

The OPC also counseled that PIPEDA’s enforcement mechanism and incentives be changed:

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. In addition to the measures detailed below, this should be underpinned by a record keeping requirement similar to that in Article 30 of the GDPR. This record keeping requirement would be necessary to facilitate the OPC’s ability to conduct proactive inspections under PIPEDA, and for individuals to exercise their rights under the Act.

The OPC called for the following to ensure “demonstrable accountability:”

  • Integrating privacy and human rights into the design of AI algorithms and models is a powerful way to prevent negative downstream impacts on individuals. It is also consistent with modern legislation, such as the GDPR and Bill 64. PIPEDA should require organizations to design for privacy and human rights by requiring organizations to implement “appropriate technical and organizational measures” that implement PIPEDA requirements prior to and during all phases of collection and processing.
  • In light of the new proposed rights to explanation and contestation, organizations should be required to log and trace the collection and use of personal information in order to adequately fulfill these rights for the complex processing involved in AI. Tracing supports demonstrable accountability as it provides documentation that the regulator could consult through the course of an inspection or investigation, to determine the personal information fed into the AI system, as well as broader compliance.
  • Demonstrable accountability must include a model of assured accountability pursuant to which the regulator has the ability to proactively inspect an organization’s privacy compliance. In today’s world where business models are often opaque and information flows are increasingly complex, individuals are unlikely to file a complaint when they are unaware of a practice that might cause them harm. This challenge will only become more pronounced as information flows gain complexity with the continued development of AI.
  • The significant risks posed to privacy and human rights by AI systems require a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant. To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties.


Photo by Tetyana Kovyrina from Pexels

Further Reading, Other Developments, and Coming Events (11 November)

Further Reading

  • “ICE, IRS Explored Using Hacking Tools, New Documents Show” By Joseph Cox — Vice. Federal agencies other than the Federal Bureau of Investigation (FBI) and the Intelligence Community (IC) appear to be interested in utilizing some of the capabilities offered by the private sector to access devices or networks in the name of investigating cases.
  • “China’s tech industry relieved by Biden win – but not relaxed” By Josh Horwitz and Yingzhi Yang — Reuters. While a Biden Administration will almost certainly lower the temperature between Beijing and Washington, the People’s Republic of China is intent on addressing the pressure points used by the Trump Administration to inflict pain on its technology industry.
  • “Trump Broke the Internet. Can Joe Biden Fix It?” By Gilad Edelman — WIRED. This piece provides a view of the waterfront in technology policy under a Biden Administration.
  • “YouTube is awash with election misinformation — and it isn’t taking it down” By Rebecca Heilweil — Recode. For unexplained reasons, YouTube seems to have avoided the scrutiny facing Facebook and Twitter over their content moderation policies. Whether that lack of scrutiny is a cause is not clear, but the Google-owned platform hosted much more election-related misinformation than the other social media platforms.
  • “Frustrated by internet service providers, cities and schools push for more data” By Cyrus Farivar — NBC News. Internet service providers are not helping cities and states identify families eligible for low-cost internet so children can attend school virtually. The providers have claimed these data are proprietary, so jurisdictions have gotten creative about identifying such families.

Other Developments

  • The Consumer Product Safety Commission’s (CPSC) Office of the Inspector General (OIG) released its annual Federal Information Security Modernization Act (FISMA) audit and found “that although management continues to make progress in implementing the FISMA requirements much work remains to be done.” More particularly, it was “determined that the CPSC has not implemented an effective information security program and practices in accordance with FISMA requirements.” The OIG asserted:
    • The CPSC information security program was not effective because the CPSC has not developed a holistic formal approach to manage information security risks or to effectively utilize information security resources to address previously identified information security deficiencies. Although the CPSC has begun to develop an Enterprise Risk Management (ERM) program to guide risk management practices at the CPSC, explicit guidance and processes to address information security risks and integrate those risks into the broader agency-wide ERM program has not been developed.
    • In addition, the CPSC has not leveraged the relevant information security risk management guidance prescribed by NIST to develop an approach to manage information security risk.
    • Further, as asserted by CPSC personnel, the CPSC has limited resources to operate the information security program and to address the extensive FISMA requirements and related complex cybersecurity challenges.
    • Therefore, the CPSC has not dedicated the resources necessary to fully address these challenges and requirements. The CPSC began addressing previously identified information security deficiencies but was not able to address all deficiencies in FY 2020.
  • The United States (U.S.) Department of Justice (DOJ) announced the seizure of 27 websites allegedly used by Iran’s Islamic Revolutionary Guard Corps (IRGC) “to further a global covert influence campaign…in violation of U.S. sanctions targeting both the Government of Iran and the IRGC.” The DOJ contended:
    • Four of the domains purported to be genuine news outlets but were actually controlled by the IRGC and targeted audiences in the United States, to covertly influence United States policy and public opinion, in violation of the Foreign Agents Registration Act (FARA). The remainder targeted audiences in other parts of the world.  This seizure warrant follows an earlier seizure of 92 domains used by the IRGC for similar purposes.
  • The United Nations (UN) Special Rapporteur on the right to privacy Joseph Cannataci issued his annual report that “constitutes a preliminary assessment as the evidence base required to reach definitive conclusions on whether privacy-intrusive, anti-COVID-19 measures are necessary and proportionate in a democratic society is not yet available.” Cannataci added “[a] more definitive report is planned for mid-2021, when 16 months of evidence will be available to allow a more accurate assessment.” He “addresse[d] two particular aspects of the impact of COVID-19 on the right to privacy: data protection and surveillance.” The Special Rapporteur noted:
    • While the COVID-19 pandemic has generated much debate about the value of contact tracing and reliance upon technology that track citizens and those they encounter, the use of information and technology is not new in managing public health emergencies. What is concerning in some States are reports of how technology is being used and the degree of intrusion and control being exerted over citizens –possibly to little public health effect.
    • The Special Rapporteur concluded:
      • It is far too early to assess definitively whether some COVID-19-related measures might be unnecessary or disproportionate. The Special Rapporteur will continue to monitor the impact of surveillance in epidemiology on the right to privacy and report to the General Assembly in 2021. The main privacy risk lies in the use of non-consensual methods, such as those outlined in the section on hybrid systems of surveillance, which could result in function creep and be used for other purposes that may be privacy intrusive.
      • Intensive and omnipresent technological surveillance is not the panacea for pandemic situations such as COVID-19. This has been especially driven home by those countries in which the use of conventional contact-tracing methods, without recourse to smartphone applications, geolocation or other technologies, has proven to be most effective in countering the spread of COVID-19.
      • If a State decides that technological surveillance is necessary as a response to the global COVID-19 pandemic, it must make sure that, after proving both the necessity and proportionality of the specific measure, it has a law that explicitly provides for such surveillance measures (as in the example of Israel).
      • A State wishing to introduce a surveillance measure for COVID-19 purposes, should not be able to rely on a generic provision in law, such as one stating that the head of the public health authority may “order such other action be taken as he [or she] may consider appropriate”. That does not provide explicit and specific safeguards which are made mandatory both under the provisions of Convention 108 and Convention 108+, and based on the jurisprudence of the European Court of Human Rights. Indeed, if the safeguard is not spelled out in sufficient detail, it cannot be considered an adequate safeguard.
  • The University of Toronto’s Citizen Lab issued its submission to the Government of Canada’s “public consultation on the renewal of its Responsible Business Conduct (RBC) strategy, which is intended to provide guidance to the Government of Canada and Canadian companies active abroad with respect to their business activities.” Citizen Lab addressed “Canadian technology companies and the threat they pose to human rights abroad” and noted two of its reports on Canadian companies whose technologies were used to violate human rights:
    • In 2018, the Citizen Lab released a report documenting Netsweeper installations on public IP networks in ten countries that each presented widespread human rights concerns. This research revealed that Netsweeper technology was used to block: (1) political content sites, including websites linked to political groups, opposition groups, local and foreign news, and regional human rights issues in Bahrain, Kuwait, Yemen, and UAE; (2) LGBTQ content as a result of Netsweeper’s pre-defined ‘Alternative Lifestyles’ content category, as well as Google searches for keywords relating to LGBTQ content (e.g., the words “gay” or “lesbian”) in the UAE, Bahrain, and Yemen; (3) non-pornographic websites under the mis-categorization of sites like the World Health Organization and the Center for Health and Gender Equity as “pornography”; (4) access to news reporting on the Rohingya refugee crisis and violence against Muslims from multiple news outlets for users in India; (5) Blogspot-hosted websites in Kuwait by categorizing them as “viruses” as well as a range of political content from local and foreign news and a website that monitors human rights issues in the region; and (6) websites like Date.com, Gay.com (the Los Angeles LGBT Center), Feminist.org, and others through categorizing them as “web proxies.” 
    • In 2018, the Citizen Lab released a report documenting the use of Sandvine/Procera devices to redirect users in Turkey and Syria to spyware, as well as the use of such devices to hijack the Internet users’ connections in Egypt, redirecting them to revenue-generating content. These examples highlight some of the ways in which this technology can be used for malicious purposes. The report revealed how Citizen Lab researchers identified a series of devices on the networks of Türk Telekom—a large and previously state-owned ISP in Turkey—being used to redirect requests from users in Turkey and Syria who attempted to download certain common Windows applications like antivirus software and web browsers. Through the use of Sandvine/Procera technology, these users were instead redirected to versions of those applications that contained hidden malware. 
    • Citizen Lab made a number of recommendations:
      • Reform Canadian export law:  
        • Clarify that all Canadian exports are subject to the mandatory analysis set out in section 7.3(1) and section 7.4 of the Export and Import Permits Act (EIPA). 
        • Amend section 3(1) the EIPA such that the human rights risks of an exported good or technology provide an explicit basis for export control.
        • Amend the EIPA to include a ‘catch-all’ provision that subjects cyber-surveillance technology to export control, even if not listed on the Export Control List, when there is evidence that the end-use may be connected with internal repression and/or the commission of serious violations of international human rights or international humanitarian law. 
      • Implement mandatory human rights due diligence legislation:
        • Similar to the French duty of vigilance law, impose a human rights due diligence requirement on businesses such that they are required to perform human rights risk assessments, develop mitigation strategies, implement an alert system, and develop a monitoring and public reporting scheme. 
        • Ensure that the mandatory human rights due diligence legislation provides a statutory mechanism for liability where a company fails to conform with the requirements under the law. 
      • Expand and strengthen the Canadian Ombudsperson for Responsible Enterprise (CORE): 
        • Expand the CORE’s mandate to cover technology sector businesses operating abroad.
        • Expand the CORE’s investigatory mandate to include the power to compel companies and executives to produce testimony, documents, and other information for the purposes of joint and independent fact-finding.
        • Strengthen the CORE’s powers to hold companies to account for human rights violations abroad, including the power to impose fines and penalties and to impose mandatory orders.
        • Expand the CORE’s mandate to assist victims to obtain legal redress for human rights abuses. This could include the CORE helping enforce mandatory human rights due diligence requirements, imposing penalties and/or additional statutory mechanisms for redress when requirements are violated.
        • Increase the CORE’s budgetary allocations to ensure that it can carry out its mandate.
  • A week before the United States’ (U.S.) election, the White House’s Office of Science and Technology Policy (OSTP) issued a report titled “Advancing America’s Global Leadership in Science and Technology: Highlights from the Trump Administration’s First Term: 2017-2020,” that highlights the Administration’s purported achievements. OSTP claimed:
    • Over the past four years, President Trump and the entire Administration have taken decisive action to help the Federal Government do its part in advancing America’s global science and technology (S&T) preeminence. The policies enacted and investments made by the Administration have equipped researchers, health professionals, and many others with the tools to tackle today’s challenges, such as the COVID-19 pandemic, and have prepared the Nation for whatever the future holds.

Coming Events

  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Brett Sayles from Pexels

Further Reading, Other Developments, and Coming Events (22 October)

Further Reading

  • “A deepfake porn Telegram bot is being used to abuse thousands of women” By Matt Burgess — WIRED UK. A bot set loose on Telegram can take pictures of women, and apparently of teens too, and “take off” their clothing, rendering fake nude images of people who never took nude photos. This seems to be the next iteration of deepfake porn, a problem that will surely get worse until governments legislate against it and technology companies have incentives to locate and take down such material.
  • “The Facebook-Twitter-Trump Wars Are Actually About Something Else” By Charlie Warzel — The New York Times. This piece makes the case that there are no easy fixes for American democracy or for misinformation on social media platforms.
  • “Facebook says it rejected 2.2m ads for breaking political campaigning rules” — Agence France-Presse. Facebook’s Vice President of Global Affairs and Communications Nick Clegg said the social media giant is employing artificial intelligence and humans to find and remove political advertisements that violate policy in order to avoid a repeat of 2016, when untrue information and misinformation played roles in both Brexit and the election of Donald Trump as President of the United States.
  • “Huawei Fallout—Game-Changing New China Threat Strikes At Apple And Samsung” By Zak Doffman — Forbes. Smartphone manufacturers from the People’s Republic of China (PRC) appear ready to step into the projected void caused by the United States (U.S.) strangling off Huawei’s access to chips. Xiaomi and Oppo have already seen sales surge worldwide and are poised to pick up where Huawei is being forced to leave off, perhaps demonstrating the limits of U.S. power to blunt the rise of PRC technology companies.
  • “As Local News Dies, a Pay-for-Play Network Rises in Its Place” By Davey Alba and Jack Nicas — The New York Times. With the decline and demise of many local media outlets in the United States, new groups are stepping into the void, and some are politically minded but not transparent about their biases. The organization uncovered in this article is nakedly Republican and is running and planting articles at both legitimate and artificial news sites for pay. Sometimes conservative donors pay, sometimes campaigns do. Democrats are engaged in the same activity but apparently to a lesser extent. These sorts of activities will only further erode faith in the U.S. media.
  • “Forget Antitrust Laws. To Limit Tech, Some Say a New Regulator Is Needed.” By Steve Lohr — The New York Times. This piece argues that anti-trust enforcement actions are plodding, tending to take years to finish. Consequently, this body of law is inadequate to the task of addressing the market dominance of big technology companies. Instead, a new regulatory body is needed along the lines of those regulating the financial services industry, one more nimble than anti-trust enforcement. Given the problems in that industry with respect to regulation, this may not be the best model.
  • “‘Do Not Track’ Is Back, and This Time It Might Work” By Gilad Edelman — WIRED. Looking to utilize the requirement in the “California Consumer Privacy Act” (CCPA) (AB 375) that regulated entities respect and effectuate the use of a one-time opt-out mechanism, a group of entities have come together to build and roll out the Global Privacy Control. In theory, users could install this technical specification on their phones and computers, set it once, and then all websites would be on notice regarding that person’s privacy preferences. Such a mechanism would address the problem turned up by Consumer Reports’ recent report on the difficulty of trying to opt out of having one’s personal information sold.
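The Global Privacy Control signal is conveyed as a simple HTTP request header, `Sec-GPC: 1`, that a website can inspect on every request. As a minimal server-side sketch, assuming a bare headers dictionary rather than any particular web framework (the handler and the `allow_data_sale` flag are hypothetical illustrations, not part of the specification):

```python
# Minimal sketch: honoring a Global Privacy Control (GPC) opt-out signal.
# The GPC specification transmits the user's preference as the HTTP
# request header "Sec-GPC: 1". The plain dict of headers and the
# data-sale flag below are hypothetical stand-ins for a real framework.

def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True if the request carries a valid GPC opt-out signal."""
    # HTTP header field names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def handle_request(headers: dict) -> dict:
    """Decide whether personal data may be sold/shared for this request."""
    if gpc_opt_out_requested(headers):
        # A valid opt-out signal is treated as a "do not sell" request
        # for this user under the CCPA's opt-out provisions.
        return {"allow_data_sale": False}
    return {"allow_data_sale": True}
```

Under this sketch, `handle_request({"Sec-GPC": "1"})` yields `{"allow_data_sale": False}`, which is the one-time, browser-wide behavior the article describes.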
  • “EU countries sound alarm about growing anti-5G movement” By Laurens Cerulus — Politico. 15 European Union (EU) nations wrote the European Commission (EC) warning that the nascent anti-5G movement, borne of conspiracy thinking and misinformation, threatens the EU’s position vis-à-vis the United States (U.S.) and the People’s Republic of China (PRC). There have been more than 200 documented arson attacks in the EU, with the most having occurred in the United Kingdom, France, and the Netherlands. These nations called for a more muscular, more forceful debunking of the lies and misinformation being spread about 5G.
  • “Security firms call Microsoft’s effort to disrupt botnet to protect against election interference ineffective” By Jay Greene — The Washington Post. Microsoft seemingly acted alongside the United States (U.S.) Cyber Command to take down and impair the operation of Trickbot, but now cybersecurity experts are questioning how effective Microsoft’s efforts really were. Researchers have shown the Russian-operated Trickbot has already stood up operations again and has dispersed across servers around the world, showing how difficult it is to address some cyber threats.
  • “Governments around the globe find ways to abuse Facebook” By Sara Fischer and Ashley Gold — Axios. This piece puts a different spin than a recent BuzzFeed News article on the challenges Facebook faces in countries around the world, especially those whose governments ruthlessly use the platform to spread lies and misinformation. The new article paints Facebook as a well-meaning company being taken advantage of, while the other portrays a company indifferent to content moderation except in nations where failures cause it political problems, such as the United States, the European Union, and other western democracies.

Other Developments

  • The United States (U.S.) Department of Justice’s (DOJ) Cyber-Digital Task Force (Task Force) issued “Cryptocurrency: An Enforcement Framework,” that “provides a comprehensive overview of the emerging threats and enforcement challenges associated with the increasing prevalence and use of cryptocurrency; details the important relationships that the Department of Justice has built with regulatory and enforcement partners both within the United States government and around the world; and outlines the Department’s response strategies.” The Task Force noted “[t]his document does not contain any new binding legal requirements not otherwise already imposed by statute or regulation.” The Task Force summarized the report:
    • [I]n Part I, the Framework provides a detailed threat overview, cataloging the three categories into which most illicit uses of cryptocurrency typically fall: (1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; and (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself. 
    • Part II explores the various legal and regulatory tools at the government’s disposal to confront the threats posed by cryptocurrency’s illicit uses, and highlights the strong and growing partnership between the Department of Justice and the Securities and Exchange Commission, the Commodity Futures Trading Commission, and agencies within the Department of the Treasury, among others, to enforce federal law in the cryptocurrency space.
    • Finally, the Enforcement Framework concludes in Part III with a discussion of the ongoing challenges the government faces in cryptocurrency enforcement—particularly with respect to business models (employed by certain cryptocurrency exchanges, platforms, kiosks, and casinos), and to activity (like “mixing” and “tumbling,” “chain hopping,” and certain instances of jurisdictional arbitrage) that may facilitate criminal activity.    
  • The White House’s Office of Science and Technology Policy (OSTP) has launched a new website for the United States’ (U.S.) quantum initiative and released a report titled “Quantum Frontiers: Report On Community Input To The Nation’s Strategy For Quantum Information Science.” The Quantum Initiative flows from the “National Quantum Initiative Act” (P.L. 115-368) “to provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” The OSTP explained that the report “outlines eight frontiers that contain core problems with fundamental questions confronting quantum information science (QIS) today:
    • Expanding Opportunities for Quantum Technologies to Benefit Society
    • Building the Discipline of Quantum Engineering
    • Targeting Materials Science for Quantum Technologies
    • Exploring Quantum Mechanics through Quantum Simulations
    • Harnessing Quantum Information Technology for Precision Measurements
    • Generating and Distributing Quantum Entanglement for New Applications
    • Characterizing and Mitigating Quantum Errors
    • Understanding the Universe through Quantum Information
    • OSTP asserted “[t]hese frontier areas, identified by the QIS research community, are priorities for the government, private sector, and academia to explore in order to drive breakthrough R&D.”
  • The New York Department of Financial Services (NYDFS) published its report on the July 2020 Twitter hack during which a team of hackers took over a number of high-profile accounts (e.g. Barack Obama, Kim Kardashian West, Jeff Bezos, and Elon Musk) in order to perpetrate a cryptocurrency scam. The NYDFS has jurisdiction over cryptocurrencies and the companies dealing in them in New York. The NYDFS found that the hackers used the most basic means to acquire permission to take over accounts. The NYDFS explained:
    • Given that Twitter is a publicly traded, $37 billion technology company, it was surprising how easily the Hackers were able to penetrate Twitter’s network and gain access to internal tools allowing them to take over any Twitter user’s account. Indeed, the Hackers used basic techniques more akin to those of a traditional scam artist: phone calls where they pretended to be from Twitter’s Information Technology department. The extraordinary access the Hackers obtained with this simple technique underscores Twitter’s cybersecurity vulnerability and the potential for devastating consequences. Notably, the Twitter Hack did not involve any of the high-tech or sophisticated techniques often used in cyberattacks–no malware, no exploits, and no backdoors.
    • The implications of the Twitter Hack extend far beyond this garden-variety fraud. There are well-documented examples of social media being used to manipulate markets and interfere with elections, often with the simple use of a single compromised account or a group of fake accounts. In the hands of a dangerous adversary, the same access obtained by the Hackers–the ability to take control of any Twitter users’ account–could cause even greater harm.
    • The Twitter Hack demonstrates the need for strong cybersecurity to curb the potential weaponization of major social media companies. But our public institutions have not caught up to the new challenges posed by social media. While policymakers focus on antitrust and content moderation problems with large social media companies, their cybersecurity is also critical. In other industries that are deemed critical infrastructure, such as telecommunications, utilities, and finance, we have established regulators and regulations to ensure that the public interest is protected. With respect to cybersecurity, that is what is needed for large, systemically important social media companies.
    • The NYDFS recommended the cybersecurity measures cryptocurrency companies in New York should implement to avoid similar hacks, including its own cybersecurity regulations that bind its regulated entities in New York. The NYDFS also called for a national regulator to address the lack of a dedicated regulator of Twitter and other massive social media platforms. The NYDFS asserted:
      • Social media companies currently have no dedicated regulator. They are subject to the same general oversight applicable to other companies. For instance, the SEC’s regulations for all public companies apply to public social media companies, and antitrust and related laws and regulations enforced by the Department of Justice and the FTC apply to social media companies as they do to all companies. Social media companies are also subject to generally applicable laws, such as the California Consumer Privacy Act and the New York SHIELD Act. The European Union’s General Data Protection Regulation, which regulates the storage and use of personal data, also applies to social media entities doing business in Europe.
      • But there are no regulators that have the authority to uniformly regulate social media platforms that operate over the internet, and to address the cybersecurity concerns identified in this Report. That regulatory vacuum must be filled.
      • A useful starting point is to create a “systemically important” designation for large social media companies, like the designation for critically important bank and non-bank financial institutions. In the wake of the 2007-08 financial crisis, Congress established a new regulatory framework for financial institutions that posed a systemic threat to the financial system of the United States. An institution could be designated as a Systemically Important Financial Institution (“SIFI”) “where the failure of or a disruption to the functioning of a financial market utility or the conduct of a payment, clearing, or settlement activity could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States.”
      • The risks posed by social media to our consumers, economy, and democracy are no less grave than the risks posed by large financial institutions. The scale and reach of these companies, combined with the ability of adversarial actors who can manipulate these systems, require a similarly bold and assertive regulatory approach.
      • The designation of an institution as a SIFI is made by the Financial Stability Oversight Council (“FSOC”), which Congress established to “identify risks to the financial stability of the United States” and to provide enhanced supervision of SIFIs.[67] The FSOC also “monitors regulatory gaps and overlaps to identify emerging sources of systemic risk.” In determining whether a financial institution is systemically important, the FSOC considers numerous factors including: the effect that a failure or disruption to an institution would have on financial markets and the broader financial system; the nature of the institution’s transactions and relationships; the nature, concentration, interconnectedness, and mix of the institution’s activities; and the degree to which the institution is regulated.
      • An analogue to the FSOC should be established to identify systemically important social media companies. This new Oversight Council should evaluate the reach and impact of social media companies, as well as the society-wide consequences of a social media platform’s misuse, to determine which companies they should designate as systemically important. Once designated, those companies should be subject to enhanced regulation, such as through the provision of “stress tests” to evaluate the social media companies’ susceptibility to key threats, including cyberattacks and election interference.
      • Finally, the success of such oversight will depend on the establishment of an expert agency to oversee designated social media companies. Systemically important financial companies designated by the FSOC are overseen by the Federal Reserve Board, which has a long-established and deep expertise in banking and financial market stability. A regulator for systemically important social media would likewise need deep expertise in areas such as technology, cybersecurity, and disinformation. This expert regulator could take various forms; it could be a completely new agency or could reside within an established agency or at an existing regulator.
  • The Government Accountability Office (GAO) evaluated how well the Trump Administration has been implementing the “Open, Public, Electronic and Necessary Government Data Act of 2018” (OPEN Government Data Act) (P.L. 115-435). As the GAO explained, this statute “requires federal agencies to publish their information as open data using standardized, nonproprietary formats, making data available to the public open by default, unless otherwise exempt…[and] codifies and expands on existing federal open data policy including the Office of Management and Budget’s (OMB) memorandum M-13-13 (M-13-13), Open Data Policy—Managing Information as an Asset.”
    • The GAO stated
      • To continue moving forward with open government data, the issuance of OMB implementation guidance should help agencies develop comprehensive inventories of their data assets, prioritize data assets for publication, and decide which data assets should or should not be made available to the public.
      • Implementation of this statutory requirement is critical to agencies’ full implementation and compliance with the act. In the absence of this guidance, agencies, particularly agencies that have not previously been subject to open data policies, could fall behind in meeting their statutory timeline for implementing comprehensive data inventories.
      • It is also important for OMB to meet its statutory responsibility to biennially report on agencies’ performance and compliance with the OPEN Government Data Act and to coordinate with General Services Administration (GSA) to improve the quality and availability of agency performance data that could inform this reporting. Access to this information could inform Congress and the public on agencies’ progress in opening their data and complying with statutory requirements. This information could also help agencies assess their progress and improve compliance with the act.
    • The GAO made three recommendations:
      • The Director of OMB should comply with its statutory requirement to issue implementation guidance to agencies to develop and maintain comprehensive data inventories. (Recommendation 1)
      • The Director of OMB should comply with the statutory requirement to electronically publish a report on agencies’ performance and compliance with the OPEN Government Data Act. (Recommendation 2)
      • The Director of OMB, in collaboration with the Administrator of GSA, should establish policy to ensure the routine identification and correction of errors in electronically published performance information. (Recommendation 3)
  • The United States’ (U.S.) National Security Agency (NSA) issued a cybersecurity advisory titled “Chinese State-Sponsored Actors Exploit Publicly Known Vulnerabilities,” that “provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.” The NSA recommended a number of mitigations generally for U.S. entities, including:
    • Keep systems and products updated and patched as soon as possible after patches are released.
    • Expect that data stolen or modified (including credentials, accounts, and software) before the device was patched will not be alleviated by patching, making password changes and reviews of accounts a good practice.
    • Disable external management capabilities and set up an out-of-band management network.
    • Block obsolete or unused protocols at the network edge and disable them in device configurations.
    • Isolate Internet-facing services in a network Demilitarized Zone (DMZ) to reduce the exposure of the internal network.
    • Enable robust logging of Internet-facing services and monitor the logs for signs of compromise.
    • The NSA then proceeded to recommend specific fixes.
    • The NSA provided this policy backdrop:
      • One of the greatest threats to U.S. National Security Systems (NSS), the U.S. Defense Industrial Base (DIB), and Department of Defense (DOD) information networks is Chinese state-sponsored malicious cyber activity. These networks often undergo a full array of tactics and techniques used by Chinese state-sponsored cyber actors to exploit computer networks of interest that hold sensitive intellectual property, economic, political, and military information. Since these techniques include exploitation of publicly known vulnerabilities, it is critical that network defenders prioritize patching and mitigation efforts.
      • The same process for planning the exploitation of a computer network by any sophisticated cyber actor is used by Chinese state-sponsored hackers. They often first identify a target, gather technical information on the target, identify any vulnerabilities associated with the target, develop or re-use an exploit for those vulnerabilities, and then launch their exploitation operation.
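The advisory’s first mitigation, patching known CVEs promptly, reduces in practice to comparing the software versions deployed on a network against the versions in which the flaws were fixed. A minimal sketch of that comparison, using an entirely hypothetical inventory and fix list rather than the products or CVEs named in the advisory:

```python
# Minimal sketch: flagging deployed software that predates a known fix,
# in the spirit of the NSA advisory's "patch promptly" guidance. The
# product names, versions, and fix levels below are hypothetical
# illustrations, not drawn from the advisory's CVE list.

def parse_version(version: str) -> tuple:
    """Turn '9.1.0' into (9, 1, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def find_unpatched(inventory: dict, fixed_in: dict) -> list:
    """Return hosts running a version older than the first fixed release."""
    findings = []
    for host, (product, version) in inventory.items():
        if product in fixed_in and parse_version(version) < parse_version(fixed_in[product]):
            findings.append((host, product, version))
    return findings

# Hypothetical data: the version in which each product's flaw was fixed.
fixed_in = {"example-vpn": "9.1.1", "example-mail": "4.2.0"}
inventory = {
    "gateway-1": ("example-vpn", "9.0.3"),   # older than the fix -> flagged
    "mail-1": ("example-mail", "4.2.0"),     # at the fixed version -> clear
}

print(find_unpatched(inventory, fixed_in))
# -> [('gateway-1', 'example-vpn', '9.0.3')]
```

As the advisory notes, patching alone does not undo a compromise that occurred before the patch, so a report like this is a starting point for credential resets and account reviews, not a substitute for them.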
  • Belgium’s data protection authority (DPA) (Autorité de protection des données in French or Gegevensbeschermingsautoriteit in Dutch) (APD-GBA) has reportedly found that the Transparency & Consent Framework (TCF) developed by the Interactive Advertising Bureau (IAB) violates the General Data Protection Regulation (GDPR). The Real-Time Bidding (RTB) system used for online behavioral advertising allegedly transmits the personal information of European Union residents without their consent even before a popup appears on their screen asking for consent. The APD-GBA is the lead DPA in the EU in investigating the RTB and will likely now circulate their findings and recommendations to other EU DPAs before any enforcement will commence.
  • None Of Your Business (noyb) announced “[t]he Irish High Court has granted leave for a “Judicial Review” against the Irish Data Protection Commission (DPC) today…[and] [t]he legal action by noyb aims to swiftly implement the [Court of Justice for the European Union (CJEU)] Decision prohibiting Facebook’s” transfer of personal data from the European Union to the United States (U.S.). Last month, after the DPC directed Facebook to stop transferring the personal data of EU citizens to the U.S., the company filed suit in the Irish High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge.
    • noyb further asserted:
      • Instead of making a decision in the pending procedure, the DPC has started a second, new investigation into the same subject matter (“Parallel Procedure”), as widely reported (see original reporting by the WSJ). No logical reason for the Parallel Procedure was given, but the DPC has maintained that Mr Schrems will not be heard in this second case, as he is not a party in this Parallel Procedure. This Parallel Procedure was criticised by Facebook publicly (link) and instantly blocked by a Judicial Review by Facebook (see report by Reuters).
      • Today’s Judicial Review by noyb is in many ways the counterpart to Facebook’s Judicial Review: While Facebook wants to block the second procedure by the DPC, noyb wants to move the original complaints procedure towards a decision.
      • Earlier this summer, the CJEU struck down the adequacy decision for the agreement between the EU and U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e. the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCC. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • Most likely with the aim of helping British chances for an adequacy decision from the European Union (EU), the United Kingdom’s Information Commissioner’s Office (ICO) published guidance that “discusses the right of access [under the General Data Protection Regulation (GDPR)] in detail.” The ICO explained the guidance “is aimed at data protection officers (DPOs) and those with specific data protection responsibilities in larger organisations…[but] does not specifically cover the right of access under Parts 3 and 4 of the Data Protection Act 2018.”
    • The ICO explained
      • The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy of their personal data from you, as well as other supplementary information.
  • The Government Accountability Office (GAO) released the report on the data security and data privacy practices of public schools that the House Education and Labor Committee’s Ranking Member requested. Representative Virginia Foxx (R-NC) asked the GAO “to review the security of K-12 students’ data. This report examines (1) what is known about recently reported K-12 cybersecurity incidents that compromised student data, and (2) the characteristics of school districts that experienced these incidents.” Strangely, the report did not have the GAO’s customary conclusions or recommendations. Nonetheless, the GAO found:
    • Ninety-nine student data breaches reported from July 1, 2016 through May 5, 2020 compromised the data of students in 287 school districts across the country, according to our analysis of K-12 Cybersecurity Resource Center (CRC) data (see fig. 3). Some breaches involved a single school district, while others involved multiple districts. For example, an attack on a vendor system in the 2019-2020 school year affected 135 districts. While information about the number of students affected was not available for every reported breach, examples show that some breaches affected thousands of students, for instance, when a cybercriminal accessed 14,000 current and former students’ personally identifiable information (PII) in one district.
    • The 99 reported student data breaches likely understate the number of breaches that occurred, for different reasons. Reported incidents sometimes do not include sufficient information to discern whether data were breached. We identified 15 additional incidents in our analysis of CRC data in which student data might have been compromised, but the available information was not definitive. In addition, breaches can go undetected for some time. In one example, the personal information of hundreds of thousands of current and former students in one district was publicly posted for 2 years before the breach was discovered.
    • The CRC identified 28 incidents involving videoconferences from April 1, 2020 through May 5, 2020, some of which disrupted learning and exposed students to harm. In one incident, 50 elementary school students were exposed to pornography during a virtual class. In another incident in a different district, high school students were targeted with hate speech during a class, resulting in the cancellation that day of all classes using the videoconferencing software. These incidents also raise concerns about the potential for violating students’ privacy. For example, one district is reported to have instructed teachers to record their class sessions. Teachers said that students’ full names were visible to anyone viewing the recording.
    • The GAO found gaps in the protection and enforcement of student privacy by the United States government:
      • [The Department of] Education is responsible for enforcing Family Educational Rights and Privacy Act (FERPA), which addresses the privacy of PII in student education records and applies to all schools that receive funds under an applicable program administered by Education. If parents or eligible students believe that their rights under FERPA have been violated, they may file a formal complaint with Education. In response, Education is required to take appropriate actions to enforce and deal with violations of FERPA. However, because the department’s authority under FERPA is directly related to the privacy of education records, Education’s security role is limited to incidents involving potential violations under FERPA. Further, FERPA amendments have not directly addressed educational technology use.
      • The “Children’s Online Privacy Protection Act” (COPPA) requires the Federal Trade Commission (FTC) to issue and enforce regulations concerning children’s privacy. The COPPA Rule, which took effect in 2000 and was later amended in 2013, requires operators of covered websites or online services that collect personal information from children under age 13 to provide notice and obtain parental consent, among other things. COPPA generally applies to the vendors who provide educational technology, rather than to schools directly. However, according to FTC guidance, schools can consent on behalf of parents to the collection of students’ personal information if such information is used for a school-authorized educational purpose and for no other commercial purpose.
  • Upturn, an advocacy organization that “advances equity and justice in the design, governance, and use of technology,” has released a report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. The means and vendors for breaking into phones have long been available in the U.S. and abroad, despite the claims of a number of nations like the Five Eyes (U.S., the United Kingdom, Australia, Canada, and New Zealand) that default end-to-end encryption is a growing problem that allows those preying on children and those engaged in terrorism to go undetected. In terms of possible bias, Upturn “is supported by the Ford Foundation, the Open Society Foundations, the John D. and Catherine T. MacArthur Foundation, Luminate, the Patrick J. McGovern Foundation, and Democracy Fund.”
    • Upturn stated:
      • Every day, law enforcement agencies across the country search thousands of cellphones, typically incident to arrest. To search phones, law enforcement agencies use mobile device forensic tools (MDFTs), a powerful technology that allows police to extract a full copy of data from a cellphone — all emails, texts, photos, location, app data, and more — which can then be programmatically searched. As one expert puts it, with the amount of sensitive information stored on smartphones today, the tools provide a “window into the soul.”
      • This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.
    • Upturn argued:
      • Law enforcement use these tools to investigate not only cases involving major harm, but also for graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. Given how routine these searches are today, together with racist policing policies and practices, it’s more than likely that these technologies disparately affect and are used against communities of color.
      • We believe that MDFTs are simply too powerful in the hands of law enforcement and should not be used. But recognizing that MDFTs are already in widespread use across the country, we offer a set of preliminary recommendations that we believe can, in the short-term, help reduce the use of MDFTs. These include:
        • banning the use of consent searches of mobile devices,
        • abolishing the plain view exception for digital searches,
        • requiring easy-to-understand audit logs,
        • enacting robust data deletion and sealing requirements, and
        • requiring clear public logging of law enforcement use.

Coming Events

  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11- 42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next- generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Mehmet Turgut Kirkgoz from Pixabay

Pending Legislation In U.S. Congress, Part VI

At this point, Congress is just looking to organize U.S. AI efforts, maximize resources, and better understand the field.

Today, let us survey bills on artificial intelligence (AI), an area of growing interest and concern among Democratic and Republican Members. Lawmakers and staff have been grappling with this new technology and, at this point, are looking to study and foster its development, particularly to maintain the technological dominance of the United States (U.S.). There are some bills that may get enacted this year. However, any legislative action would play out against extensive executive branch AI efforts. In any event, Congress does not seem close to passing legislation that would regulate the technology and is looking to rely on existing statutes and regulators (e.g., the Federal Trade Commission’s powers to police unfair and deceptive practices).

The bill with the best chances of enactment at present is the “National Artificial Intelligence Initiative Act of 2020” (H.R.6216), which was added to the “William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021” (H.R.6395), a bill that has other mostly defense related AI provisions.

Big picture, H.R. 6216 would require better coordination of federal AI initiatives, research, and funding, and more involvement in the development of voluntary, consensus-based standards for AI. Much of this would happen through a new “National Artificial Intelligence Initiative Office” stood up by the Office of Science and Technology Policy (OSTP) in the White House. This new entity would be the locus of AI activities and programs in the United States (U.S.) government, with the ultimate goal of ensuring the nation is the world’s foremost developer and user of the new technology.

Moreover, OSTP would “acting through the National Science and Technology Council…establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative.” This body would “provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities, development of voluntary consensus standards and guidelines for research, development, testing, and adoption of ethically developed, safe, and trustworthy artificial intelligence systems, and education and training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative.” The committee would need to “develop a strategic plan for AI” within two years and update it every three years thereafter. Moreover, the committee would need to “propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget (OMB) that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative.” However, OMB would be under no obligation to take notice of this proposal save for pressure from AI stakeholders in Congress or AI champions in any given Administration. The Secretary of Energy would create a ‘‘National Artificial Intelligence Advisory Committee” to advise the President and National Artificial Intelligence Initiative Office on a range of AI policy matters.

Federal agencies would be permitted to award funds to new Artificial Intelligence Research Institutes to pioneer research in any number of AI fields or considerations. The bill does not authorize any set amount of money for this program and instead kicks any funding decision over to the Appropriations Committees. The National Institute of Standards and Technology (NIST) must “support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems” among other duties. NIST would also “work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for the trustworthiness of artificial intelligence systems” and “develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies.”

The National Science Foundation (NSF) would need to “fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible non-profit organizations (or consortia thereof).” The Department of Energy must “carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department.” This department would also be tasked with advancing “expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations.”

According to a fact sheet issued by the House Science, Space, and Technology Committee, the legislation will:

  • Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
  • Create an advisory committee to better inform the Coordination Committee’s strategic plan, track the state of the science around artificial intelligence, and ensure the Initiative is meeting its goals.
  • Create a network of AI institutes, coordinated through the National Science Foundation, that any Federal department or agency could fund to create partnerships between academia and the public and private sectors to accelerate AI research focused on an economic sector, social sector, or on a cross-cutting AI challenge.
  • Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST) and require NIST to create a framework for managing risks associated with AI systems and best practices for sharing data to advance trustworthy AI systems.
  • Support research at the National Science Foundation (NSF) across a wide variety of AI related research areas to both improve AI systems and use those systems to advance other areas of science. This section requires NSF to include an obligation for an ethics statement for all research proposals to ensure researchers are considering, and as appropriate, mitigating potential societal risks in carrying out their research.
  • Support education and workforce development in AI and related fields, including through scholarships and traineeships at NSF.
  • Support AI research and development efforts at the Department of Energy (DOE), utilize DOE computing infrastructure for AI challenges, promote technology transfer, data sharing, and coordination with other Federal agencies, and require an ethics statement for DOE funded research as required at NSF.
  • Require studies to better understand workforce impacts and opportunities created by AI, and identify the computing resources necessary to ensure the United States remains competitive in AI.

As mentioned, the House’s FY 2021 NDAA has a number of other AI provisions, including:

  • Section 217–Modification of Joint Artificial Intelligence Research, Development, and Transition Activities. This section would amend section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (Public Law 115-232) by assigning responsibility for the Joint Artificial Intelligence Center (JAIC) to the Deputy Secretary of Defense and ensure data access and visibility for the JAIC.
  • Section 224–Board of Directors for the Joint Artificial Intelligence Center. This section would direct the Secretary of Defense to create and resource a Board of Directors for the Joint Artificial Intelligence Center (JAIC), comprised of senior Department of Defense officials, as well as civilian directors not employed by the Department of Defense. The objective would be to have a standing body over the JAIC that can bring governmental and non-governmental experts together for the purpose of assisting the Department of Defense in correctly integrating and operationalizing artificial intelligence technologies.
  • Section 242–Training for Human Resources Personnel in Artificial Intelligence and Related Topics. This section would direct the Secretary of Defense to develop and implement a program to provide human resources personnel with training in the fields of software development, data science, and artificial intelligence, as such fields relate to the duties of such personnel, not later 1 year after the date of the enactment of this Act.
  • Section 248–Acquisition of Ethically and Responsibly Developed Artificial Intelligence Technology. This section would direct the Secretary of Defense, acting through the Board of Directors of the Joint Artificial Intelligence Center, to conduct an assessment to determine whether the Department of Defense has the ability to ensure that any artificial intelligence technology acquired by the Department is ethically and responsibly developed.
  • Section 805–Acquisition Authority of the Director of the Joint Artificial Intelligence Center. This section would authorize the Director of the Joint Artificial Intelligence Center with responsibility for the development, acquisition, and sustainment of artificial intelligence technologies, services, and capabilities through fiscal year 2025.

The “FUTURE of Artificial Intelligence Act of 2020” (S.3771) was marked up and reported out of the Senate Commerce, Science, and Transportation Committee in July 2020. This bill would generally “require the Secretary of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence” to advise the department on a range of AI-related matters, including competitiveness, workforce, education, ethics training and development, the open sharing of data and research, international cooperation, legal and civil rights, government efficiency, and others. Additionally, a subcommittee would be empaneled to focus on the intersection of AI and law enforcement and national security issues. Within 18 months of enactment, this committee must submit its findings in a report to Congress and the Department of Commerce. A bill with the same title has been introduced in the House (H.R.7559) but has not been acted upon. This bill would “require the Director of the National Science Foundation, in consultation with the Director of the Office of Science and Technology Policy, to establish an advisory committee to advise the President on matters relating to the development of artificial intelligence.”

The same day S.3771 was marked up, the committee took up another AI bill: the “Advancing Artificial Intelligence Research Act of 2020” (S.3891) that would “require the Director of the National Institute of Standards and Technology (NIST) to advance the development of technical standards for artificial intelligence, to establish the National Program to Advance Artificial Intelligence Research, to promote research on artificial intelligence at the National Science Foundation” (NSF). $250 million a year would be authorized for NIST to distribute for AI research. NIST would also need to establish at least six AI research institutes. The NSF would “establish a pilot program to assess the feasibility and advisability of awarding grants for the conduct of research in rapidly evolving, high priority topics.”

In early November 2019, the Senate Homeland Security & Governmental Affairs Committee marked up the “AI in Government Act of 2019” (S.1363) that would establish an AI Center of Excellence in the General Services Administration (GSA) to:

  • promote the efforts of the Federal Government in developing innovative uses of and acquiring artificial intelligence technologies by the Federal Government;
  • improve cohesion and competency in the adoption and use of artificial intelligence within the Federal Government.

The bill stipulates that both of these goals would be pursued “for the purposes of benefitting the public and enhancing the productivity and efficiency of Federal Government operations.”

The Office of Management and Budget (OMB) must “issue a memorandum to the head of each agency that shall—

  • inform the development of policies regarding Federal acquisition and use by agencies regarding technologies that are empowered or enabled by artificial intelligence;
  • recommend approaches to remove barriers for use by agencies of artificial intelligence technologies in order to promote the innovative application of those technologies while protecting civil liberties, privacy, civil rights, and economic and national security; and
  • identify best practices for identifying, assessing, and mitigating any discriminatory impact or bias on the basis of any classification protected under Federal nondiscrimination laws, or any unintended consequence of the use of artificial intelligence by the Federal Government.”

OMB is required to coordinate the drafting of this memo with the Office of Science and Technology Policy, GSA, other relevant agencies, and other key stakeholders.

This week, the House passed its version of S.1363, the “AI in Government Act of 2019” (H.R.2575), by voice vote, sending it over to the Senate.

In September 2019, the House sent another AI bill to the Senate where it has not been taken up. The “Advancing Innovation to Assist Law Enforcement Act” (H.R.2613) would task the Financial Crimes Enforcement Network (FinCEN) with studying

  • the status of implementation and internal use of emerging technologies, including AI, digital identity technologies, blockchain technologies, and other innovative technologies within FinCEN;
  • whether AI, digital identity technologies, blockchain technologies, and other innovative technologies can be further leveraged to make FinCEN’s data analysis more efficient and effective; and
  • how FinCEN could better utilize AI, digital identity technologies, blockchain technologies, and other innovative technologies to more actively analyze and disseminate the information it collects and stores to provide investigative leads to Federal, State, Tribal, and local law enforcement, and other Federal agencies…and better support its ongoing investigations when referring a case to the Agencies.

All of these bills are being considered against a backdrop of significant Trump Administration action on AI, using existing authority to manage government operations. The Administration sees AI as playing a key role in ensuring and maintaining U.S. dominance in military affairs and in other realms.

Most recently, OMB and the Office of Science and Technology Policy (OSTP) released their annual guidance to United States departments and agencies to direct their budget requests for FY 2022 with respect to research and development (R&D). OMB and OSTP explained:

For FY2022, the five R&D budgetary priorities in this memorandum ensure that America remains at the global forefront of science and technology (S&T) discovery and innovation. The Industries of the Future (IotF) - artificial intelligence (AI), quantum information sciences (QIS), advanced communication networks/5G, advanced manufacturing, and biotechnology - remain the Administration’s top R&D priority.

Specifically, regarding AI, OMB and OSTP stated:

Artificial Intelligence: Departments and agencies should prioritize research investments consistent with the Executive Order (EO) 13859 on Maintaining American Leadership in Artificial Intelligence and the 2019 update of the National Artificial Intelligence Research and Development Strategic Plan. Transformative basic research priorities include research on ethical issues of AI, data-efficient and high performance machine learning (ML) techniques, cognitive AI, secure and trustworthy AI, scalable and robust AI, integrated and interactive AI, and novel AI hardware. The current pandemic highlights the importance of use-inspired AI research for healthcare, including AI for discovery of therapeutics and vaccines; AI-based search of publications and patents for scientific insights; and AI for improved imaging, diagnosis, and data analysis. Beyond healthcare, use-inspired AI research for scientific and engineering discovery across many domains can help the Nation address future crises. AI infrastructure investments are prioritized, including national institutes and testbeds for AI development, testing, and evaluation; data and model resources for AI R&D; and open knowledge networks. Research is also prioritized for the development of AI measures, evaluation methodologies, and standards, including quantification of trustworthy AI in dimensions of accuracy, fairness, robustness, explainability, and transparency.

In February 2020, OSTP published the “American Artificial Intelligence Initiative: Year One Annual Report” in which the agency claimed “the Trump Administration has made critical progress in carrying out this national strategy and continues to make United States leadership in [artificial intelligence] (AI) a top priority.” OSTP asserted that “[s]ince the signing of the EO, the United States has made significant progress on achieving the objectives of this national strategy…[and] [t]his document provides both a summary of progress and a continued long-term vision for the American AI Initiative.” Some agencies were working on AI-related initiatives independently of the EO, but the White House has folded those into the larger AI strategy it is pursuing. Much of the document recites already announced developments and steps.

However, OSTP seems to reference a national AI strategy that differs a bit from the one laid out in EO 13859 and appears to represent the Administration’s evolved thinking on how to address AI across a number of dimensions in the form of “key policies and practices”:

1)  Invest in AI research and development: The United States must promote Federal investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI. President Trump called for a 2-year doubling of non-defense AI R&D in his fiscal year (FY) 2021 budget proposal, and in 2019 the Administration updated its AI R&D strategic plan, developed the first progress report describing the impact of Federal R&D investments, and published the first-ever reporting of government-wide non-defense AI R&D spending.

2)  Unleash AI resources: The United States must enhance access to high-quality Federal data, models, and computing resources to increase their value for AI R&D, while maintaining and extending safety, security, privacy, and confidentiality protections. The American AI Initiative called on Federal agencies to identify new opportunities to increase access to and use of Federal data and models. In 2019, the White House Office of Management and Budget established the Federal Data Strategy as a framework for operational principles and best practices around how Federal agencies use and manage data. 

3) Remove barriers to AI innovation: The United States must reduce barriers to the safe development, testing, deployment, and adoption of AI technologies by providing guidance for the governance of AI consistent with our Nation’s values and by driving the development of appropriate AI technical standards. As part of the American AI Initiative, the White House published for comment the proposed United States AI Regulatory Principles, the first AI regulatory policy that advances innovation underpinned by American values and good regulatory practices. In addition, the National Institute of Standards and Technology (NIST) issued the first-ever strategy for Federal engagement in the development of AI technical standards. 

4) Train an AI-ready workforce: The United States must empower current and future generations of American workers through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI. President Trump directed all Federal agencies to prioritize AI-related apprenticeship and job training programs and opportunities. In addition to its R&D focus, the National Science Foundation’s new National AI Research Institutes program will also contribute to workforce development, particularly of AI researchers. 

5) Promote an international environment supportive of American AI innovation: The United States must engage internationally to promote a global environment that supports American AI research and innovation and opens markets for American AI industries while also protecting our technological advantage in AI. Last year, the United States led historic efforts at the Organisation for Economic Cooperation and Development (OECD) to develop the first international consensus agreements on fundamental principles for the stewardship of trustworthy AI. The United States also worked with its international partners in the G7 and G20 to adopt similar AI principles. 

6) Embrace trustworthy AI for government services and missions: The United States must embrace technology such as artificial intelligence to improve the provision and efficiency of government services to the American people and ensure its application shows due respect for our Nation’s values, including privacy, civil rights, and civil liberties. The General Services Administration established an AI Center of Excellence to enable Federal agencies to determine best practices for incorporating AI into their organizations. 

Also in February 2020, the Department of Defense (DOD) announced in a press release that it “officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October.” The DOD claimed “[t]he adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems.” The Pentagon added “[t]he DOD’s AI ethical principles will build on the U.S. military’s existing ethics framework based on the U.S. Constitution, Title 10 of the U.S. Code, Law of War, existing international treaties and longstanding norms and values.” The DOD stated “[t]he DOD Joint Artificial Intelligence Center (JAIC) will be the focal point for coordinating implementation of AI ethical principles for the department.”

The DOD explained that “[t]hese principles will apply to both combat and non-combat functions and assist the U.S. military in upholding legal, ethical and policy commitments in the field of AI…[and] encompass five major areas:

  • Responsible. DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  • Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

It bears noting that the DOD’s recitation of these five AI ethical principles differs from those drafted by the Defense Innovation Board. Notably, in “Equitable,” the Defense Innovation Board also included that the “DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons” (emphasis added). Likewise, in “Governable,” the Board recommended that “DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior” (emphasis added).

Additionally, the DOD has declined, at least at this time, to adopt the recommendations made by the Board regarding the use of AI:

1. Formalize these principles via official DOD channels. 

2. Establish a DOD-wide AI Steering Committee. 

3. Cultivate and grow the field of AI engineering. 

4. Enhance DOD training and workforce programs. 

5. Invest in research on novel security aspects of AI. 

6. Invest in research to bolster reproducibility. 

7. Define reliability benchmarks. 

8. Strengthen AI test and evaluation techniques. 

9. Develop a risk management methodology. 

10. Ensure proper implementation of AI ethics principles. 

11. Expand research into understanding how to implement AI ethics principles.

12. Convene an annual conference on AI safety, security, and robustness. 

In January 2020, OMB and OSTP requested comments on a draft “Guidance for Regulation of Artificial Intelligence Applications” that would be issued to federal agencies as directed by EO 13859. OMB listed the ten AI principles agencies must use in regulating AI in the private sector, some of which overlap with the DOD’s ethical principles:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination


OSTP explained how the ten AI principles should be used:

Consistent with law, agencies should take into consideration the following principles when formulating regulatory and non-regulatory approaches to the design, development, deployment, and operation of AI applications, both general and sector-specific. These principles, many of which are interrelated, reflect the goals and principles in Executive Order 13859. Agencies should calibrate approaches concerning these principles and consider case-specific factors to optimize net benefits. Given that many AI applications do not necessarily raise novel issues, these considerations also reflect longstanding Federal regulatory principles and practices that are relevant to promoting the innovative use of AI. Promoting innovation and growth of AI is a high priority of the United States government. Fostering innovation and growth through forbearing from new regulations may be appropriate. Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.

In November 2019, the National Security Commission on Artificial Intelligence (NSCAI) released its interim report and explained that “[b]etween now and the publication of our final report, the Commission will pursue answers to hard problems, develop concrete recommendations on “methods and means” to integrate AI into national security missions, and make itself available to Congress and the executive branch to inform evidence-based decisions about resources, policy, and strategy.” The Commission released its initial report in July that laid out its work plan.

In July 2020, NSCAI published its Second Quarter Recommendations, a compilation of policy proposals made that quarter. NSCAI said it was still on track to release its final recommendations in March 2021. The NSCAI asserted

The recommendations are not a comprehensive follow-up to the interim report or first quarter memorandum. They do not cover all areas that will be included in the final report. This memo spells out recommendations that can inform ongoing deliberations tied to policy, budget, and legislative calendars. But it also introduces recommendations designed to build a new framework for pivoting national security for the artificial intelligence (AI) era.

In August 2019, NIST published “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” as required by EO 13859. The EO directed the Secretary of Commerce, through NIST, to issue “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies” that must include:

(A) Federal priority needs for standardization of AI systems development and deployment;

(B) identification of standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles; and

(C) opportunities for and challenges to United States leadership in standardization related to AI technologies.

NIST’s AI plan meets those requirements in the broadest of strokes and will require much from the Administration and agencies to be realized, including further steps required by the EO.

Finally, all these Trump Administration efforts are playing out at the same time as parallel global processes. In late May 2019, the Organization for Economic Cooperation and Development (OECD) adopted recommendations from the OECD Council on Artificial Intelligence (AI), and non-OECD members Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania also pledged to adhere to them. Of course, OECD recommendations have no legally binding force on any nation, but standards articulated by the OECD are highly respected and sometimes do form the basis for nations’ approaches to an issue, as the 1980 OECD recommendations on privacy did. Moreover, the National Telecommunications and Information Administration (NTIA) signaled the Trump Administration’s endorsement of the OECD effort. In February 2020, the European Commission (EC) released its latest policy pronouncement on artificial intelligence, “On Artificial Intelligence – A European approach to excellence and trust,” in which the Commission articulates its support for “a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology.” The EC stated that “[t]he purpose of this White Paper is to set out policy options on how to achieve these objectives…[but] does not address the development and use of AI for military purposes.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (19 August)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” By 21 August, the FTC “is seeking comment on a range of issues including:
    • How are companies currently implementing data portability? What are the different contexts in which data portability has been implemented?
    • What have been the benefits and costs of data portability? What are the benefits and costs of achieving data portability through regulation?
    • To what extent has data portability increased or decreased competition?
    • Are there research studies, surveys, or other information on the impact of data portability on consumer autonomy and trust?
    • Does data portability work better in some contexts than others (e.g., banking, health, social media)? Does it work better for particular types of information over others (e.g., information the consumer provides to the business vs. all information the business has about the consumer, information about the consumer alone vs. information that implicates others such as photos of multiple people, comment threads)?
    • Who should be responsible for the security of personal data in transit between businesses? Should there be data security standards for transmitting personal data between businesses? Who should develop these standards?
    • How do companies verify the identity of the requesting consumer before transmitting their information to another company?
    • How can interoperability among services best be achieved? What are the costs of interoperability? Who should be responsible for achieving interoperability?
    • What lessons and best practices can be learned from the implementation of the data portability requirements in the GDPR and CCPA? Has the implementation of these requirements affected competition and, if so, in what ways?”
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The United States (U.S.) Department of Commerce tightened its chokehold on Huawei’s access to the U.S. semiconductors and chipsets vital to its equipment and services. This rule follows a May rule that significantly closed off Huawei’s access, to the point that many analysts project the People’s Republic of China (PRC)-based company will run out of these crucial technologies sometime next year without a suitable substitute, meaning the company may not be able to sell its smartphones and other leading products. In its press release, the department asserted the new rule “further restricts Huawei from obtaining foreign made chips developed or produced from U.S. software or technology to the same degree as comparable U.S. chips.”
    • Secretary of Commerce Wilbur Ross argued “Huawei and its foreign affiliates have extended their efforts to obtain advanced semiconductors developed or produced from U.S. software and technology in order to fulfill the policy objectives of the Chinese Communist Party.” He contended “[a]s we have restricted its access to U.S. technology, Huawei and its affiliates have worked through third parties to harness U.S. technology in a manner that undermines U.S. national security and foreign policy interests…[and] [t]his multi-pronged action demonstrates our continuing commitment to impede Huawei’s ability to do so.”
    • The Department of Commerce’s Bureau of Industry and Security (BIS) stated in the final rule that it is “making three sets of changes to controls for Huawei and its listed non-U.S. affiliates under the Export Administration Regulations (EAR):
      • First, BIS is adding additional non-U.S. affiliates of Huawei to the Entity List because they also pose a significant risk of involvement in activities contrary to the national security or foreign policy interests of the United States.
      • Second, this rule removes a temporary general license for Huawei and its non-U.S. affiliates and replaces those provisions with a more limited authorization that will better protect U.S. national security and foreign policy interests.
      • Third, in response to public comments, this final rule amends General Prohibition Three, also known as the foreign-produced direct product rule, to revise the control over certain foreign-produced items recently implemented by BIS.”
    • BIS claimed “[t]hese revisions promote U.S. national security by limiting access to, and use of, U.S. technology to design and produce items outside the United States by entities that pose a significant risk of involvement in activities contrary to the national security or foreign policy interests of the United States.”
    • One technology analyst claimed “[t]he U.S. moves represent a significant tightening of restrictions over Huawei’s ability to procure semiconductors…[and] [t]hat puts into significant jeopardy its ability to continue manufacturing smartphones and base stations, which are its core products.”
  • The Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) have released their annual guidance to United States department and agencies to direct their budget requests for FY 2022 with respect to research and development (R&D). OMB explained:
  • For FY2022, the five R&D budgetary priorities in this memorandum ensure that America remains at the global forefront of science and technology (S&T) discovery and innovation. The Industries of the Future (IotF) – artificial intelligence (AI), quantum information sciences (QIS), advanced communication networks/5G, advanced manufacturing, and biotechnology – remain the Administration’s top R&D priority. This includes fulfilling President Trump’s commitment to double non-defense AI and QIS funding by FY2022:
    • American Public Health Security and Innovation
    • American Leadership in the Industries of the Future and Related Technologies
    • American Security
    • American Energy and Environmental Leadership
    • American Space Leadership
  • In light of the significant health and economic disruption caused by the COVID-19 pandemic, the FY2022 memorandum includes a new R&D priority aimed at American Public Health Security and Innovation. This priority brings under a single, comprehensive umbrella biomedical and biotechnology R&D aimed at responding to the pandemic and ensuring the U.S. S&T enterprise is maximally prepared for any health-related threats.
  • Lastly, this memorandum also describes four high-priority crosscutting actions. These actions include research and related strategies that underpin the five R&D priorities and ensure departments and agencies deliver maximum return on investment to the American people:
    • Build the S&T Workforce of the Future
    • Optimize Research Environments and Results
    • Facilitate Multisector Partnerships and Technology Transfer
    • Leverage the Power of Data
  • Despite the Trump Administration touting its R&D priorities and achievements, the non-partisan Congressional Research Service noted
    • President Trump’s budget request for FY2021 includes approximately $142.2 billion for research and development (R&D) for FY 2021, $13.8 billion (8.8%) below the FY2020 enacted level of $156.0 billion. In constant FY 2020 dollars, the President’s FY 2021 R&D request would result in a decrease of $16.6 billion (10.6%) from the FY 2020 level.
  • Two key chairs of subcommittees of the Senate Commerce, Science, and Transportation Committee are pressing the Federal Trade Commission (FTC) to investigate TikTok’s data collection and processing practices. This Committee has primary jurisdiction over the FTC in the Senate and is a key stakeholder on data and privacy issues.
    • In their letter, Consumer Protection Subcommittee Chair Jerry Moran (R-KS) and Communications, Technology, Innovation Chair John Thune (R-SD) explained they “are seeking specific answers from the FTC related to allegations from a Wall Street Journal article that described TikTok’s undisclosed collection and transmission of unique persistent identifiers from millions of U.S. consumers until November 2019…[that] also described questionable activity by the company as it relates to the transparency of these data collection activities, and the letter seeks clarity on these practices.”
    • Moran and Thune asserted “there are allegations that TikTok discretely collected media access control (MAC) addresses, commonly used for advertisement targeting purposes, through Google Android’s operating system under an “unusual layer of encryption” through November 2019.” They said “[g]iven these reports and their potential relevancy to the “Executive Order on Addressing the Threat Posed by TikTok,” we urge the Federal Trade Commission (FTC) to investigate the company’s consumer data collection and processing practices as they relate to these accusations and other possible harmful activities posed to consumers.”
    • If the FTC were to investigate, find wrongdoing, and seek civil fines against TikTok, the next owner may be left to pay as the White House’s order to ByteDance to sell the company within three months will almost certainly be consummated before any FTC action is completed.
  • Massachusetts Attorney General Maura Healey (D) has established a “Data Privacy and Security Division within her office to protect consumers from the surge of threats to the privacy and security of their data in an ever-changing digital economy.” Healey has been one of the United States’ more active attorneys general on data privacy and technology issues, including her suit and settlement with Equifax for its massive data breach.
    • Her office explained:
      • The Data Privacy and Security Division investigates online threats and the unfair or deceptive collection, use, and disclosure of consumers’ personal data through digital technologies. The Division aims to empower consumers in the digital economy, ensure that companies are protecting consumers’ personal data from breach, protect equal and open access to the internet, and protect consumers from data-driven technologies that unlawfully deny them fair access to socioeconomic opportunities. The Division embodies AG Healey’s commitment to continue and grow on this critical work and ensure that data-driven technologies operate lawfully for the benefit of all consumers.
  • A California appeals court ruled that Amazon can be held liable for defective products third parties sell on its website. The appellate court reversed the trial court, which had held Amazon could not be liable.
    • The appeals court recited the facts of the case:
      • Plaintiff Angela Bolger bought a replacement laptop computer battery on Amazon, the popular online shopping website operated by defendant Amazon.com, LLC. The Amazon listing for the battery identified the seller as “E-Life,” a fictitious name used on Amazon by Lenoge Technology (HK) Ltd. (Lenoge). Amazon charged Bolger for the purchase, retrieved the laptop battery from its location in an Amazon warehouse, prepared the battery for shipment in Amazon-branded packaging, and sent it to Bolger. Bolger alleges the battery exploded several months later, and she suffered severe burns as a result.
      • Bolger sued Amazon and several other defendants, including Lenoge. She alleged causes of action for strict products liability, negligent products liability, breach of implied warranty, breach of express warranty, and “negligence/negligent undertaking.”
    • The appeals court continued:
      • Amazon moved for summary judgment. It primarily argued that the doctrine of strict products liability, as well as any similar tort theory, did not apply to it because it did not distribute, manufacture, or sell the product in question. It claimed its website was an “online marketplace” and E-Life (Lenoge) was the product seller, not Amazon. The trial court agreed, granted Amazon’s motion, and entered judgment accordingly.
      • Bolger appeals. She argues that Amazon is strictly liable for defective products offered on its website by third-party sellers like Lenoge. In the circumstances of this case, we agree.
  • The National Institute of Standards and Technology (NIST) issued Special Publication 800-207, “Zero Trust Architecture,” that posits a different conceptual model for an organization’s cybersecurity than perimeter security. NIST claimed:
    • Zero trust security models assume that an attacker is present in the environment and that an enterprise-owned environment is no different—or no more trustworthy—than any nonenterprise-owned environment. In this new paradigm, an enterprise must assume no implicit trust and continually analyze and evaluate the risks to its assets and business functions and then enact protections to mitigate these risks. In zero trust, these protections usually involve minimizing access to resources (such as data and compute resources and applications/services) to only those subjects and assets identified as needing access as well as continually authenticating and authorizing the identity and security posture of each access request.
    • A zero trust architecture (ZTA) is an enterprise cybersecurity architecture that is based on zero trust principles and designed to prevent data breaches and limit internal lateral movement. This publication discusses ZTA, its logical components, possible deployment scenarios, and threats. It also presents a general road map for organizations wishing to migrate to a zero trust design approach and discusses relevant federal policies that may impact or influence a zero trust architecture.
    • ZT is not a single architecture but a set of guiding principles for workflow, system design and operations that can be used to improve the security posture of any classification or sensitivity level [FIPS199]. Transitioning to ZTA is a journey concerning how an organization evaluates risk in its mission and cannot simply be accomplished with a wholesale replacement of technology. That said, many organizations already have elements of a ZTA in their enterprise infrastructure today. Organizations should seek to incrementally implement zero trust principles, process changes, and technology solutions that protect their data assets and business functions by use case. Most enterprise infrastructures will operate in a hybrid zero trust/perimeter-based mode while continuing to invest in IT modernization initiatives and improve organization business processes.
  • The United Kingdom’s Government Communications Headquarters’ (GCHQ) National Cyber Security Centre (NCSC) released “Cyber insurance guidance” “for organisations of all sizes who are considering purchasing cyber insurance…not intended to be a comprehensive cyber insurance buyers guide, but instead focuses on the cyber security aspects of cyber insurance.” The NCSC stated “[i]f you are considering cyber insurance, these questions can be used to frame your discussions…[and] [t]his guidance focuses on standalone cyber insurance policies, but many of these questions may be relevant to cyber insurance where it is included in other policies.”
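NIST’s zero trust model described above can be illustrated with a minimal sketch: every access request is evaluated on its own merits (identity, device posture, explicit policy) with deny-by-default, rather than being trusted because it originates inside a network perimeter. This is an illustrative toy, not anything from SP 800-207 itself; the attribute names and policy table are hypothetical.

```python
# Toy zero trust policy decision point (PDP): deny by default,
# evaluate identity and device posture on every request.
# Attribute names and the POLICY table are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject_id: str
    authenticated: bool    # identity verified for this specific request
    device_compliant: bool # device posture check passed for this request
    resource: str

# Explicit allow-list: no subject-resource pair is trusted implicitly.
POLICY = {
    ("alice", "payroll-db"): True,
}

def decide(req: AccessRequest) -> bool:
    """Grant access only if authentication, device posture, and an
    explicit policy entry all check out; otherwise deny."""
    if not (req.authenticated and req.device_compliant):
        return False
    return POLICY.get((req.subject_id, req.resource), False)

print(decide(AccessRequest("alice", True, True, "payroll-db")))   # granted
print(decide(AccessRequest("alice", True, False, "payroll-db")))  # denied: posture failed
print(decide(AccessRequest("bob", True, True, "payroll-db")))     # denied: no policy entry
```

The point of the sketch is the shape of the decision, not the mechanism: in a real ZTA deployment the policy engine would continually re-evaluate risk signals per SP 800-207’s logical components, and enforcement would sit in a separate policy enforcement point.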

Further Reading

  • “I downloaded Covidwise, America’s first Bluetooth exposure-notification app. You should, too.” By Geoffrey Fowler – The Washington Post. The paper’s technology columnist blesses the Apple/Google Bluetooth exposure app and claims it protects privacy. One person on Twitter pointed out the Android version will not work unless location services are turned on, which is contrary to the claims made by Google and Apple, an issue the New York Times investigated last month. A number of European nations have pressed Google to remove this feature, and a Google spokesperson claimed the Android Bluetooth tracing capability did not use location services, raising the question of why the prompt appears. Moreover, one of the apps Fowler names has had its own privacy issues as detailed by The Washington Post in May. As it turns out Care19, a contact tracing app developed when the governor of North Dakota asked a friend who had designed an app for football fans to meet up, is violating its own privacy policy according to Jumbo, the maker of privacy software. Apparently, Care19 shares location and personal data with Foursquare when used on iPhones. Both Apple and state officials are at a loss to explain how this went unnoticed when the app was scrubbed for technical and privacy problems before being rolled out.
  • Truss leads China hawks trying to derail TikTok’s London HQ plan” By Dan Sabbagh – The Guardian. ByteDance’s plan to establish a headquarters in London is now under attack by members of the ruling Conservative party for the company’s alleged role in persecuting the Uighur minority in Xinjiang. ByteDance has been eager to move to London and also eager to avoid the treatment that another tech company from the People’s Republic of China has gotten in the United Kingdom (UK): Huawei. Nonetheless, this decision may turn political as the government’s reversal on Huawei and 5G did. Incidentally, if Microsoft does buy part of TikTok, it would be buying operations in four of the five Five Eyes nations but not the UK.
  • Human Rights Commission warns government over ‘dangerous’ use of AI” By Fergus Hunter – The Sydney Morning Herald. A cautionary tale regarding the use of artificial intelligence and algorithms in government decision-making. While this article nominally pertains to Australia’s Human Rights Commission advice to the country’s government, it is based, in large part, on a scandal in which an automated process illegally collected $721 million AUD from welfare beneficiaries. In the view of the Human Rights Commission, decision-making by humans is still preferable and more accurate than automated means.
  • “The Attack That Broke Twitter Is Hitting Dozens of Companies” By Andy Greenberg – WIRED. In the never-ending permutations of hacking, the past has become the present because the Twitter hackers used phone calls to talk their way into gaining access to a number of high-profile accounts (aka phone spear phishing). Other companies are suffering the same onslaught, proving the axiom that people may be the weakest link in cybersecurity. However, the phone calls are based on exacting research and preparation as hackers scour the internet for information on their targets and the companies themselves. A similar hack was reportedly executed by the Democratic People’s Republic of Korea (DPRK) against Israeli defense firms.
  • Miami Police Used Facial Recognition Technology in Protester’s Arrest” By Connie Fossi and Phil Prazan – NBC Miami. The Miami Police Department used Clearview AI to identify a protestor that allegedly injured an officer but did not divulge this fact to the accused or her attorney. The department’s policy on facial recognition technology bars officers from making arrests solely on the basis of identification through such a system. Given the error rates many facial recognition systems have experienced with identifying minorities and the use of masks during the pandemic, which further decreases accuracy, it is quite likely people will be wrongfully accused and convicted using this technology.
  • Big Tech’s Domination of Business Reaches New Heights” By Peter Eavis and Steve Lohr – The New York Times. Big tech has gotten larger, more powerful, and more indispensable in the United States (U.S.) during the pandemic, and one needs to go back to the railroads in the late 19th Century to find comparable companies. It is an open question whether their size and influence will change much no matter who is president of the U.S. next year.
  • License plate tracking for police set to go nationwide” By Alfred Ng – c/net. A de facto national license plate reader may soon be activated in the United States (U.S.). Flock Safety unveiled the “Total Analytics Law Officers Network,” (TALON) that will link its systems of cameras in more than 700 cities, allowing police departments to track cars across multiple jurisdictions. As the U.S. has no national laws regulating the use of this and other similar technologies, private companies may set policy for the country in the short term.


Further Reading, Other Developments, and Coming Events (17 August)

Here are Coming Events, Other Developments, and Further Reading.

Coming Events

  • On 18 August, the National Institute of Standards and Technology (NIST) will host the “Bias in AI Workshop, a virtual event to develop a shared understanding of bias in AI, what it is, and how to measure it.”
  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” Comments are due by 21 August; the FTC “is seeking comment on a range of issues including:
    • How are companies currently implementing data portability? What are the different contexts in which data portability has been implemented?
    • What have been the benefits and costs of data portability? What are the benefits and costs of achieving data portability through regulation?
    • To what extent has data portability increased or decreased competition?
    • Are there research studies, surveys, or other information on the impact of data portability on consumer autonomy and trust?
    • Does data portability work better in some contexts than others (e.g., banking, health, social media)? Does it work better for particular types of information over others (e.g., information the consumer provides to the business vs. all information the business has about the consumer, information about the consumer alone vs. information that implicates others such as photos of multiple people, comment threads)?
    • Who should be responsible for the security of personal data in transit between businesses? Should there be data security standards for transmitting personal data between businesses? Who should develop these standards?
    • How do companies verify the identity of the requesting consumer before transmitting their information to another company?
    • How can interoperability among services best be achieved? What are the costs of interoperability? Who should be responsible for achieving interoperability?
    • What lessons and best practices can be learned from the implementation of the data portability requirements in the GDPR and CCPA? Has the implementation of these requirements affected competition and, if so, in what ways?”
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • On 14 August, the California Office of Administrative Law (OAL) approved the Attorney General’s proposed final regulations to implement the California Consumer Privacy Act (CCPA) (A.B.375), and they took effect that day. The Office of the Attorney General (OAG) had requested expedited review so that the regulations could become effective on 1 July as required by the CCPA. With respect to the substance, the final regulations are very similar to the third round of regulations circulated for comment in March, which responded in part to legislation passed and signed into law last fall that modified the CCPA.
    • The OAL released an Addendum to the Final Statement of Reasons and explained
      • In addition to withdrawing certain provisions for additional consideration, the OAG has made the following non-substantive changes for accuracy, consistency, and clarity. Changes to the original text of a regulation are non-substantive if they clarify without materially altering the requirements, rights, responsibilities, conditions, or prescriptions contained in the original text.
    • For further reading on the third round of proposed CCPA regulations, see this issue of the Technology Policy Update, for the second round, see here, and for the first round, see here. Additionally, to read more on the legislation signed into law last fall, modifying the CCPA, see this issue.
    • Additionally, Californians for Consumer Privacy has succeeded in placing the “California Privacy Rights Act” (CPRA) on the November 2020 ballot. This follow-on statute to the CCPA could again force the legislature into making a deal that would revamp privacy laws in California, as happened when the CCPA was added to the ballot in 2018. It is also possible the initiative stays on the ballot and is approved by voters, adding the CPRA to California’s laws. In either case, much of the CCPA and its regulations may be moot or in effect for only the few years it takes for a new privacy regulatory structure to be established as laid out in the CPRA. See here for more detail.
  • In a proposed rule issued for comment, the Federal Communications Commission (FCC) explained it is taking “further steps to protect the nation’s communications networks from potential security threats as the [FCC] integrates provisions of the recently enacted Secure and Trusted Communications Networks Act of 2019 (Secure Networks Act) (P.L. 116-124) into its existing supply chain rulemaking proceeding….[and] seeks comment on proposals to implement further Congressional direction in the Secure Networks Act.” Comments are due by 31 August.
    • The FCC explained
      • The concurrently adopted Declaratory Ruling finds that the 2019 Supply Chain Order, 85 FR 230, January 3, 2020, satisfies the Secure Networks Act’s requirement that the Commission prohibit the use of funds for covered equipment and services. The Commission now seeks comment on sections 2, 3, 5, and 7 of the Secure Networks Act, including on how these provisions interact with our ongoing efforts to secure the communications supply chain. As required by section 2, the Commission proposes several processes by which to publish a list of covered communications equipment and services. Consistent with sections 3, 5, and 7 of the Secure Networks Act, the Commission proposes to (1) ban the use of federal subsidies for any equipment or services on the new list of covered communications equipment and services; (2) require that all providers of advanced communications service report whether they use any covered communications equipment and services; and (3) establish regulations to prevent waste, fraud, and abuse in the proposed reimbursement program to remove, replace, and dispose of insecure equipment.
    • The agency added
      • The Commission also initially designated Huawei Technologies Company (Huawei) and ZTE Corporation (ZTE) as covered companies for purposes of this rule, and it established a process for designating additional covered companies in the future. Additionally, last month, the Commission’s Public Safety and Homeland Security Bureau issued final designations of Huawei and ZTE as covered companies, thereby prohibiting the use of USF funds on equipment or services produced or provided by these two suppliers.
      • The Commission takes further steps to protect the nation’s communications networks from potential security threats as it integrates provisions of the recently enacted Secure Networks Act into the Commission’s existing supply chain rulemaking proceeding. The Commission seeks comment on proposals to implement further Congressional direction in the Secure Networks Act.
  • The White House’s Office of Science & Technology Policy (OSTP) released a request for information (RFI) “[o]n behalf of the National Science and Technology Council’s (NSTC) Subcommittee on Resilience Science and Technology (SRST), OSTP requests input from all interested parties on the development of a National Research and Development Plan for Positioning, Navigation, and Timing (PNT) Resilience.” OSTP stated “[t]he plan will focus on the research and development (R&D) and pilot testing needed to develop additional PNT systems and services that are resilient to interference and manipulation and that are not dependent upon global navigation satellite systems (GNSS)…[and] will also include approaches to integrate and use multiple PNT services for enhancing resilience. The input received on these topics will assist the Subcommittee in developing recommendations for prioritization of R&D activities.”
    • Executive Order 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services, was issued on February 12, 2020, and President Donald Trump explained the policy basis for the initiative:
      • It is the policy of the United States to ensure that disruption or manipulation of PNT services does not undermine the reliable and efficient functioning of its critical infrastructure. The Federal Government must increase the Nation’s awareness of the extent to which critical infrastructure depends on, or is enhanced by, PNT services, and it must ensure critical infrastructure can withstand disruption or manipulation of PNT services. To this end, the Federal Government shall engage the public and private sectors to identify and promote the responsible use of PNT services.
    • In terms of future steps under the EO, the President directed the following:
      • The Departments of Defense, Transportation, and Homeland Security must use the PNT profiles in updates to the Federal Radionavigation Plan.
      • The Department of Homeland Security must “develop a plan to test the vulnerabilities of critical infrastructure systems, networks, and assets in the event of disruption and manipulation of PNT services. The results of the tests carried out under that plan shall be used to inform updates to the PNT profiles…”
      • The heads of Sector-Specific Agencies (SSAs) and the heads of other executive departments and agencies (agencies) coordinating with the Department of Homeland Security, must “develop contractual language for inclusion of the relevant information from the PNT profiles in the requirements for Federal contracts for products, systems, and services that integrate or utilize PNT services, with the goal of encouraging the private sector to use additional PNT services and develop new robust and secure PNT services. The heads of SSAs and the heads of other agencies, as appropriate, shall update the requirements as necessary.”
      • The Federal Acquisition Regulatory Council, in consultation with the heads of SSAs and the heads of other agencies, as appropriate, shall incorporate the [contractual language] into Federal contracts for products, systems, and services that integrate or use PNT services.
      • The Office of Science and Technology Policy (OSTP) must “coordinate the development of a national plan, which shall be informed by existing initiatives, for the R&D and pilot testing of additional, robust, and secure PNT services that are not dependent on global navigation satellite systems (GNSS).”
  • An ideologically diverse bipartisan group of Senators wrote to the official at the United States Department of Justice in charge of the antitrust division and to the chair of the Federal Trade Commission (FTC) “regarding allegations of potentially anticompetitive practices and conduct by online platforms toward content creators and emerging competitors….[that] stemmed from a recent Wall Street Journal report that Alphabet Inc., the parent company of Google and YouTube, has designed Google Search to specifically give preference to YouTube and other Google-owned video service providers.”
    • The Members asserted
      • There is no public insight into how Google designs its algorithms, which seem to deliver up preferential search results for YouTube and other Google video products ahead of other competitive services. While a company favoring its own products, in and of itself, may not always constitute illegal anticompetitive conduct, the Journal further reports that a significant motivation behind this action was to “give YouTube more leverage in business deals with content providers seeking traffic for their videos….” This exact conduct was the topic of a Senate Antitrust Subcommittee hearing led by Senators Lee and Klobuchar in March this year.
    • Senators Thom Tillis (R-NC), Mike Lee (R-UT), Amy Klobuchar (D-MN), Richard Blumenthal (D-CT), Marsha Blackburn (R-TN), Josh Hawley (R-MO), Elizabeth Warren (D-MA), Mazie Hirono (D-HI), Cory Booker (D-NJ) and Ted Cruz (R-TX) signed the letter.
  • The National Security Agency (NSA) and the Federal Bureau of Investigation (FBI) released a “Cybersecurity Advisory [and a fact sheet and FAQ] about previously undisclosed Russian malware” “called Drovorub, designed for Linux systems as part of its cyber espionage operations.” The NSA and FBI asserted “[t]he Russian General Staff Main Intelligence Directorate (GRU) 85th Main Special Service Center (GTsSS) military unit 26165” developed and deployed the malware. The NSA and FBI stated the GRU and GTsSS are “sometimes publicly associated with APT28, Fancy Bear, Strontium, and a variety of other identities as tracked by the private sector.”
    • The agencies contended
      • Drovorub represents a threat to National Security Systems, Department of Defense, and Defense Industrial Base customers that use Linux systems. Network defenders and system administrators can find detection strategies, mitigation techniques, and configuration recommendations in the advisory to reduce the risk of compromise.
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) published Cybersecurity Best Practices for Operating Commercial Unmanned Aircraft Systems (UAS), “a companion piece to CISA’s Foreign Manufactured UASs Industry Alert,…[to] assist in standing up a new UAS program or securing an existing UAS program, and is intended for information technology managers and personnel involved in UAS operations.” CISA cautioned that “[s]imilar to other cybersecurity guidelines and best practices, the identified best practices can aid critical infrastructure operators to lower the cybersecurity risks associated with the use of UAS, but do not eliminate all risk.”
  • The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released the “Identity, Credential, and Access Management (ICAM) Value Proposition Suite of documents in collaboration with SAFECOM and the National Council of Statewide Interoperability Coordinators (NCSWIC), Office of the Director of National Intelligence (ODNI), and Georgia Tech Research Institute (GTRI)…[that] introduce[] ICAM concepts, explores federated ICAM use-cases, and highlights the potential benefits for the public safety community:”
    • ICAM Value Proposition Overview
      • This document provides a high-level summary of federated ICAM benefits and introduces domain-specific scenarios covered by other documents in the suite.
    • ICAM Value Proposition Scenario: Drug Response
      • This document outlines federated ICAM use cases and information sharing benefits for large-scale drug overdose epidemic (e.g., opioid, methamphetamine, and cocaine) prevention and response.

Further Reading

  • Trump’s Labor Chief Accused of Intervening in Oracle Pay Bias Case” By Noam Scheiber, David McCabe and Maggie Haberman – The New York Times. In the sort of conduct that is apparently the norm across the Trump Administration, there are allegations that the Secretary of Labor intervened in departmental litigation to help a large technology firm aligned with President Donald Trump. Starting in the Obama Administration and continuing into the Trump Administration, software and database giant Oracle was investigated, accused, and sued for paying non-white, non-male employees significantly less in violation of federal and state law. Estimates of Oracle’s liability ranged between $300-800 million, and litigators in the Department of Labor were seeking $400 million and had taken the case to trial. Secretary Eugene Scalia purportedly stepped in and lowered the demand to $40 million, and the head litigator was offered a transfer from Los Angeles to Chicago in a division in which she has no experience. Oracle CEO Safra Catz and Chair Larry Ellison have both supported the President more enthusiastically, and earlier, than the heads of other technology companies.
  • Pentagon wins brief waiver from government’s Huawei ban” By Joe Gould – Defense News. A Washington, D.C. trade publication is reporting that the Trump Administration is using flexibility granted by Congress to delay the ban on contractors using Huawei, ZTE, and other People’s Republic of China (PRC) technology for the Department of Defense. Director of National Intelligence John Ratcliffe granted the waiver at the request of Under Secretary of Defense for Acquisition and Sustainment Ellen Lord, writing:
    • You stated that DOD’s statutory requirement to provide for the military forces needed to deter war and protect the security of our country is critically important to national security. Therefore, the procurement of goods and services in support of DOD’s statutory mission is also in the national security interests of the United States.
    • Section 889 of the “John S. McCain National Defense Authorization Act (NDAA) for FY 2019” (P.L. 115-232) requires agencies to remove such equipment and systems and also not to contract with private sector entities that use such equipment and services. It is the second part of the ban from which the DOD and its contractors are receiving a reprieve; an interim rule putting such a ban in place was issued only last month.
  • DOD’s IT supply chain has dozens of suppliers from China, report finds” By Jackson Barnett – fedscoop. A data analytics firm, Govini, analyzed a sample of prime contracts at the Department of Defense (DOD) and found a surge in the presence of firms from the People’s Republic of China (PRC) in the supply chains in the software and information technology (IT) sectors. This study has obvious relevance to the previous article on banning PRC equipment and services in DOD supply chains.
  • Facebook algorithm found to ‘actively promote’ Holocaust denial” By Mark Townsend – The Guardian. A British counter-hate organization, the Institute for Strategic Dialogue (ISD), found that Facebook’s algorithms lead people searching for the Holocaust to denial sites and posts. The organization found the same problem on Reddit, Twitter, and YouTube. ISD claimed:
    • Our findings show that the actions taken by platforms can effectively reduce the volume and visibility of this type of antisemitic content. These companies therefore need to ask themselves what type of platform they would like to be: one that earns money by allowing Holocaust denial to flourish, or one that takes a principled stand against it.


Image by Foundry Co from Pixabay