Further Reading, Other Development, and Coming Events (4 January 2021)

Further Reading

  • “Microsoft Says Russian Hackers Viewed Some of Its Source Code” By Nicole Perlroth — The New York Times. The Sluzhba vneshney razvedki Rossiyskoy Federatsii’s (SVR) hack keeps growing and growing with Microsoft admitting its source code was viewed through an employee account. It may be that authorized Microsoft resellers were one of the vectors by which the SVR accessed SolarWinds, FireEye, and ultimately a number of United States (U.S.) government agencies. Expect more revelations to come about the scope and breadth of entities and systems the SVR compromised.
  • “In 2020, we reached peak Internet. Here’s what worked — and what flopped.” By Geoffrey Fowler — The Washington Post. The newspaper’s tech columnist reviews the technology used during the pandemic and what is likely to stay with us when life returns to some semblance of normal.
  • “Facebook Says It’s Standing Up Against Apple For Small Businesses. Some Of Its Employees Don’t Believe It.” By Craig Silverman and Ryan Mac — BuzzFeed News. Again, two of the best-sourced journalists when it comes to Facebook have exposed employee dissent within the social media and advertising giant, this time over the company’s advertising blitz positioning it as the champion of small businesses that allegedly stand to be hurt when Apple rolls out iOS 14 changes that will allow users to block the type of cross-app and cross-site tracking Facebook thrives on. The company’s PR campaign stands in contrast to anecdotal stories about errors that harmed and impeded small companies using Facebook to advertise and sell products and services to customers.
  • “SolarWinds hack spotlights a thorny legal problem: Who to blame for espionage?” By Tim Starks — CyberScoop. This piece previews the possible and likely inevitable litigation to follow from the SolarWinds hack, including possible securities actions on the basis of suspiciously timed stock sales by executives, breach of contract claims, and negligence claims for failing to patch and address vulnerabilities in a timely fashion. Federal and state regulators will probably get on the field, too. But this will likely take years to play out; Home Depot, for example, did not settle claims with state attorneys general arising from its 2014 breach until November 2020.
  • “The Tech Policies the Trump Administration Leaves Behind” By Aaron Boyd — Nextgov. A look back at the good, the bad, and the ugly of the Trump Administration’s technology policies, some of which will live on in the Biden Administration.

Other Developments

  • In response to the SolarWinds hack, the Federal Bureau of Investigation (FBI), the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), and the Office of the Director of National Intelligence (ODNI) issued a joint statement indicating that the process established pursuant to Presidential Policy Directive (PPD) 41, an Obama Administration policy, has been activated and a Cyber Unified Coordination Group (UCG) has been formed “to coordinate a whole-of-government response to this significant cyber incident.” The agencies explained “[t]he UCG is intended to unify the individual efforts of these agencies as they focus on their separate responsibilities.”
    • In PPD-41 it is explained that a UCG “shall serve as the primary method for coordinating between and among Federal agencies in response to a significant cyber incident as well as for integrating private sector partners into incident response efforts, as appropriate.” Moreover, “[t]he Cyber UCG is intended to result in unity of effort and not to alter agency authorities or leadership, oversight, or command responsibilities.”
  • Following the completion of its “in-depth” investigation, the European Commission (EC) cleared Google’s acquisition of Fitbit with certain conditions, removing a significant hurdle for the American multinational in buying the wearable fitness tracker company. In its press release, the EC explained that after its investigation, “the Commission had concerns that the transaction, as initially notified, would have harmed competition in several markets.” To address and allay concerns, Google bound itself for ten years to a set of commitments that can be unilaterally extended by the EC and will be enforced, in part, by the appointment of a trustee to oversee compliance.
    • The EC was particularly concerned about:
      • Advertising: By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to that of Fitbit. By increasing the already vast amount of data that Google could use for the personalisation of ads, it would be more difficult for rivals to match Google’s services in the markets for online search advertising, online display advertising, and the entire “ad tech” ecosystem. The transaction would therefore raise barriers to entry and expansion for Google’s competitors for these services to the detriment of advertisers, who would ultimately face higher prices and have less choice.
      • Access to Web Application Programming Interface (‘API’) in the market for digital healthcare: A number of players in this market currently access health and fitness data provided by Fitbit through a Web API, in order to provide services to Fitbit users and obtain their data in return. The Commission found that following the transaction, Google might restrict competitors’ access to the Fitbit Web API. Such a strategy would come especially at the detriment of start-ups in the nascent European digital healthcare space.
      • Wrist-worn wearable devices: The Commission is concerned that following the transaction, Google could put competing manufacturers of wrist-worn wearable devices at a disadvantage by degrading their interoperability with Android smartphones.
    • As noted, Google made a number of commitments to address competition concerns:
      • Ads Commitment:
        • Google will not use for Google Ads the health and wellness data collected from wrist-worn wearable devices and other Fitbit devices of users in the EEA, including search advertising, display advertising, and advertising intermediation products. This refers also to data collected via sensors (including GPS) as well as manually inserted data.
        • Google will maintain a technical separation of the relevant Fitbit user data. The data will be stored in a “data silo” which will be separate from any other Google data that is used for advertising.
        • Google will ensure that European Economic Area (‘EEA’) users will have an effective choice to grant or deny the use of health and wellness data stored in their Google Account or Fitbit Account by other Google services (such as Google Search, Google Maps, Google Assistant, and YouTube).
      • Web API Access Commitment:
        • Google will maintain access to users’ health and fitness data to software applications through the Fitbit Web API, without charging for access and subject to user consent (see the sketch following this list of commitments).
      • Android APIs Commitment:
        • Google will continue to license for free to Android original equipment manufacturers (OEMs) those public APIs covering all current core functionalities that wrist-worn devices need to interoperate with an Android smartphone. Such core functionalities include, but are not limited to, connecting via Bluetooth to an Android smartphone and accessing the smartphone’s camera or its GPS. To ensure that this commitment is future-proof, any improvements of those functionalities and relevant updates are also covered.
        • It is not possible for Google to circumvent the Android API commitment by duplicating the core interoperability APIs outside the Android Open Source Project (AOSP). This is because, according to the commitments, Google has to keep the functionalities afforded by the core interoperability APIs, including any improvements related to the functionalities, in open-source code in the future. Any improvements to the functionalities of these core interoperability APIs (including if ever they were made available to Fitbit via a private API) also need to be developed in AOSP and offered in open-source code to Fitbit’s competitors.
        • To ensure that wearable device OEMs also have access to future functionalities, Google will grant these OEMs access to all Android APIs that it will make available to Android smartphone app developers, including those APIs that are part of Google Mobile Services (GMS), a collection of proprietary Google apps that is not a part of the Android Open Source Project.
        • Google also will not circumvent the Android API commitment by degrading users’ experience with third-party wrist-worn devices through the display of warnings, error messages, or permission requests in a discriminatory way, or by imposing on wrist-worn device OEMs discriminatory conditions on the access of their companion apps to the Google Play Store.
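To make the Web API commitment above more concrete, below is a minimal sketch of how a third-party health or fitness service might continue to read a consenting user’s data through the Fitbit Web API. It is written in Python with the requests library; the endpoint path, the “-” current-user shorthand, and the response fields are assumptions drawn from Fitbit’s public developer documentation rather than from the EC decision or Google’s commitments.

```python
# Minimal sketch: a third-party service reading a consenting user's Fitbit data
# via the Fitbit Web API. Assumes the user has already authorized the service
# through Fitbit's OAuth 2.0 flow and ACCESS_TOKEN holds the resulting token.
# The endpoint path and response fields are assumptions based on Fitbit's
# public developer documentation, not part of the EC decision.
import requests

ACCESS_TOKEN = "user-specific-oauth2-access-token"  # granted with user consent
BASE_URL = "https://api.fitbit.com"

def get_daily_activity(date: str) -> dict:
    """Fetch the authorizing user's activity summary for a given date (YYYY-MM-DD)."""
    url = f"{BASE_URL}/1/user/-/activities/date/{date}.json"
    response = requests.get(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # a 401/403 here would indicate revoked consent
    return response.json()

if __name__ == "__main__":
    summary = get_daily_activity("2021-01-04")
    print(summary.get("summary", {}).get("steps"))
```

If the commitments operate as the EC describes, calls of this kind remain available to competing services free of charge for ten years, so long as the user consents.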
  • The United States (U.S.) Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) has proposed a major rewrite of the regulations governing medical privacy in the U.S. As the U.S. lacks a unified privacy regime, the proposed changes would affect only those entities in the medical sector subject to the regime, which is admittedly many such entities. Nevertheless, it is almost certain the Biden Administration will pause this rulemaking and quite possibly withdraw it should it run crosswise with the new White House’s policy goals.
    • HHS issued a notice of proposed rulemaking “to modify the Standards for the Privacy of Individually Identifiable Health Information (Privacy Rule) under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act).”
      • HHS continued:
        • The Privacy Rule is one of several rules, collectively known as the HIPAA Rules, that protect the privacy and security of individuals’ medical records and other protected health information (PHI), i.e., individually identifiable health information maintained or transmitted by or on behalf of HIPAA covered entities (i.e., health care providers who conduct covered health care transactions electronically, health plans, and health care clearinghouses).
        • The proposals in this NPRM support the Department’s Regulatory Sprint to Coordinated Care (Regulatory Sprint), described in detail below. Specifically, the proposals in this NPRM would amend provisions of the Privacy Rule that could present barriers to coordinated care and case management, or impose other regulatory burdens without sufficiently compensating for, or offsetting, such burdens through privacy protections. These regulatory barriers may impede the transformation of the health care system from a system that pays for procedures and services to a system of value-based health care that pays for quality care.
    • In a press release, OCR asserted:
      • The proposed changes to the HIPAA Privacy Rule include strengthening individuals’ rights to access their own health information, including electronic information; improving information sharing for care coordination and case management for individuals; facilitating greater family and caregiver involvement in the care of individuals experiencing emergencies or health crises; enhancing flexibilities for disclosures in emergency or threatening circumstances, such as the Opioid and COVID-19 public health emergencies; and reducing administrative burdens on HIPAA covered health care providers and health plans, while continuing to protect individuals’ health information privacy interests.
  • The Federal Trade Commission (FTC) has used its power to compel regulated entities to provide information, asking that “nine social media and video streaming companies…provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.” The FTC is using its Section 6(b) authority to compel the information from Amazon.com, Inc.; ByteDance Ltd., which operates the short video service TikTok; Discord Inc.; Facebook, Inc.; Reddit, Inc.; Snap Inc.; Twitter, Inc.; WhatsApp Inc.; and YouTube LLC. Failure to respond can result in the FTC fining a non-compliant entity.
    • The FTC claimed in its press release it “is seeking information specifically related to:
      • how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
      • how they determine which ads and other content are shown to consumers;
      • whether they apply algorithms or data analytics to personal information;
      • how they measure, promote, and research user engagement; and
      • how their practices affect children and teens.
    • The FTC explained in its sample order:
      • The Commission is seeking information concerning the privacy policies, procedures, and practices of Social Media and Video Streaming Service providers, including the method and manner in which they collect, use, store, and disclose Personal Information about consumers and their devices. The Special Report will assist the Commission in conducting a study of such policies, practices, and procedures.
  • The United States (U.S.) Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) supplemented its Emergency Directive 21-01 to federal civilian agencies in response to the Sluzhba vneshney razvedki Rossiyskoy Federatsii’s (SVR) hack via SolarWinds. In an 18 December update, CISA explained:
    • This section provides additional guidance on the implementation of CISA Emergency Directive (ED) 21-01, to include an update on affected versions, guidance for agencies using third-party service providers, and additional clarity on required actions.
    •  In a 30 December update, CISA stated:
      • Specifically, all federal agencies operating versions of the SolarWinds Orion platform other than those identified as “affected versions” below are required to use at least SolarWinds Orion Platform version 2020.2.1HF2. The National Security Agency (NSA) has examined this version and verified that it eliminates the previously identified malicious code. Given the number and nature of disclosed and undisclosed vulnerabilities in SolarWinds Orion, all instances that remain connected to federal networks must be updated to 2020.2.1 HF2 by COB December 31, 2020. CISA will follow up with additional supplemental guidance, to include further clarifications and hardening requirements.
  • Australia’s Attorney-General’s Department published an unclassified version of the four volumes of the “Report of the Comprehensive Review of the Legal Framework of the National Intelligence Community,” an “examination of the legislative framework underpinning the National Intelligence Community (NIC)…the first and largest since the Hope Royal Commissions considered the Australian Intelligence Community (AIC) in the 1970s and 1980s.” Ultimately, the authors of the report concluded:
    • We do not consider the introduction of a common legislative framework, in the form of a single Act governing all or some NIC agencies, to be a practical, pragmatic or proportionate reform. It would be unlikely that the intended benefits of streamlining and simplifying NIC legislation could be achieved due to the diversity of NIC agency functions—from intelligence to law enforcement, regulatory and policy—and the need to maintain differences in powers, immunities and authorising frameworks. The Review estimates that reform of this scale would cost over $200 million and take up to 10 years to complete. This would be an impractical and disproportionate undertaking for no substantial gain. In our view, the significant costs and risks of moving to a single, consolidated Act clearly outweigh the limited potential benefits.
    • While not recommending a common legislative framework for the entire NIC, some areas of NIC legislation would benefit from simplification and modernisation. We recommend the repeal of the Telecommunications (Interception and Access) Act 1979 (TIA Act), the Surveillance Devices Act 2004 (SD Act) and parts of the Australian Security Intelligence Organisation Act 1979 (ASIO Act), and their replacement with a single new Act governing the use of electronic surveillance powers—telecommunications interception, covert access to stored communications, computers and telecommunications data, and the use of optical, listening and tracking devices—under Commonwealth law.
  • The National Institute of Standards and Technology (NIST) released additional materials to supplement a major rewrite of a foundational security guidance document. NIST explained “[n]ew supplemental materials for NIST Special Publication (SP) 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations, are available for download to support the December 10, 2020 errata release of SP 800-53 and SP 800-53B, Control Baselines for Information Systems and Organizations.” These supplemental materials include:
    • A comparison of the NIST SP 800-53 Revision 5 controls and control enhancements to Revision 4. The spreadsheet describes the changes to each control and control enhancement, provides a brief summary of the changes, and includes an assessment of the significance of the changes.  Note that this comparison was authored by The MITRE Corporation for the Director of National Intelligence (DNI) and is being shared with permission by DNI.
    • Mapping of the Appendix J Privacy Controls (Revision 4) to Revision 5. The spreadsheet supports organizations using the privacy controls in Appendix J of SP 800-53 Revision 4 that are transitioning to the integrated control catalog in Revision 5.
    • Mappings between NIST SP 800-53 and other frameworks and standards. The mappings provide organizations with a general indication of SP 800-53 control coverage with respect to other frameworks and standards. When leveraging the mappings, it is important to consider the intended scope of each publication and how each publication is used; organizations should not assume equivalency based solely on the mapping tables because mappings are not always one-to-one and there is a degree of subjectivity in the mapping analysis (see the sketch below).
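As a small illustration of the caution above, the sketch below reads a hypothetical control-mapping spreadsheet exported to CSV and flags SP 800-53 controls that map to more than one control in another framework, which is one reason mappings cannot be treated as statements of equivalence. The file name and column headers are invented for illustration; NIST’s actual spreadsheets use their own layout.

```python
# Minimal sketch: flag SP 800-53 controls that map to more than one control in
# another framework, illustrating why a mapping is not an equivalence.
# "mapping.csv", "sp800_53_control", and "other_framework_control" are
# hypothetical names; adjust them to the layout of the spreadsheet in use.
import csv
from collections import defaultdict

def load_mapping(path: str) -> dict:
    """Return {SP 800-53 control -> set of mapped controls in the other framework}."""
    mapping = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            mapping[row["sp800_53_control"]].add(row["other_framework_control"])
    return mapping

if __name__ == "__main__":
    mapping = load_mapping("mapping.csv")
    for control, targets in sorted(mapping.items()):
        if len(targets) > 1:  # one-to-many: coverage may be partial or overlapping
            print(f"{control} maps to {len(targets)} controls: {sorted(targets)}")
```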
  • Via a final rule, the Department of Defense (DOD) codified “the National Industrial Security Program Operating Manual (NISPOM) in regulation…[that] establishes requirements for the protection of classified information disclosed to or developed by contractors, licensees, grantees, or certificate holders (hereinafter referred to as contractors) to prevent unauthorized disclosure.” The DOD stated “[i]n addition to adding the NISPOM to the Code of Federal Regulations (CFR), this rule incorporates the requirements of Security Executive Agent Directive (SEAD) 3, “Reporting Requirements for Personnel with Access to Classified Information or Who Hold a Sensitive Position.” The DOD stated “SEAD 3 requires reporting by all contractor cleared personnel who have been granted eligibility for access to classified information.”
    • The DOD added “[t]his NISPOM rule provides for a single nation-wide implementation plan which will, with this rule, include SEAD 3 reporting by all contractor cleared personnel to report specific activities that may adversely impact their continued national security eligibility, such as reporting of foreign travel and foreign contacts.”
    • The DOD explained “NISP Cognizant Security Agencies (CSAs) shall conduct an analysis of such reported activities to determine whether they pose a potential threat to national security and take appropriate action.”
    • The DOD added that “the rule also implements the provisions of Section 842 of Public Law 115-232, which removes the requirement for a covered National Technology and Industrial Base (NTIB) entity operating under a special security agreement pursuant to the NISP to obtain a national interest determination as a condition for access to proscribed information.”
  • An advisory committee housed at the United States (U.S.) Department of Homeland Security (DHS) is calling for the White House to quickly “operationalize intelligence in a classified space with senior executives and cyber experts from most critical entities in the energy, financial services, and communications sectors working directly with intelligence analysts and other government staff.” In their report, the President’s National Infrastructure Advisory Council (NIAC) proposed the creation of a Critical Infrastructure Command Center (CICC) to “provid[e] real-time collaboration between government and industry…[and] take direct action and provide tactical solutions to mitigate, remediate,  and deter threats.” NIAC urged the President to “direct relevant federal agencies to support the private sector in executing the concept, including identifying the required government staff…[and] work with Congress to ensure the appropriate authorities are established to allow the CICC to fully realize its operational functionality.” NIAC recommended “near-term actions to implement the CICC concept:
    • 1. The President should direct the relevant federal agencies to support the private sector in rapidly standing up the CICC concept with the energy, financial services, and communications sectors:
      • a. Within 90 days the private sector will identify the executives who will lead execution of the CICC concept and establish governing criteria (including membership, staffing and rotation, and other logistics).
      • b. Within 120 days the CICC sector executives will identify and assign the necessary CICC staff from the private sector.
      • c. Within 90 days an appropriate venue to house the operational component will be identified and the necessary agreements put in place.
    • 2. The President should direct the Intelligence Community and other relevant government agencies to identify and co-locate the required government staff counterparts to enable the direct coordination required by the CICC. This staff should be pulled from the Intelligence Community (IC), Sector-Specific Agencies (SSAs), and law enforcement.
    • 3. The President, working with Congress, should establish the appropriate authorities and mission for federal agencies to directly share intelligence with critical infrastructure companies, along with any other authorities required for the CICC concept to be fully successful (identified in Appendix A).
    • 4. Once the CICC concept is fully operational (within 180 days), the responsible executives should deliver a report to the NSC and the NIAC demonstrating how the distinct capabilities of the CICC have been achieved and the impact of the capabilities to date. The report should identify remaining gaps in resources, direction, or authorities.

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by opsa from Pixabay

Further Reading, Other Developments, and Coming Events (14 December)

Further Reading

  • “Russian Hackers Broke Into Federal Agencies, U.S. Officials Suspect” By David Sanger — The New York Times; “Russian government hackers are behind a broad espionage campaign that has compromised U.S. agencies, including Treasury and Commerce” By Ellen Nakashima and Craig Timberg — The Washington Post; “Suspected Russian hackers spied on U.S. Treasury emails – sources” By Chris Bing — Reuters. Apparently, Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, has exploited a vulnerability in SolarWinds’ update system used by many United States (U.S.) government agencies, Fortune 500 companies, and the ten largest U.S. telecommunications companies. Reportedly, APT29 (aka Cozy Bear) has had free rein in the email systems of the Departments of the Treasury and Commerce, among other possible victims. The hackers may have also accessed a range of other entities around the world using the same SolarWinds software. Moreover, these penetrations may be related to the recently announced theft of hacking tools that a private firm, FireEye, used to test clients’ systems.
  • “Hackers steal Pfizer/BioNTech COVID-19 vaccine data in Europe, companies say” By Jack Stubbs — Reuters. The European Union’s (EU) agency that oversees and approves medications has been hacked, and documents related to one of the new COVID-19 vaccines may have been stolen. The European Medicines Agency (EMA) was apparently penetrated, and materials related to Pfizer and BioNTech’s vaccine were exfiltrated. The scope of the theft is not yet known, but this is the latest in many attempts to hack the entities conducting research on the virus and potential vaccines.
  • “The AI Girlfriend Seducing China’s Lonely Men” By Zhang Wanqing — Sixth Tone. A chatbot powered by artificial intelligence that some men in the People’s Republic of China (PRC) are using extensively raises all sorts of ethical and privacy issues. Lonely people have turned to this AI technology and have confided their deepest feelings, which are stored by the company. It seems like only a matter of time until these data are mined for commercial value or hacked. Also, the chatbot has run afoul of the PRC’s censorship policies. Finally, is this a preview of the world to come, much like the 2013 film, Her, in which humans have relationships with AI beings?
  • “YouTube will now remove videos disputing Joe Biden’s election victory” By Makena Kelly — The Verge. The Google subsidiary announced that because the safe harbor deadline has been reached and a sufficient number of states have certified President-elect Joe Biden’s victory, the platform will begin taking down misleading election videos. This change in policy may have come about, in part, because of pressure from Democrats in Congress about what they see as Google’s lackluster efforts to find and remove lies, misinformation, and disinformation about the 2020 election.
  • “Lots of people are gunning for Google. Meet the man who might have the best shot.” By Emily Birnbaum — Protocol. Colorado Attorney General Phil Weiser may be uniquely qualified to lead state attorneys general in a second antitrust and anti-competition action against Google, given his background as a law professor steeped in antitrust and his service in the Department of Justice and White House during the Obama Administration.

Other Developments

  • Cybersecurity firm FireEye revealed it was “attacked by a highly sophisticated threat actor, one whose discipline, operational security, and techniques lead us to believe it was a state-sponsored attack,” according to CEO Kevin Mandia. This hacking may be related to the vast penetration of United States (U.S.) government systems revealed over the weekend. Mandia stated FireEye has “found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security…[that] mimic the behavior of many cyber threat actors and enable FireEye to provide essential diagnostic security services to our customers.” Mandia claimed none of these tools were zero-day exploits. FireEye is “proactively releasing methods and means to detect the use of our stolen Red Team tools…[and] out of an abundance of caution, we have developed more than 300 countermeasures for our customers, and the community at large, to use in order to minimize the potential impact of the theft of these tools.”
    • Mandia added:
      • Consistent with a nation-state cyber-espionage effort, the attacker primarily sought information related to certain government customers. While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems. If we discover that customer information was taken, we will contact them directly.
      • Based on my 25 years in cyber security and responding to incidents, I’ve concluded we are witnessing an attack by a nation with top-tier offensive capabilities. This attack is different from the tens of thousands of incidents we have responded to throughout the years. The attackers tailored their world-class capabilities specifically to target and attack FireEye. They are highly trained in operational security and executed with discipline and focus. They operated clandestinely, using methods that counter security tools and forensic examination. They used a novel combination of techniques not witnessed by us or our partners in the past.
      • We are actively investigating in coordination with the Federal Bureau of Investigation and other key partners, including Microsoft. Their initial analysis supports our conclusion that this was the work of a highly sophisticated state-sponsored attacker utilizing novel techniques.    
  • The United States’ (U.S.) Department of Justice (DOJ) filed suit against Facebook for “tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the permanent labor certification (PERM) process.” The DOJ is asking for an injunction to stop Facebook from engaging in the alleged conduct, civil penalties, and damages for workers harmed by this conduct.
    • The DOJ contended:
      • The department’s lawsuit alleges that beginning no later than Jan. 1, 2018 and lasting until at least Sept. 18, 2019, Facebook employed tactics that discriminated against U.S. workers and routinely preferred temporary visa holders (including H-1B visa holders) for jobs in connection with the PERM process. Rather than conducting a genuine search for qualified and available U.S. workers for permanent positions sought by these temporary visa holders, Facebook reserved the positions for temporary visa holders because of their immigration status, according to the complaint. The complaint also alleges that Facebook sought to channel jobs to temporary visa holders at the expense of U.S. workers by failing to advertise those vacancies on its careers website, requiring applicants to apply by physical mail only, and refusing to consider any U.S. workers who applied for those positions. In contrast, Facebook’s usual hiring process relies on recruitment methods designed to encourage applications by advertising positions on its careers website, accepting electronic applications, and not pre-selecting candidates to be hired based on a candidate’s immigration status, according to the lawsuit.
      • In its investigation, the department determined that Facebook’s ineffective recruitment methods dissuaded U.S. workers from applying to its PERM positions. The department concluded that, during the relevant period, Facebook received zero or one U.S. worker applicants for 99.7 percent of its PERM positions, while comparable positions at Facebook that were advertised on its careers website during a similar time period typically attracted 100 or more applicants each. These U.S. workers were denied an opportunity to be considered for the jobs Facebook sought to channel to temporary visa holders, according to the lawsuit. 
      • Not only do Facebook’s alleged practices discriminate against U.S. workers, they have adverse consequences on temporary visa holders by creating an employment relationship that is not on equal terms. An employer that engages in the practices alleged in the lawsuit against Facebook can expect more temporary visa holders to apply for positions and increased retention post-hire. Such temporary visa holders often have limited job mobility and thus are likely to remain with their company until they can adjust status, which for some can be decades.
      • The United States’ complaint seeks civil penalties, back pay on behalf of U.S. workers denied employment at Facebook due to the alleged discrimination in favor of temporary visa holders, and other relief to ensure Facebook stops the alleged violations in the future. According to the lawsuit, and based on the department’s nearly two-year investigation, Facebook’s discrimination against U.S. workers was intentional, widespread, and in violation of a provision of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b(a)(1), that the Department of Justice’s Civil Rights Division enforces. 
  • A trio of consumer protection regulators took the lead in reaching an agreement with Apple to add “a new section to each app’s product page in its App Store, containing key information about the data the app collects and an accessible summary of the most important information from the privacy policy.” The United Kingdom’s Competition and Markets Authority (CMA), the Netherlands Authority for Consumers and Markets, and the Norwegian Consumer Authority led the effort, which is part of “ongoing work from the International Consumer Protection and Enforcement Network (ICPEN), involving 27 of its consumer authority members across the world.” The three agencies explained:
    • Consumer protection authorities, including the CMA, became concerned that people were not being given clear information on how their personal data would be used before choosing an app, including on whether the app developer would share their personal data with a third party. Without this information, consumers are unable to compare and choose apps based on how they use personal data.
  • Australia’s Council of Financial Regulators (CFR) has released a Cyber Operational Resilience Intelligence-led Exercises (CORIE) framework “to test and demonstrate the cyber maturity and resilience of institutions within the Australian financial services industry.”

Coming Events

  • On 15 December, the Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “The Role of Private Agreements and Existing Technology in Curbing Online Piracy” with these witnesses:
    • Panel I
      • Ms. Ruth Vitale, Chief Executive Officer, CreativeFuture
      • Mr. Probir Mehta, Head of Global Intellectual Property and Trade Policy, Facebook, Inc.
      • Mr. Mitch Glazier, Chairman and CEO, Recording Industry Association of America
      • Mr. Joshua Lamel, Executive Director, Re:Create
    • Panel II
      • Ms. Katherine Oyama, Global Director of Business Public Policy, YouTube
      • Mr. Keith Kupferschmid, Chief Executive Officer, Copyright Alliance
      • Mr. Noah Becker, President and Co-Founder, AdRev
      • Mr. Dean S. Marks, Executive Director and Legal Counsel, Coalition for Online Accountability
  • The Senate Armed Services Committee’s Cybersecurity Subcommittee will hold a closed briefing on Department of Defense Cyber Operations on 15 December with these witnesses:
    • Mr. Thomas C. Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy, Office of the Under Secretary of Defense for Policy
    • Mr. Jeffrey R. Jones, Vice Director, Command, Control, Communications and Computers/Cyber, Joint Staff, J-6
    • Ms. Katherine E. Arrington, Chief Information Security Officer for the Assistant Secretary of Defense for Acquisition, Office of the Under Secretary of Defense for Acquisition and Sustainment
    • Rear Admiral Jeffrey Czerewko, United States Navy, Deputy Director, Global Operations, J39, J3, Joint Staff
  • The Senate Banking, Housing, and Urban Affairs Committee’s Economic Policy Subcommittee will conduct a hearing titled “US-China: Winning the Economic Competition, Part II” on 16 December with these witnesses:
    • The Honorable Will Hurd, Member, United States House of Representatives;
    • Derek Scissors, Resident Scholar, American Enterprise Institute;
    • Melanie M. Hart, Ph.D., Senior Fellow and Director for China Policy, Center for American Progress; and
    • Roy Houseman, Legislative Director, United Steelworkers (USW).
  • On 17 December the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force will convene for a virtual event, “Partnership in Action: Driving Supply Chain Security.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by stein egil liland from Pexels

Further Reading, Other Developments, and Coming Events (3 November)

Further Reading

  • “How Facebook and Twitter plan to handle election day disinformation” By Sam Dean — The Los Angeles Times; “How Big Tech is planning for election night” By Heather Kelly — The Washington Post; and “What to Expect From Facebook, Twitter and YouTube on Election Day” By Mike Isaac, Kate Conger and Daisuke Wakabayashi — The New York Times. Social media platforms have prepared for today and will use a variety of measures to combat lies, disinformation, and misinformation from sources foreign and domestic. Incidentally, my read is that, as measured by resource allocation, these tech companies are more worried about problematic domestic content.
  • “QAnon received earlier boost from Russian accounts on Twitter, archives show” By Joseph Menn — Reuters. Researchers have delved into Twitter’s archive of taken down or suspended Russian accounts and are now claiming that the Russian Federation was pushing and amplifying the QAnon conspiracy in the United States (U.S.) as early as November 2017. This revised view of Russian involvement differs from conclusions reached in August that Russia played a small but significant role in the proliferation of the conspiracy.
  • “Facial recognition used to identify Lafayette Square protester accused of assault” By Justin Jouvenal and Spencer S. Hsu — The Washington Post. When police were firing pepper balls and rolling gas canisters toward protestors in Lafayette Park in Washington, D.C. on 1 June 2020, a protestor was filmed assaulting two officers. A little-known, not publicly disclosed facial recognition platform available to many federal, state, and local law enforcement agencies in the capital area produced a match with the footage. Now, a man stands accused of crimes during a Black Lives Matter march, and civil liberties and privacy advocates are objecting to the use of the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) on a number of grounds, including the demonstrated weakness of these systems in accurately identifying people of color, the fact that it has been used but not disclosed to a number of defendants, and the potential chilling effect it will have on people attending protests. Law enforcement officials claim there are strict privacy and process safeguards and that an identification alone cannot be used as the basis of an arrest.
  • “CISA’s political independence from Trump will be an Election Day asset” By Joseph Marks — The Washington Post. The United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has established its bona fides with nearly all stakeholders inside and outside the U.S. government since its creation in 2018. Many credit its Director, Christopher Krebs, and many also praise the agency for conveying information and tips about cyber and other threats without incurring the ire of a White House and President antithetical to any hint of Russian hacking or interference in U.S. elections. However, today and the next few days may prove the agency’s mettle as it will be in uncharted territory, trying to tamp down fear about unfounded rumors and looking to discredit misinformation and disinformation in close to real time.
  • “Trump allies, largely unconstrained by Facebook’s rules against repeated falsehoods, cement pre-election dominance” By Isaac Stanley-Becker and Elizabeth Dwoskin — The Washington Post. Yet another article showing that if Facebook has a bias, it is against liberal viewpoints and content, for conservative material is allowed even if it breaks the rules of the platform. Current and former employees and an analysis support this finding. The Trump family has been among those who have benefitted from the kid gloves used by the company regarding posts that would likely have resulted in consequences for other users. However, smaller conservative outlets and less prominent conservative figures have faced consequences for content that violates Facebook’s standards.

Other Developments

  • The European Union’s (EU) Executive Vice-President Margrethe Vestager made a speech titled “Building trust in technology,” in which she previewed one long-awaited draft EU law on technology and another to address antitrust and anti-competitive practices of large technology companies. Vestager stated “in just a few weeks, we plan to publish two draft laws that will help to create a more trustworthy digital world.” Both drafts are expected on 2 December and represent key pieces of the new EU leadership’s Digital Strategy, the bloc’s initiative to update EU laws to account for changes in technology since the beginning of the century. The Digital Services Act will address and reform the legal treatment of both online commerce and online content. The draft Digital Markets Act would give the European Commission (EC) more tools to combat the same competition and market dominance issues the United States (U.S.), Australia, and other nations are starting to tackle vis-à-vis companies like Apple, Amazon, Facebook, and Google.
    • Regarding the Digital Services Act, Vestager asserted:
      • One part of those rules will be the Digital Services Act, which will update the E-Commerce Directive, and require digital services to take more responsibility for dealing with illegal content and dangerous products. 
      • Those services will have to put clear and simple procedures in place, to deal with notifications about illegal material on their platforms. They’ll have to make it harder for dodgy traders to use the platform, by checking sellers’ IDs before letting them on the platform. And they’ll also have to provide simple ways for users to complain, if they think their material should not have been removed – protecting the right of legitimate companies to do business, and the right of individuals to freedom of expression. 
      • Those new responsibilities will help to keep Europeans just as safe online as they are in the physical world. They’ll protect legitimate businesses, which follow the rules, from being undercut by others who sell cheap, dangerous products. And by applying the same standards, all over Europe, they’ll make sure every European can rely on the same protection – and that digital businesses of all sizes can easily operate throughout Europe, without having to meet the costs of complying with different rules in different EU countries. 
      • The new rules will also require digital services – especially the biggest platforms – to be open about the way they shape the digital world that we see. They’ll have to report on what they’ve done to take down illegal material. They’ll have to tell us how they decide what information and products to recommend to us, and which ones to hide – and give us the ability to influence those decisions, instead of simply having them made for us. And they’ll have to tell us who’s paying for the ads that we see, and why we’ve been targeted by a certain ad.  
      • But to really give people trust in the digital world, having the right rules in place isn’t enough. People also need to know that those rules really work – that even the biggest companies will actually do what they’re supposed to. And to make sure that happens, there’s no substitute for effective enforcement.  
      • And effective enforcement is a vital part of the draft laws that we’ll propose in December. For instance, the Digital Services Act will improve the way national authorities cooperate, to make sure the rules are properly enforced, throughout the EU. Our proposal won’t change the fundamental principle, that digital services should be regulated by their home country. But it will set up a permanent system of cooperation that will help those regulators work more effectively, to protect consumers all across Europe. And it will give the EU power to step in, when we need to, to enforce the rules against some very large platforms.
    • Vestager also discussed the competition law the EC hopes to enact:
      • So, to keep our markets fair and open to competition, it’s vital that we have the right toolkit in place. And that’s what the second set of rules we’re proposing – what we call the Digital Markets Act – is for. 
      • That proposal will have two pillars. The first of those pillars will be a clear list of dos and don’ts for big digital gatekeepers, based on our experience with the sorts of behaviour that can stop markets working well. 
      • For instance, the decisions that gatekeepers take, about how to rank different companies in search results, can make or break businesses in dozens of markets that depend on the platform. And if platforms also compete in those markets themselves, they can use their position as player and referee to help their own services succeed, at the expense of their rivals. For instance, gatekeepers might manipulate the way that they rank different businesses, to show their own services more visibly than their rivals’. So, the proposal that we’ll put forward in a few weeks’ time will aim to ban this particular type of unfair self-preferencing. 
      • We also know that these companies can collect a lot of data about companies that rely on their platform – data which they can then use, to compete against those very same companies in other markets. That can seriously damage fairness in these markets – which is why our proposal aims to ban big gatekeepers from misusing their business users’ data in that way. 
      • These clear dos and don’ts will allow us to act much faster and more effectively, to tackle behaviour that we know can stop markets working well. But we also need to be ready for new situations, where digitisation creates deep, structural failures in the way our markets work.  
      • Once a digital company gets to a certain size, with the big network of users and the huge collections of data that brings, it can be very hard for anyone else to compete – even if they develop a much better service. So, we face a constant risk that big companies will succeed in pushing markets to a tipping point, sending them on a rapid, unstoppable slide towards monopoly – and creating yet another powerful gatekeeper. 
      • One way to deal with risks like this would be to stop those emerging gatekeepers from locking users into their platform. That could mean, for instance, that those gatekeepers would have to make it easier for users to switch platform, or to use more than one service. That would keep the market open for competition, by making it easier for innovative rivals to compete. But right now, we don’t have the power to take this sort of action when we see these risks arising. 
      • It can also be difficult to deal with large companies which use the power of their platforms again and again, to take over one related market after another. We can deal with that issue with a series of cases – but the risk is that we’ll always find ourselves playing catch-up, while platforms move from market to market, using the same strategies to drive out their rivals. 
      • The risk, though, is that we’ll have a fragmented system, with different rules in different EU countries. That would make it hard to tackle huge platforms that operate throughout Europe, and to deal with other problems that you find in digital markets in many EU countries. And it would mean that businesses and consumers across Europe can’t all rely on the same protection. 
      • That’s why the second pillar of the Digital Markets Act would put a harmonised market investigation framework in place across the single market, giving us the power to tackle market failures like this in digital markets, and stop new ones from emerging. That would give us a harmonised set of rules that would allow us to investigate certain structural problems in digital markets. And if necessary, we could take action to make these markets contestable and competitive.
  • California Governor Gavin Newsom (D) vetoed one of the bills sent to him to amend the “California Consumer Privacy Act” (CCPA) (AB 375) last week. In mid-October, he signed two bills that amended the CCPA, but one will only take effect if the “California Privacy Rights Act” (CPRA) (Ballot Initiative 24) is not enacted by voters in the November election. Moreover, if the CPRA is enacted via ballot, then those two statutes would likely become dead law, as the CCPA and its amendments would become moot once the CPRA becomes effective in 2023.
    • Newsom vetoed AB 1138, which would have amended the recently effective “Parent’s Accountability and Child Protection Act” to bar those under the age of 13 from opening a social media account unless the platform obtained explicit consent from their parents. Moreover, “[t]he bill would deem a business to have actual knowledge of a consumer’s age if it willfully disregards the consumer’s age.” In his veto message, Newsom explained that while he agrees with the spirit of the legislation, it would create unnecessary confusion and overlap with federal law without any increase in protection for children. He signaled an openness to working with the legislature on this issue, however.
    • Newsom signed AB 1281, which extends the CCPA’s carveout for employment-related information from 1 January 2021 to 1 January 2022. The CCPA “exempts from its provisions certain information collected by a business about a natural person in the course of the natural person acting as a job applicant, employee, owner, director, officer, medical staff member, or contractor, as specified…[and also] exempts from specified provisions personal information reflecting a written or verbal communication or a transaction between the business and the consumer, if the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from that company, partnership, sole proprietorship, nonprofit, or government agency.” AB 1281 “shall become operative only” if the CPRA is not approved by voters.
    • Newsom also signed AB 713 that would:
      • except from the CCPA information that was deidentified in accordance with specified federal law, or was derived from medical information, protected health information, individually identifiable health information, or identifiable private information, consistent with specified federal policy, as provided.
      • except from the CCPA a business associate of a covered entity, as defined, that is governed by federal privacy, security, and data breach notification rules if the business associate maintains, uses, and discloses patient information in accordance with specified requirements.
      • except information that is collected for, used in, or disclosed in research, as defined.
      • additionally prohibit a business or other person from reidentifying information that was deidentified, unless a specified exception is met.
      • beginning January 1, 2021, require a contract for the sale or license of deidentified information to include specified provisions relating to the prohibition of reidentification, as provided.
  • The Government Accountability Office (GAO) published a report requested by the chair of a House committee on a Federal Communications Commission (FCC) program that subsidizes broadband providers serving high-cost areas, typically for people in rural or hard-to-reach areas. Even though the FCC provided nearly $5 billion in 2019 through the Universal Service Fund’s (USF) high-cost program, the agency lacks data to determine whether the goals of the program are being met. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) had asked for the assessment, which could well form the basis for future changes to how the FCC funds and sets conditions for the use of said funds for broadband.
    • The GAO noted:
      • According to stakeholders GAO interviewed, FCC faces three key challenges to accomplish its high-cost program performance goals: (1) accuracy of FCC’s broadband deployment data, (2) broadband availability on tribal lands, and (3) maintaining existing fixed-voice infrastructure and attaining universal mobile service. For example, although FCC adopted a more precise method of collecting and verifying broadband availability data, stakeholders expressed concern the revised data would remain inaccurate if carriers continue to overstate broadband coverage for marketing and competitive reasons. Overstating coverage impairs FCC’s efforts to promote universal voice and broadband since an area can become ineligible for high-cost support if a carrier reports that service already exists in that area. FCC has also taken actions to address the lack of broadband availability on tribal lands, such as making some spectrum available to tribes for wireless broadband in rural areas. However, tribal stakeholders told GAO that some tribes are unable to secure funding to deploy the infrastructure necessary to make use of spectrum for wireless broadband purposes.
    • The GAO concluded:
      • The effective use of USF resources is critical given the importance of ensuring that Americans have access to voice and broadband services. Overall, FCC’s high-cost program performance goals and measures are often not conducive to providing FCC with high-quality performance information. The performance goals lack measurable or quantifiable bases upon which numeric targets can be set. Further, while FCC’s performance measures met most of the key attributes of effective measures, the measures often lacked linkage, clarity, and objectivity. Without such clarity and specific desired outcomes, FCC lacks performance information that could help FCC make better-informed decisions about how to allocate the program’s resources to meet ongoing and emerging challenges to ensuring universal access to voice and broadband services. Furthermore, the absence of public reporting of this information leaves stakeholders, including Congress, uncertain about the program’s effectiveness to deploy these services.
    • The GAO recommended:
      • The Chairman of FCC should revise the high-cost performance goals so that they are measurable and quantifiable.
      • The Chairman of FCC should ensure high-cost performance measures align with key attributes of successful performance measures, including ensuring that measures clearly link with performance goals and have specified targets.
      • The Chairman of FCC should ensure the high-cost performance measure for the goal of minimizing the universal service contribution burden on consumers and businesses takes into account user-fee leading practices, such as equity and sustainability considerations.
      • The Chairman of FCC should publicly and periodically report on the progress it has made for its high-cost program’s performance goals, for example, by including relevant performance information in its Annual Broadband Deployment Report or the USF Monitoring Report.
  • An Irish court has ruled that the Data Protection Commission (DPC) must cover the legal fees of Maximillian Schrems for litigating and winning the case in which the Court of Justice of the European Union (CJEU) struck down the adequacy decision underpinning the European Union-United States Privacy Shield (aka Schrems II). Schrems’ legal costs in the litigation, which grew out of his complaint against Facebook, have been estimated at €2-5 million; even the low end of that range would represent roughly 10% of the DPC’s budget this year. In addition, the DPC has itself spent almost €3 million litigating the case.
  • House Transportation and Infrastructure Chair Peter DeFazio (D-OR) and Ranking Member Sam Graves (R-MO) asked that the Government Accountability Office (GAO) review the implications of the Federal Communications Commission’s (FCC) 2019 proposal to allow half of the Safety Band (i.e. 5.9 GHz spectrum) to be used for wireless communications even though the United States (U.S.) Department of Transportation weighed in against this decision. DeFazio and Graves are asking for a study of “the safety implications of sharing more than half of the 5.9 GHz spectrum band” that is currently “reserved exclusively for transportation safety purposes.” This portion of the spectrum was set aside in 1999 for connected vehicle technologies, according to DeFazio and Graves, and given the rise of automobile technology that requires spectrum, they think the FCC’s proposed proceeding “may significantly affect the efficacy of current and future applications of vehicle safety technologies.” DeFazio and Graves asked the GAO to review the following:
    • What is the current status of 5.9 GHz wireless transportation technologies, both domestically and internationally?
    • What are the views of industry and other stakeholders on the potential uses of these technologies, including scenarios where the 5.9 GHz spectrum is shared among different applications?
    • In a scenario in which the 5.9 GHz spectrum band is shared among different applications, what effect would this have on the efficacy, including safety, of current wireless transportation technology deployments?
    • What options exist for automakers and highway management agencies to ensure the safe deployment of connected vehicle technologies in a scenario in which the 5.9 GHz spectrum band is shared among different applications?
    • How, if at all, have DOT and FCC assessed current and future needs for 5.9 GHz spectrum in transportation applications? 
    • How, if at all, have DOT and FCC coordinated to develop a federal spectrum policy for connected vehicles?
  • The Harvard Law School’s Cyberlaw Clinic and the Electronic Frontier Foundation (EFF) have published “A Researcher’s Guide to Some Legal Risks of Security Research,” which is timely given the case before the Supreme Court of the United States that will determine how broadly the “Computer Fraud and Abuse Act” (CFAA) reaches. The Cyberlaw Clinic and EFF stated:
    • Just last month, over 75 prominent security researchers signed a letter urging the Supreme Court not to interpret the CFAA, the federal anti-hacking/computer crime statute, in a way that would criminalize swaths of valuable security research.
    • In the report, the Cyberlaw Clinic and EFF explained:
      • This guide overviews broad areas of potential legal risk related to security research, and the types of security research likely implicated. We hope it will serve as a useful starting point for concerned researchers and others. While the guide covers what we see as the main areas of legal risk for security researchers, it is not exhaustive.
    • In Van Buren v. United States, the Court will consider the question of “[w]hether a person who is authorized to access information on a computer for certain purposes violates Section 1030(a)(2) of the Computer Fraud and Abuse Act if he accesses the same information for an improper purpose.” In this case, the defendant was a police officer who took money as part of a sting operation to illegally use his access to Georgia’s database of license plates to obtain information about a person. The Eleventh Circuit Court of Appeals rejected his appeal and upheld his conviction under the CFAA, relying on a previous ruling in that circuit that “a defendant violates the CFAA not only when he obtains information that he has no ‘rightful[]’ authorization whatsoever to acquire, but also when he obtains information ‘for a nonbusiness purpose.’”
    • As the Cyberlaw Clinic and EFF noted in the report:
      • Currently, courts are divided about whether the statute’s prohibition on “exceed[ing] authorized access” applies to people who have authorization to access data (for some purpose), but then access it for a (different) purpose that violates contractual terms of service or a computer use policy. Some courts have found that the CFAA covers activities that do not circumvent any technical access barriers, from making fake profiles that violate Facebook’s terms of service to running a search on a government database without permission. Other courts, disagreeing with the preceding approach, have held that a verbal or contractual prohibition alone cannot render access punishable under the CFAA.
  • The United Kingdom’s Institution of Engineering and Technology (IET) and the National Cyber Security Centre (NCSC) have published “Code of Practice: Cyber Security and Safety” to help engineers develop technological and digital products with both safety and cybersecurity in mind. The IET and NCSC explained:
    • The aim of this Code is to help organizations accountable for safety-related systems manage cyber security vulnerabilities that could lead to hazards. It does this by setting out principles that when applied will improve the interaction between the disciplines of system safety and cyber security, which have historically been addressed as distinct activities.
    • The objectives for organizations that follow this Code are: (a) to manage the risk to safety from any insecurity of digital technology; (b) to be able to provide assurance that this risk is acceptable; and (c) where there is a genuine conflict between the controls used to achieve safety and security objectives, to ensure that these are properly resolved by the appropriate authority.
    • It should be noted that the focus of this Code is on the safety and security of digital technology, but it is recognized that addressing safety and cyber security risk is not just a technological issue: it involves people, process, physical and technological aspects. It should also be noted that whilst digital technology is central to the focus, other technologies can be used in pursuit of a cyber attack. For example, the study of analogue emissions (electromagnetic, audio, etc.) may give away information being digitally processed and thus analogue interfaces could be used to provide an attack surface.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Pexels from Pixabay

Further Reading and Other Developments (13 June)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The University of Toronto’s Citizen Lab alleged that an Indian information technology (IT) firm has been running a hack-for-hire operation possibly utilized by multinationals to target non-profits, journalists, and advocacy groups:
    • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.
    • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.
    • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.
  • The Massachusetts Institute of Technology (MIT) and the University of Michigan (UM) “released a report on the security of OmniBallot, an Internet voting and ballot delivery system produced by Democracy Live…[that] has been deployed in Delaware, West Virginia, and other jurisdictions.” The MIT and UM researchers stated: “The full technical report contains detailed recommendations for jurisdictions, but here’s what individual voters can do to help reduce risks to their security and privacy:
    • Your safest option is to avoid using OmniBallot. Either vote in person or request a mail-in absentee ballot, if you can. Mail-in ballots are a reasonably safe option, provided you check them for accuracy and adhere to all relevant deadlines.
    • If you can’t do that, your next-safest option is to use OmniBallot to download a blank ballot and print it, mark it by hand, and mail it back or drop it off. Always double-check that you’ve marked your ballot correctly, and confirm the mailing address with your local jurisdiction. 
    • If you are unable to mark your ballot by hand, OmniBallot can let you mark it on-screen. However, this option (as used in Delaware and West Virginia) will send your identity and secret ballot selections over the Internet to Democracy Live’s servers even if you return your ballot through the mail. This increases the risk that your choices may be exposed or manipulated, so we recommend that voters only use online marking as a last resort. If you do mark your ballot online, be sure to print it, carefully check that the printout is marked the way you intended, and physically return it.
    • If at all possible, do not return your ballot through OmniBallot’s website or by email or fax. These return modes cause your vote to be transmitted over the Internet, or via networks attached to the Internet, exposing the election to a critical risk that votes will be changed, at wide scale, without detection. Recent recommendations from DHS, the bipartisan findings of the Senate Intelligence Committee, and the consensus of the National Academies of Science, Engineering, and Medicine accord with our assessment that returning ballots online constitutes a severe security risk.
  • The “Justice in Policing Act of 2020” (H.R.7120/S.3912) was introduced this week in response to the protests and disparate policing practices towards African Americans primarily and would bar the use of facial recognition technology for body cameras, patrol car cameras, or other cameras authorized and regulated under the bill. The House Oversight and Reform Committee has held a series of hearings this Congress on facial recognition technology, with Members on both sides of the aisle saying they want legislation regulating the government’s use of it. As of yet, no such legislation has been introduced. Facial recognition technology language was also a major factor in privacy legislation dying last year in Washington state and was outright removed to avoid the same fate this year.
  • The Government Accountability Office (GAO) released “ELECTRONIC HEALTH RECORDS: Ongoing Stakeholder Involvement Needed in the Department of Veterans Affairs’ Modernization Effort” a week after Secretary of Veterans Affairs Robert Wilkie informed the House Appropriations Committee that the electronic health record rollout has been paused due to COVID-19. Nevertheless, the GAO concluded:
    • VA met its schedule for making the needed system configuration decisions that would enable the department to implement its new EHR system at the first VA medical facility, which was planned for July 2020. In addition, VA has formulated a schedule for making the remaining EHR system configuration decisions before implementing the system at additional facilities planned for fall 2020. VA’s EHRM program was generally effective in establishing decisionmaking procedures that were consistent with applicable federal standards for internal control.
    • However, VA did not always ensure the involvement of relevant stakeholders, including medical facility clinicians and staff, in the system configuration decisions. Specifically, VA did not always clarify terminology and include adequate detail in descriptions of local workshop sessions to medical facility clinicians and staff to ensure relevant representation at local workshop meetings. Participation of such stakeholders is critical to ensuring that the EHR system is configured to meet the needs of clinicians and support the delivery of clinical care.
    • The GAO recommended:
      • For implementation of the EHR system at future VA medical facilities, we recommend that the Secretary of VA direct the EHRM Executive Director to clarify terminology and include adequate detail in descriptions of local workshop sessions to facilitate the participation of all relevant stakeholders including medical facility clinicians and staff. (Recommendation 1)
  • Europol and the European Union Intellectual Property Office released a report to advise law enforcement agencies and policymakers that comes “in the shape of a case book and presents case examples showing how intellectual property (IP) crime is linked to other forms of criminality, including money laundering, document fraud, cybercrime, fraud, drug production and trafficking and terrorism.”
  • The New York University Stern Center for Business and Human Rights released its latest report on social media titled “Who Moderates the Social Media Giants? A Call to End Outsourcing” that calls for major reforms in how these companies moderate content so as to improve the online ecosystem and the conditions, pay, and efficacy of those actually doing the work. The report claimed “[d]espite the centrality of content moderation, however, major social media companies have marginalized the people who do this work, outsourcing the vast majority of it to third-party vendors…[and] [a] close look at this situation reveals three main problems:
    • In some parts of the world distant from Silicon Valley, the marginalization of content moderation has led to social media companies paying inadequate attention to how their platforms have been misused to stoke ethnic and religious violence. This has occurred in places ranging from Myanmar to Ethiopia. Facebook, for example, has expanded into far-flung markets, seeking to boost its user-growth numbers, without having sufficient moderators in place who understand local languages and cultures.
    • The peripheral status of moderators undercuts their receiving adequate counseling and medical care for the psychological side effects of repeated exposure to toxic online content. Watching the worst social media has to offer leaves many moderators emotionally debilitated. Too often, they don’t get the support or benefits they need and deserve.
    • The frequently chaotic outsourced environments in which moderators work impinge on their decisionmaking. Disputes with quality-control reviewers consume time and attention and contribute to a rancorous atmosphere.
  • The National Institute of Standards and Technology (NIST) “requests review and comments on the four-volume set of documents: Special Publication (SP) 800-63-3 Digital Identity Guidelines, SP 800-63A Enrollment and Identity Proofing, SP 800-63B Authentication and Lifecycle Management, and SP 800-63C Federation and Assertions…[that] presents the controls and technical requirements to meet the digital identity management assurance levels specified in each volume.” NIST “is requesting comments on the document in response to agency and industry implementations, industry and market innovation and the current threat environment.” Comments are due by 10 August.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) updated its Cyber Risks to Next Generation 911 White Paper and released Cyber Risks to 911: Telephony Denial of Service and PSAP Ransomware Poster. CISA explained:
    • Potential cyber risks to Next Generation 9-1-1 (NG9-1-1) systems do not undermine the benefits of NG9-1-1. Nevertheless, cyber risks present a new level of exposure that PSAP administrators must understand and actively manage as a part of a comprehensive risk management program. Systems are already under attack. As cyber threats grow in complexity and sophistication, attacks could be more severe against NG9-1-1 systems as attackers can launch multiple distributed attacks with greater automation from a broader geography and against more targets.  This document provides an overview of the cyber risk landscape, offers an approach for assessing and managing risk, and provides additional cybersecurity resources. 
  • The Government Accountability Office (GAO) released a number of technology reports:
    • The GAO recommended that the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA) “should incorporate additional management controls to better oversee and coordinate NNSA’s microelectronics activities. Such management controls could include investing the microelectronics coordinator with increased responsibility and authority, developing an overarching management plan, and developing a mission need statement and a microelectronics requirements document.”
  • In a report on the Department of Homeland Security’s adoption of Agile software development, the GAO found:
    • The Department of Homeland Security (DHS) has taken steps to implement selected leading practices in its transition from waterfall, an approach that historically delivered useable software years after program initiation, to Agile software development, which is focused on incremental and rapid delivery of working software in small segments. This quick, iterative approach is intended to deliver results faster and collect user feedback continuously.
    • DHS has fully addressed one of three leading practice areas for organization change management and partially addressed the other two. Collectively, these practices advise an organization to plan for, implement, and measure the impact when undertaking a significant change. The department has fully defined plans for transitioning to Agile development. DHS has partially addressed implementation—the department completed 134 activities but deferred roughly 34 percent of planned activities to a later date. These deferred activities are in progress or have not been started. With respect to the third practice, DHS clarified expected outcomes for the transition, such as reduced risk of large, expensive IT failures. However, these outcomes are not tied to target measures. Without these, DHS will not know if the transition is achieving its desired results.
    • DHS has also addressed four of the nine leading practices for adopting Agile software development. For example, the department has modified its acquisition policies to support Agile development methods. However, it needs to take additional steps to, among other things, ensure all staff are appropriately trained and establish expectations for tracking software code quality. By fully addressing leading practices, DHS can reduce the risk of continued problems in developing and acquiring current, as well as, future IT systems.
  • In another report, the GAO found that “[t]he Department of Defense’s (DOD) current initiative to transition to Internet Protocol version 6 (IPv6), which began in April 2017, follows at least two prior attempts to implement IPv6 that were halted by DOD.”
    • In February 2019, DOD released its own IPv6 planning and implementation guidance that listed 35 required transition activities, 18 of which were due to be completed before March 2020. DOD completed six of the 18 activities as of March 2020. DOD officials acknowledged that the department’s transition time frames were optimistic; they added that they had thought that the activities’ deadlines were reasonable until they started performing the work. Without an inventory, a cost estimate, or a risk analysis, DOD significantly reduced the probability that it could have developed a realistic transition schedule. Addressing these basic planning requirements would supply DOD with needed information that would enable the department to develop realistic, detailed, and informed transition plans and time frames.

Further Reading

  • Amid Pandemic and Upheaval, New Cyberthreats to the Presidential Election” – The New York Times. Beyond disinformation and misinformation campaigns, United States federal and state officials are grappling with a range of cyber-related threats, including some states’ insistence on using online voting, which the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) deemed “high risk” in an unreleased assessment the agency softened before distribution to state election officials. There are also worries that Russian or other nation-state hackers could access voting databases in ways that would call election day results into question, or that other hackers could break in, lock, and then ransom such databases. CISA and other stakeholders have articulated concerns about the security of voting machines, apps, and systems currently used by states.
  • Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM” – The Washington Post. The three tech giants responded to pressure from protestors to stop selling facial recognition technology to police departments, with Microsoft being the latest to make this pledge. The companies have said they will not sell this technology until there is a federal law regulating it. The American Civil Liberties Union said in its press release “Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community — not against it — to make that happen…[and] [t]his includes Microsoft halting its current efforts to advance legislation that would legitimize and expand the police use of facial recognition in multiple states nationwide.” The above-mentioned “Justice in Policing Act of 2020” (H.R.7120/S.3912) would not regulate the technology per se but would ban its use in body and car cameras. However, the companies said nothing about selling this technology to federal agencies such as US Immigration and Customs Enforcement. IBM, unlike Amazon and Microsoft, announced it was leaving the facial recognition field altogether, while Clearview AI, the controversial facial recognition firm, has not joined this pledge.
  • ICE Outlines How Investigators Rely on Third-Party Facial Recognition Services” – Nextgov. In a recently released privacy impact assessment, US Immigration and Customs Enforcement’s Homeland Security Investigations (HSI) explained its use of federal and state government and commercial facial recognition databases and technologies. The agency claimed this is to be used only after agents have exhausted more traditional means of identifying suspects and others and only if relevant to the investigation. The agency claimed “ICE HSI primarily uses this law enforcement tool to identify victims of child exploitation and human trafficking, subjects engaged in the online and sexual exploitation of children, subjects engaged in financial fraud schemes, identity and benefit fraud, and those identified as members of transnational criminal organizations.” Given what some call abuses and others call mistakes in US surveillance programs, it is probable ICE will exceed the limits it is setting on the use of this technology absent meaningful, independent oversight.
  • Zoom confirms Beijing asked it to suspend activists over Tiananmen Square meetings” – Axios. In a statement, Zoom admitted it responded to pressure from the People’s Republic of China (PRC) to shut down 4 June meetings to commemorate Tiananmen Square inside and outside the PRC, including in the United States if enough PRC nationals were participating. It is not hard to imagine the company being called to task in Washington and in western Europe for conforming to Beijing’s wishes. The company seems to be vowing to develop technology to block participants by country as opposed to shutting down meetings and a process to consider requests by nations to block certain content illegal within their borders.
  • Coronavirus conspiracy theorists threaten 5G cell towers, DHS memo warns” – CyberScoop. The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has warned telecommunications companies they should establish or better still already have in place security protocols to protect equipment, especially 5G gear, from sabotage arising from the conspiracy theory that 5G transmission either compromises immune systems making one more susceptible to COVID-19 or actually spreads the virus. There have been a spate of attacks in the United Kingdom, and a number of Americans are advocating for this theory, including actor Woody Harrelson.  
  • Police Officers’ Personal Info Leaked Online” – Associated Press. At the same time police are facing protestors in the streets of many American cities and towns, the sensitive personal information of some officers has been posted online, possibly putting them and their families at risk.
  • Facebook Helped the FBI Hack a Child Predator” – Vice’s Motherboard. In a story apparently leaked by Facebook, it is revealed that the company hired a third-party hacker to develop a zero-day exploit that helped unmask a nefarious, technologically adept person who was terrorizing and extorting female minors. This is supposedly the first time Facebook has engaged in conduct such as this to help law enforcement authorities. The company revealed it routinely tracks problematic users, including those exploiting children. This article would seem tailor-made to push back on the narrative being propagated by the Department of Justice and other nations’ law enforcement agencies that tech companies’ opposition to backdoors in encrypted systems helps sexual predators. There are also the usual concerns that any exploit of a platform or technology people use to remain private will ultimately be used broadly by law enforcement agencies, often to the detriment of human rights activists, dissidents, and journalists.
  • Amazon, Facebook and Google turn to deep network of political allies to battle back antitrust probes” – The Washington Post. These tech companies are utilizing means beyond traditional lobbying and public relations to wage the battle against US and state governments investigating them for possible antitrust and anticompetitive practices.
  • One America News, the Network That Spreads Conspiracies to the West Wing” – The New York Times. The upstart media outlet has received a boost in recent days from President Donald Trump, who repeated its as-yet unproven allegation that a Buffalo man knocked down by police was an antifa agitator. The outlet has received preferential treatment from the White House and is likely another means by which the White House will seek to get its message out.
  • EU says China behind ‘huge wave’ of Covid-19 disinformation” – The Guardian. European Commission Vice President Vĕra Jourová called out the People’s Republic of China (PRC) along with the Russian Federation for spreading prodigious amounts of disinformation, in what is likely a shift for Brussels towards a more adversarial stance on the PRC. As recently as March, a European Union body toned down a report on PRC activities, so this development seems to be a change of course.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Trump Administration Claims PRC Is Targeting COVID-19 Research Organizations

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

This week, the Trump Administration highlighted hacking by the People’s Republic of China (PRC) that targets entities researching COVID-19. This announcement is the latest in a string of public attributions made by the Trump Administration as part of its larger cybersecurity strategy. For example, the Administration identified “three malware variants—COPPERHEDGE, TAINTEDSCRIBE, and PEBBLEDASH—used by the North Korean government.” Nonetheless, this particular attribution also happens to dovetail, coincidentally or not, with the Trump Administration and Republican Party’s push to throw the focus on the PRC’s actions or inactions at the beginning of the COVID-19 pandemic in Wuhan, PRC.

In an unclassified public service announcement, the Federal Bureau of Investigation (FBI) and Cybersecurity and Infrastructure Security Agency (CISA) stated they “are issuing this announcement to raise awareness of the threat to COVID-19-related research.” The agencies said “[t]he FBI is investigating the targeting and compromise of U.S. organizations conducting COVID-19-related research by PRC-affiliated cyber actors and non-traditional collectors.” The FBI and CISA claimed that “[t]hese actors have been observed attempting to identify and illicitly obtain valuable intellectual property (IP) and public health data related to vaccines, treatments, and testing from networks and personnel affiliated with COVID-19-related research.” The agencies asserted “[t]he potential theft of this information jeopardizes the delivery of secure, effective, and efficient treatment options.” The FBI and CISA “urge all organizations conducting research in these areas to maintain dedicated cybersecurity and insider threat practices to prevent surreptitious review or theft of COVID-19-related material” and made the following recommendations (an illustrative sketch follows the list):

  • Assume that press attention affiliating your organization with COVID-19 related research will lead to increased interest and cyber activity.
  • Patch all systems for critical vulnerabilities, prioritizing timely patching for known vulnerabilities of internet-connected servers and software processing internet data.
  • Actively scan web applications for unauthorized access, modification, or anomalous activities.
  • Improve credential requirements and require multi-factor authentication.
  • Identify and suspend access of users exhibiting unusual activity.
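The recommendations are operational rather than technical prescriptions, but the last one in particular can be approximated cheaply from the authentication logs most organizations already keep. The following is a minimal, purely illustrative Python sketch of such a baseline check; the log format, column names, file names, and thresholds are hypothetical assumptions and are not drawn from the FBI/CISA announcement.

import csv
from collections import Counter, defaultdict
def flag_unusual_logins(history_csv, recent_csv, history_days=90, volume_multiplier=3):
    """Return {user: [reasons]} for accounts whose recent sign-ins look anomalous."""
    # Build a per-user baseline from a historical sign-in log (assumed columns: user, country).
    baseline_countries = defaultdict(set)
    baseline_counts = Counter()
    with open(history_csv, newline="") as f:
        for row in csv.DictReader(f):
            baseline_countries[row["user"]].add(row["country"])
            baseline_counts[row["user"]] += 1
    # Tally the most recent day's sign-ins from a log with the same columns.
    recent_countries = defaultdict(set)
    recent_counts = Counter()
    with open(recent_csv, newline="") as f:
        for row in csv.DictReader(f):
            recent_countries[row["user"]].add(row["country"])
            recent_counts[row["user"]] += 1
    # Flag sign-ins from countries never seen for that user and volumes far above the daily average.
    flags = defaultdict(list)
    for user, countries in recent_countries.items():
        new_countries = countries - baseline_countries.get(user, set())
        if new_countries:
            flags[user].append("sign-ins from new countries: " + ", ".join(sorted(new_countries)))
        daily_average = baseline_counts.get(user, 0) / history_days
        if recent_counts[user] > volume_multiplier * max(daily_average, 1.0):
            flags[user].append("sign-in volume well above the historical daily average")
    return dict(flags)
if __name__ == "__main__":
    # Hypothetical file names: a 90-day history and the last 24 hours of sign-ins.
    for user, reasons in flag_unusual_logins("logins_90d.csv", "logins_24h.csv").items():
        print(user, "->", "; ".join(reasons))

Nothing in this sketch substitutes for the dedicated cybersecurity and insider threat practices the agencies describe; it simply shows that surfacing "users exhibiting unusual activity" for review can start with very simple tooling.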

CISA Director Christopher Krebs contended “China’s long history of bad behavior in cyberspace is well documented, so it shouldn’t surprise anyone they are going after the critical organizations involved in the nation’s response to the Covid-19 pandemic.” He stressed CISA will “defend our interests aggressively.”

And, to no great surprise, the PRC denied the U.S.’s claims. A spokesperson for the PRC’s Foreign Ministry said:

We firmly oppose and fight all kinds of cyber-attacks conducted by hackers. We are leading the world in COVID-19 treatment and vaccine research. It is immoral to target China with rumors and slanders in the absence of any evidence.

Moreover, the PRC is not the only nation accused of trying to hack COVID-19 researchers. Iran has been accused of trying to get into the systems of the pharmaceutical company Gilead to access information on its COVID-19-related work. An Iranian spokesperson was quoted as claiming “[t]he Iranian government does not engage in cyber warfare…[and] [c]yber activities Iran engages in are purely defensive and to protect against further attacks on Iranian infrastructure.”

Last week, CISA and the United Kingdom’s Government Communications Headquarters’ (GCHQ) National Cyber Security Centre (NCSC) issued a joint advisory for the healthcare sector, especially companies and entities engaged in fighting COVID-19. The agencies stated that they have evidence that Advanced Persistent Threat (APT) groups “are exploiting the COVID-19 pandemic as part of their cyber operations.” NCSC and CISA “highlight[] ongoing activity by APT groups against organisations involved in both national and international COVID-19 responses…[and] describe[] some of the methods these actors are using to target organisations and provides mitigation advice.” The entities being targeted include healthcare bodies, pharmaceutical companies, academia, medical research organisations, and local government. However, the agencies do not identify the APT groups or their countries of origin in the advisory. 

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Second Volume of Senate Intelligence Committee Report On Election Interference, Part I

Recently, the Senate Intelligence Committee released the second of five planned volumes detailing its findings and recommendations arising from Russia’s actions during the 2016 U.S. election. Notably, the Senate Intelligence Committee broke with the Intelligence Community’s finding that Russian efforts were mostly aimed against former Secretary of State Hillary Clinton; rather, the Committee found the Russian social media campaign “was overtly and almost invariably supportive of then-candidate Trump, and to the detriment of Secretary Clinton’s campaign.” The committee found that there was a dedicated campaign to suppress African American voting. Moreover, paid advertisements were the lesser part of Russian efforts. The committee has found that Russian operatives continue to post divisive, misleading, and false messages on social media to further foment unrest and division in the U.S.

The committee called on the tech industry to ramp up information sharing efforts, increase the information consumers are provided with regarding the source and veracity of social media posts, and allow researchers and presumably U.S. intelligence agencies greater access to the data held by companies like Twitter and Facebook to better track and counter the efforts of countries like Russia. In terms of legislative action, the committee recommended that Congress pass legislation to remove obstacles to the sharing of information between social media and government agencies, to create a clearinghouse of such information, and to continue to “examine the full panoply of issues surrounding social media, particularly those items that may have some impact on the ability of users to masquerade as others and provide inauthentic content…such as privacy rules, identity validation, transparency in how data is collected and used, and monitoring for inauthentic or malign content, among others, deserve continued examination.” However, the committee did not call for the passage of privacy, data security, or election security legislation. The committee is recommending that the executive branch launch a public awareness initiative “focused on building media literacy from an early age would help build long-term resilience to foreign manipulation of our democracy,” “stand up an interagency task force to continually monitor and assess foreign country’s use of social media platforms for democratic interference,” and “develop a clear plan for notifying candidates, parties, or others associated with elections when those individuals or groups have been the victim of a foreign country’s use of social media platforms to interfere in an election.”

Notably, the only dissenting views appended to the second volume are those of Senator Ron Wyden (D-OR), who placed the blame firmly on weak data security and privacy laws in the U.S. that allow social media platforms to be used to target certain slices of the population:

Broad, effective data security and privacy policies, implemented across the platforms and enforced by a tough, competent government regulator, are necessary to prevent the loss of consumers’ data and the abuse of that data in election influence campaigns. Congress should pass legislation that addresses this concern in three respects. First, the Federal Trade Commission must be given the power to set baseline data security and privacy rules for companies that store or share Americans’ data, as well as the authority and resources to fine companies that violate those rules. Second, companies should be obligated to disclose how consumer information is collected and shared and provide consumers the names of every individual or institution with whom their data has been shared. Third, consumers must be given the ability to easily opt out of commercial data sharing.

None of the committee Republicans disputed the report’s findings or recommendations.