Further Reading, Other Developments, and Coming Events (12 January 2021)

Further Reading

  • “Biden’s NSC to focus on global health, climate, cyber and human rights, as well as China and Russia” By Karen DeYoung — The Washington Post. Like almost every incoming White House, the Biden team has announced a restructuring of the National Security Council (NSC) to better effectuate the President-elect’s policy priorities. To no one’s surprise, the volume on cybersecurity policy will be turned up. Another notable change is the plan to take “cross-cutting” approaches to issues that will likely meld foreign and domestic policy and national security and civil issues, meaning there could be a new look at offensive cyber operations, for example. It is possible President Biden decides to put the genie back in the bottle, so to speak, by re-imposing an interagency decision-making process as opposed to the Trump Administration’s approach of delegating discretion to the head of the National Security Agency/Cyber Command. Also, the NSC will focus on emerging technology, a likely response to the technology arms race the United States finds itself in with the People’s Republic of China.
  • “Exclusive: Pandemic relief aid went to media that promoted COVID misinformation” By Caitlin Dickson — yahoo! news. The consulting firm Alethea Group and the nonprofit Global Disinformation Index are claiming the COVID stimulus Paycheck Protection Program (PPP) provided loans and assistance to five firms that “were publishing false or misleading information about the pandemic, thus profiting off the infodemic” according to an Alethea Group vice president. This report follows an NBC News article claiming that 14 white supremacist and racist organizations have also received PPP loans. The Alethea Group and Global Disinformation Index named five entities who took PPP funds and kept spreading pandemic misinformation: Epoch Media Group, Newsmax Media, The Federalist, Liftable Media, and Prager University.
  • “Facebook shuts Uganda accounts ahead of vote” — France24. The social media company shuttered a number of Facebook and Instagram accounts related to government officials in Uganda ahead of an election on account of “Coordinated Inauthentic Behaviour” (CIB). This follows the platform shutting down accounts related to the French Army and Russia seeking to influence events in Africa. These and other actions may indicate the platform is starting to pay the non-western world the attention it has long paid the West; at least one former employee has argued the platform was negligent at best and reckless at worst in not properly resourcing efforts to police CIB throughout the Third World.
  • “China tried to punish European states for Huawei bans by adding eleventh-hour rule to EU investment deal” By Finbarr Bermingham — South China Morning Post. At nearly the end of talks on a People’s Republic of China (PRC)-European Union (EU) trade deal, PRC negotiators tried slipping in language that would have barred entry to the PRC’s cloud computing market to any country or company from a country that restricts Huawei’s services and products. This is alternately being seen as either standard Chinese negotiating tactics or an attempt to avenge the thwarting of the crown jewel in its telecommunications ambitions.
  • “Chinese regulators to push tech giants to share consumer credit data – sources” By Julie Zhu — Reuters. Ostensibly in a move to better manage the risks of too much unsafe lending, tech giants in the People’s Republic of China (PRC) will soon need to share data on consumer loans. It seems inevitable that such data will be used by Beijing to further crack down on undesirable people and elements within the PRC.
  • “The mafia turns social media influencer to reinforce its brand” By Miles Johnson — The Financial Times. Even Italy’s feared ’Ndrangheta is creating and curating a social media presence.

Other Developments

  • President Donald Trump signed an executive order (EO) that bans eight applications from the People’s Republic of China on much the same grounds as the EOs prohibiting TikTok and WeChat. If this EO is not rescinded by the Biden Administration, federal courts may block its implementation as has happened with the TikTok and WeChat EOs to date. Notably, courts have found that the Trump Administration exceeded its authority under the International Emergency Economic Powers Act (IEEPA), which may also be an issue in the proposed prohibition on Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office. Trump found:
    • that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the pace and pervasiveness of the spread in the United States of certain connected mobile and desktop applications and other software developed or controlled by persons in the People’s Republic of China, to include Hong Kong and Macau (China), continue to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by these Chinese connected software applications.
    • Trump directed that within 45 days of issuance of the EO, there shall be a prohibition on “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States, with persons that develop or control the following Chinese connected software applications, or with their subsidiaries, as those transactions and persons are identified by the Secretary of Commerce (Secretary) under subsection (e) of this section: Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office.”
  • The Government Accountability Office (GAO) issued its first statutorily required annual assessment of how well the United States Department of Defense (DOD) is managing its major information technology (IT) procurements. The DOD spent more than $36 billion of the $90 billion the federal government was provided for IT in FY 2020. The GAO was tasked with assessing how well the DOD did in using iterative development, managing costs and schedules, and implementing cybersecurity measures. The GAO found progress in the first two realms but a continued lag in deploying long-recommended best practices to ensure the security of the IT the DOD buys or builds. The GAO focused on 15 major IT acquisitions that qualify as administrative (i.e., “business”) and communications and information security (i.e., “non-business”). While there were no explicit recommendations made, the GAO found:
    • Ten of the 15 selected major IT programs exceeded their planned schedules, with delays ranging from 1 month for the Marine Corps’ CAC2S Inc 1 to 5 years for the Air Force’s Defense Enterprise Accounting and Management System-Increment 1.
    • …eight of the 10 selected major IT programs that had tested their then-current technical performance targets reported having met all of their targets…. As of December 2019, four programs had not yet conducted testing activities—Army’s ACWS, Air Force’s AFIPPS Inc 1, Air Force’s MROi, and Navy ePS. Testing data for one program, Air Force’s ISPAN Inc 4, were classified.
    • …officials from the 15 selected major IT programs we reviewed reported using software development approaches that may help to limit risks to cost and schedule outcomes. For example, major business IT programs reported using COTS software. In addition, most programs reported using an iterative software development approach and using a minimum deployable product. With respect to cybersecurity practices, all the programs reported developing cybersecurity strategies, but programs reported mixed experiences with respect to conducting cybersecurity testing. Most programs reported using operational cybersecurity testing, but less than half reported conducting developmental cybersecurity testing. In addition, programs that reported conducting cybersecurity vulnerability assessments experienced fewer increases in planned program costs and fewer schedule delays. Programs also reported a variety of challenges associated with their software development and cybersecurity staff.
    • 14 of the 15 programs reported using an iterative software development approach which, according to leading practices, may help reduce cost growth and deliver better results to the customer. However, programs also reported using an older approach to software development, known as waterfall, which could introduce risk for program cost growth because of its linear and sequential phases of development that may be implemented over a longer period of time. Specifically, two programs reported using a waterfall approach in conjunction with an iterative approach, while one was solely using a waterfall approach.
    • With respect to cybersecurity, programs reported mixed implementation of specific practices, contributing to program risks that might impact cost and schedule outcomes. For example, all 15 programs reported developing cybersecurity strategies, which are intended to help ensure that programs are planning for and documenting cybersecurity risk management efforts.
    • In contrast, only eight of the 15 programs reported conducting cybersecurity vulnerability assessments—systematic examinations of an information system or product intended to, among other things, determine the adequacy of security measures and identify security deficiencies. These eight programs experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report using cybersecurity vulnerability assessments.
  • The United States (U.S.) Department of Energy gave notice of a “Prohibition Order prohibiting the acquisition, importation, transfer, or installation of specified bulk-power system (BPS) electric equipment that directly serves Critical Defense Facilities (CDFs), pursuant to Executive Order 13920.” (See here for analysis of the executive order.) The Department explained:
    • Executive Order No. 13920 of May 1, 2020, Securing the United States Bulk-Power System (85 FR 26595 (May 4, 2020)) (E.O. 13920) declares that threats by foreign adversaries to the security of the BPS constitute a national emergency. A current list of such adversaries is provided in a Request for Information (RFI), issued by the Department of Energy (Department or DOE) on July 8, 2020 seeking public input to aid in its implementation of E.O. 13920. The Department has reason to believe, as detailed below, that the government of the People’s Republic of China (PRC or China), one of the listed adversaries, is equipped and actively planning to undermine the BPS. The Department has thus determined that certain BPS electric equipment or programmable components subject to China’s ownership, control, or influence, constitute undue risk to the security of the BPS and to U.S. national security. The purpose of this Order is to prohibit the acquisition, importation, transfer, or subsequent installation of such BPS electric equipment or programmable components in certain sections of the BPS.
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) added the People’s Republic of China’s (PRC) Semiconductor Manufacturing International Corporation (SMIC) to its Entity List in a move intended to starve the company of key U.S. technology needed to manufacture high-end semiconductors. Therefore, any U.S. entity wishing to do business with SMIC will need a license, which the Trump Administration is not likely to grant. The Department of Commerce explained in its press release:
    • The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring U.S. exporters to apply for a license to sell to the company.  Items uniquely required to produce semiconductors at advanced technology nodes—10 nanometers or below—will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military-civil fusion efforts.
    • BIS also added more than sixty other entities to the Entity List for actions deemed contrary to the national security or foreign policy interest of the United States.  These include entities in China that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets.
    • As explained in the Federal Register notice:
      • SMIC is added to the Entity List as a result of China’s military-civil fusion (MCF) doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex. The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring exporters, reexporters, and in-country transferors of such technology to apply for a license to sell to the company. Items uniquely required to produce semiconductors at advanced technology nodes 10 nanometers or below will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military modernization efforts. This rule adds SMIC and the following ten entities related to SMIC: Semiconductor Manufacturing International (Beijing) Corporation; Semiconductor Manufacturing International (Tianjin) Corporation; Semiconductor Manufacturing International (Shenzhen) Corporation; SMIC Semiconductor Manufacturing (Shanghai) Co., Ltd.; SMIC Holdings Limited; Semiconductor Manufacturing South China Corporation; SMIC Northern Integrated Circuit Manufacturing (Beijing) Co., Ltd.; SMIC Hong Kong International Company Limited; SJ Semiconductor; and Ningbo Semiconductor International Corporation (NSI).
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) amended its Export Administration Regulations “by adding a new ‘Military End User’ (MEU) List, as well as the first tranche of 103 entities, which includes 58 Chinese and 45 Russian companies” per its press release. The Department asserted:
    • The U.S. Government has determined that these companies are ‘military end users’ for purposes of the ‘military end user’ control in the EAR that applies to specified items for exports, reexports, or transfers (in-country) to China, Russia, and Venezuela when such items are destined for a prohibited ‘military end user.’
  • The Australia Competition and Consumer Commission (ACCC) rolled out another piece of the Consumer Data Right (CDR) scheme under the Competition and Consumer Act 2010, specifically accreditation guidelines “to provide information and guidance to assist applicants with lodging a valid application to become an accredited person” with whom Australians may direct data holders to share their data. The ACCC explained:
    • The CDR aims to give consumers more access to and control over their personal data.
    • Being able to easily and efficiently share data will improve consumers’ ability to compare and switch between products and services and encourage competition between service providers, leading to more innovative products and services for consumers and the potential for lower prices.
    • Banking is the first sector to be brought into the CDR.
    • Accredited persons may receive a CDR consumer’s data from a data holder at the request and consent of the consumer. Any person, in Australia or overseas, who wishes to receive CDR data to provide products or services to consumers under the CDR regime, must be accredited.
  • Australia’s government has released its “Data Availability and Transparency Bill 2020” that “establishes a new data sharing scheme for federal government data, underpinned by strong safeguards to mitigate risks and simplified processes to make it easier to manage data sharing requests” according to the summary provided in Parliament by the government’s point person. In the accompanying “Explanatory Memorandum,” the following summary was provided:
    • The Bill establishes a new data sharing scheme which will serve as a pathway and regulatory framework for sharing public sector data. ‘Sharing’ involves providing controlled access to data, as distinct from open release to the public.
    • To oversee the scheme and support best practice, the Bill creates a new independent regulator, the National Data Commissioner (the Commissioner). The Commissioner’s role is modelled on other regulators such as the Australian Information Commissioner, with whom the Commissioner will cooperate.
    • The data sharing scheme comprises the Bill and disallowable legislative instruments (regulations, Minister-made rules, and any data codes issued by the Commissioner). The Commissioner may also issue non-legislative guidelines that participating entities must have regard to, and may release other guidance as necessary.
    • Participants in the scheme are known as data scheme entities:
      • Data custodians are Commonwealth bodies that control public sector data, and have the right to deal with that data.
      • Accredited users are entities accredited by the Commissioner to access to public sector data. To become accredited, entities must satisfy the security, privacy, infrastructure and governance requirements set out in the accreditation framework.
      • Accredited data service providers (ADSPs) are entities accredited by the Commissioner to perform data services such as data integration. Government agencies and users will be able to draw upon ADSPs’ expertise to help them to share and use data safely.
    • The Bill does not compel sharing. Data custodians are responsible for assessing each sharing request, and deciding whether to share their data if satisfied the risks can be managed.
    • The data sharing scheme contains robust safeguards to ensure sharing occurs in a consistent and transparent manner, in accordance with community expectations. The Bill authorises data custodians to share public sector data with accredited users, directly or through an ADSP, where:
      • Sharing is for a permitted purpose – government service delivery, informing government policy and programs, or research and development;
      • The data sharing principles have been applied to manage the risks of sharing; and
      • The terms of the arrangement are recorded in a data sharing agreement.
    • Where the above requirements are met, the Bill provides limited statutory authority to share public sector data, despite other Commonwealth, State and Territory laws that prevent sharing. This override of non-disclosure laws is ‘limited’ because it occurs only when the Bill’s requirements are met, and only to the extent necessary to facilitate sharing.
  • The United Kingdom’s Competition and Markets Authority (CMA) is asking interested parties to provide input on the proposed acquisition of a British semiconductor company by a United States (U.S.) company before it launches a formal investigation later this year. However, the CMA is limited to competition considerations, and any national security aspects of the proposed deal would need to be investigated by Prime Minister Boris Johnson’s government. The CMA stated:
    • US-based chip designer and producer NVIDIA Corporation (NVIDIA) plans to purchase the Intellectual Property Group business of UK-based Arm Limited (Arm) in a deal worth $40 billion. Arm develops and licenses intellectual property (IP) and software tools for chip designs. The products and services supplied by the companies support a wide range of applications used by businesses and consumers across the UK, including desktop computers and mobile devices, game consoles and vehicle computer systems.
    • CMA added:
      • The CMA will look at the deal’s possible effect on competition in the UK. The CMA is likely to consider whether, following the takeover, Arm has an incentive to withdraw, raise prices or reduce the quality of its IP licensing services to NVIDIA’s rivals.
  • The Israeli firm NSO Group has been accused by an entity associated with a British university of using real-time cell phone data to sell its COVID-19 contact tracing app, Fleming, in ways that may have broken the laws of a handful of nations. Forensic Architecture, a research agency based at Goldsmiths, University of London, argued:
    • In March 2020, with the rise of COVID-19, Israeli cyber-weapons manufacturer NSO Group launched a contact-tracing technology named ‘Fleming’. Two months later, a database belonging to NSO’s Fleming program was found unprotected online. It contained more than five hundred thousand datapoints for more than thirty thousand distinct mobile phones. NSO Group denied there was a security breach. Forensic Architecture received and analysed a sample of the exposed database, which suggested that the data was based on ‘real’ personal data belonging to unsuspecting civilians, putting their private information in risk
    • Forensic Architecture added:
      • Leaving a database with genuine location data unprotected is a serious violation of the applicable data protection laws. That a surveillance company with access to personal data could have overseen this breach is all the more concerning.
      • This could constitute a violation of the General Data Protection Regulation (GDPR) based on where the database was discovered as well as the laws of the nations where NSO Group allegedly collected personal data.
    • The NSO Group denied the claims and was quoted by TechCrunch:
      • “We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
      • “As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting. Per the agency’s tentative agenda, “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations”:
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading, Other Developments, and Coming Events (15 December)

Further Reading

  • “DHS, State and NIH join list of federal agencies — now five — hacked in major Russian cyberespionage campaign” By Ellen Nakashima and Craig Timberg — The Washington Post; “Scope of Russian Hack Becomes Clear: Multiple U.S. Agencies Were Hit” By David E. Sanger, Nicole Perlroth and Eric Schmitt — The New York Times. The list of United States (U.S.) government agencies breached by the Sluzhba vneshney razvedki Rossiyskoy Federatsii (SVR), the Russian Federation’s Foreign Intelligence Service, has grown. Now the Departments of Homeland Security, Defense, and State and the National Institutes of Health report they have been breached. It is unclear whether Fortune 500 companies in the U.S. and elsewhere and U.S. nuclear laboratories were also breached in this huge, sophisticated espionage exploit. It appears the Russians were selective and careful, and these hackers may have accessed only information held on U.S. government systems. And yet the Trump Administration continues to issue equivocal statements neither denying nor acknowledging the hack, leaving the public to depend on quotes from anonymous officials. Perhaps admitting the Russians hacked U.S. government systems would throw light on Russian interference four years ago, an attack the President is loath to even contemplate. In contrast, President Donald Trump has made all sorts of wild, untrue claims about vote totals being hacked despite no evidence supporting his assertions. It appears that the declaration of mission accomplished by some agencies of the Trump Administration over the absence of Russian hacking of or interference with the 2020 election will be overshadowed by what may prove the most damaging hack of U.S. government systems ever.
  • “Revealed: China suspected of spying on Americans via Caribbean phone networks” By Stephanie Kirchgaessner — The Guardian. This story depends on one source, so take it for what it is worth, but allegedly the People’s Republic of China (PRC) is using vulnerabilities in mobile communications networks to hack into the phones of Americans travelling in the Caribbean. If so, the PRC may be exploiting the same Signaling System 7 (SS7) weaknesses an Israeli firm, Circles, is using to sell access to phones, at least according to a report published recently by the University of Toronto’s Citizen Lab.
  • “The Cartel Project | Revealed: The Israelis Making Millions Selling Cyberweapons to Latin America” By Amitai Ziv — Haaretz. Speaking of Israeli companies, the NSO Group, among others, is actively selling offensive cyber and surveillance capabilities to Central American nations, often through practices that may be corrupt.
  • “U.S. Schools Are Buying Phone-Hacking Tech That the FBI Uses to Investigate Terrorists” By Tom McKay and Dhruv Mehrotra — Gizmodo. Tools from the Israeli firm Cellebrite and its competitors are being used by school systems across the United States (U.S.) to access communications on students’ phones. U.S. Supreme Court caselaw gives schools very wide discretion for searches, and the Fourth Amendment is largely null and void on school grounds.
  • “‘It’s Hard to Prove’: Why Antitrust Suits Against Facebook Face Hurdles” By Mike Isaac and Cecilia Kang — The New York Times. The development of antitrust law over the last few decades may have laid an uphill path for the Federal Trade Commission (FTC) and state attorneys general in securing a breakup of Facebook, something that has not happened on a large scale since the historic splintering of AT&T in the early 1980s.
  • “Exclusive: Israeli Surveillance Companies Are Siphoning Masses Of Location Data From Smartphone Apps” By Thomas Brewster — Forbes. It turns out Israeli firms are using a feature (or what many would call a bug) in the online advertising system that allows those looking to buy ads to get close to real-time location data from application developers looking to sell advertising space. By putting out a shingle as a Demand Side Platform, it is possible to access reams of location data, and two Israeli companies are doing just that, offering the service of locating and tracking people via this quirk in online advertising. And this is not just companies in Israel: there is a company under scrutiny in the United States (U.S.) that may have used these practices and then provided location data to federal agencies.

Other Developments

  • The Government Accountability Office (GAO) evaluated the United States’ (U.S.) Department of Defense’s (DOD) electromagnetic spectrum (EMS) operations and found gaps in the DOD’s efforts to maintain EMS superiority over the Russian Federation and the People’s Republic of China (PRC). The GAO concluded:
    • Studies have shown that adversaries of the United States, such as China and Russia, are developing capabilities and strategies that could affect DOD superiority in the information environment, including the EMS. DOD has also reported that loss of EMS superiority could result in the department losing control of the battlefield, as its Electromagnetic Spectrum Operations (EMSO) supports many warfighting functions across all domains. DOD recognizes the importance of EMSO to military operations in actual conflicts and in operations short of open conflict that involve the broad information environment. However, gaps we identified in DOD’s ability to develop and implement EMS-related strategies have impeded progress in meeting DOD’s goals. By addressing gaps we found in five areas—(1) the processes and procedures to integrate EMSO throughout the department, (2) governance reforms to correct diffuse organization, (3) responsibility by an official with appropriate authority, (4) a strategy implementation plan, and (5) activities that monitor and assess the department’s progress in implementing the strategy—DOD can capitalize on progress that it has already made and better support ensuring EMS superiority.
    • The GAO recommended:
      • The Secretary of Defense should ensure that the Vice Chairman of the Joint Chiefs of Staff, as Senior Designated Official of the Electromagnetic Spectrum Operations Cross-Functional Team (CFT), identifies the procedures and processes necessary to provide for integrated defense-wide strategy, planning, and budgeting with respect to joint electromagnetic spectrum operations, as required by the FY19 NDAA. (Recommendation 1)
      • The Secretary of Defense should ensure that the Vice Chairman of the Joint Chiefs of Staff as Senior Designated Official of the CFT proposes EMS governance, management, organizational, and operational reforms to the Secretary. (Recommendation 2)
      • The Secretary of Defense should assign clear responsibility to a senior official with authority and resources necessary to compel action for the long-term implementation of the 2020 strategy in time to oversee the execution of the 2020 strategy implementation plan. (Recommendation 3)
      • The Secretary of Defense should ensure that the designated senior official for long-term strategy implementation issues an actionable implementation plan within 180 days following issuance of the 2020 strategy. (Recommendation 4)
      • The Secretary of Defense should ensure that the designated senior official for long-term strategy implementation creates oversight processes that would facilitate the department’s implementation of the 2020 strategy. (Recommendation 5)
  • A forerunner to Apple’s App Store has sued the company, claiming it has monopolized applications on its operating system to the detriment of other parties and done the same with respect to its payment system. The company behind Cydia argues that it conceived of and created the first application store for the iPhone, offering a range of programs Apple did not. Cydia claims that once Apple understood how lucrative an app store would be, it blocked Cydia and established its own store as the exclusive means through which programs can be installed and used on iOS. Furthermore, this has enabled Apple to levy a 30% fee on all in-application purchases, allegedly a $50 billion market annually. This is the second high-profile suit against Apple this year. Epic Games, maker of the popular game Fortnite, sued Apple earlier this year on many of the same grounds after Epic began allowing users to buy directly from it at a 30% discount and Apple responded by removing the game from the App Store, which has blocked players from downloading updated versions. That litigation has just begun. In its complaint, Cydia asserts:
    • Historically, distribution of apps for a specific operating system (“OS”) occurred in a separate and robustly competitive market. Apple, however, began coercing users to utilize no other iOS app distribution service but the App Store, coupling it closer and closer to the iPhone itself in order to crowd out all competition. But Apple did not come up with this idea initially—it only saw the economic promise that iOS app distribution represented after others, like [Cydia], demonstrated that value with their own iOS app distribution products/services. Faced with this realization, Apple then decided to take that separate market (as well as the additional iOS app payment processing market described herein) for itself.
    • Cydia became hugely popular by offering a marketplace to find and obtain third party iOS applications that greatly expanded the capabilities of the stock iPhone, including games, productivity applications, and audio/visual applications such as a video recorder (whereas the original iPhone only allowed still camera photos). Apple subsequently took many of these early third party applications’ innovations, incorporating them into the iPhone directly or through apps.
    • But far worse than simply copying others’ innovations, Apple also recognized that it could reap enormous profits if it cornered this fledgling market for iOS app distribution, because that would give Apple complete power over iOS apps, regardless of the developer. Apple therefore initiated a campaign to eliminate competition for iOS app distribution altogether. That campaign has been successful and continues to this day. Apple did (and continues to do) so by, inter alia, tying the App Store app to iPhone purchases by preinstalling it on all iOS devices and then requiring it as the default method to obtain iOS apps, regardless of user preference for other alternatives; technologically locking down the iPhone to prevent App Store competitors like Cydia from even operating on the device; and imposing contractual terms on users that coerce and prevent them from using App Store competitors. Apple has also mandated that iOS app developers use it as their sole option for app payment processing (such as in-app purchases), thus preventing other competitors, such as Cydia, from offering the same service to those developers.
    • Through these and other anticompetitive acts, Apple has wrongfully acquired and maintained monopoly power in the market (or aftermarket) for iOS app distribution, and in the market (or aftermarket) for iOS app payment processing. Apple has frozen Cydia and all other competitors out of both markets, depriving them of the ability to compete with the App Store and to offer developers and consumers better prices, better service, and more choice. This anticompetitive conduct has unsurprisingly generated massive profits and unprecedented market capitalization for Apple, as well as incredible market power.
  • California is asking to join the antitrust suit against Google filed by the United States Department of Justice (DOJ) and eleven state attorneys general. This antitrust action centers on Google’s practices of making Google the default search engine on Android devices and paying browsers and other technology entities to make Google the default search engine. However, a number of states that had initially joined the joint state investigation of Google have opted not to join this action and will instead continue to investigate, signaling a much broader case than the one filed in the United States District Court for the District of Columbia. In any event, if the suit does proceed (and a change in Administration could result in a swift change of course), it may take years to be resolved. Of course, given the legion of leaks from the DOJ and state attorneys general offices about the pressure U.S. Attorney General William Barr placed on staff and attorneys to bring a case before the election, there is criticism that rushing the case may have resulted in a weaker, less comprehensive action that Google may ultimately fend off.
    • And, there is likely to be another lawsuit against Google filed by other state attorneys general. A number of attorneys general who had originally joined the effort led by Texas Attorney General Ken Paxton in investigating Google released a statement at the time the DOJ suit was filed, indicating their investigation would continue, presaging a different, possibly broader lawsuit that might also address Google’s role in other markets. The attorneys general of New York, Colorado, Iowa, Nebraska, North Carolina, Tennessee, and Utah did not join the case that was filed but may soon file a related but parallel case. They stated:
      • Over the last year, both the U.S. DOJ and state attorneys general have conducted separate but parallel investigations into Google’s anticompetitive market behavior. We appreciate the strong bipartisan cooperation among the states and the good working relationship with the DOJ on these serious issues. This is a historic time for both federal and state antitrust authorities, as we work to protect competition and innovation in our technology markets. We plan to conclude parts of our investigation of Google in the coming weeks. If we decide to file a complaint, we would file a motion to consolidate our case with the DOJ’s. We would then litigate the consolidated case cooperatively, much as we did in the Microsoft case.
  • France’s Commission nationale de l’informatique et des libertés (CNIL) handed down multimillion-euro fines on Google and Amazon for placing cookies on users’ devices without consent. CNIL fined Google a total of €100 million and Amazon €35 million because its investigations of both entities determined “when a user visited [their] website, cookies were automatically placed on his or her computer, without any action required on his or her part…[and] [s]everal of these cookies were used for advertising purposes.”
    • CNIL explained the decision against Google:
      • [CNIL] noticed three breaches of Article 82 of the French Data Protection Act:
      • Deposit of cookies without obtaining the prior consent of the user
        • When a user visited the website google.fr, several cookies used for advertising purposes were automatically placed on his or her computer, without any action required on his or her part.
        • Since this type of cookies can only be placed after the user has expressed his or her consent, the restricted committee considered that the companies had not complied with the requirement provided for in Article 82 of the French Data Protection Act regarding the collection of prior consent before placing cookies that are not essential to the service.
      • Lack of information provided to the users of the search engine google.fr
        • When a user visited the page google.fr, an information banner displayed at the bottom of the page, with the following note “Privacy reminder from Google”, in front of which were two buttons: “Remind me later” and “Access now”.
        • This banner did not provide the user with any information regarding cookies that had however already been placed on his or her computer when arriving on the site. The information was also not provided when he or she clicked on the button “Access now”.
        • Therefore, the restricted committee considered that the information provided by the companies did not enable the users living in France either to be previously and clearly informed regarding the deposit of cookies on their computer or, therefore, to be informed of the purposes of these cookies and the available means enabling to refuse them.
      • Partial failure of the « opposition » mechanism
        • When a user deactivated the ad personalization on the Google search by using the available mechanism from the button “Access now”, one of the advertising cookies was still stored on his or her computer and kept reading information aimed at the server to which it is attached.
        • Therefore, the restricted committee considered that the “opposition” mechanism set up by the companies was partially defective, breaching Article 82 of the French Data Protection Act.
    • CNIL explained the case against Amazon:
      • [CNIL] noticed two breaches of Article 82 of the French Data Protection Act:
      • Deposit of cookies without obtaining the prior consent of the user
        • The restricted committee noted that when a user visited one of the pages of the website amazon.fr, a large number of cookies used for advertising purposes was automatically placed on his or her computer, before any action required on his or her part. Yet, the restricted committee recalled that this type of cookies, which are not essential to the service, can only be placed after the user has expressed his or her consent. It considered that the deposit of cookies at the same time as arriving on the site was a practice which, by its nature, was incompatible with a prior consent.
      • Lack of information provided to the users of the website amazon.fr
        • First, the restricted committee noted that, in the case of a user visiting the website amazon.fr, the information provided was neither clear, nor complete.
        • It considered that the information banner displayed by the company, which was “By using this website, you accept our use of cookies allowing to offer and improve our services. Read More.”, only contained a general and approximate information regarding the purposes of all the cookies placed. In particular, it considered that, by reading the banner, the user could not understand that cookies placed on his or her computer were mainly used to display personalized ads. It also noted that the banner did not explain to the user that it could refuse these cookies and how to do it.
        • Then, the restricted committee noticed that the company’s failure to comply with its obligation was even more obvious regarding the case of users that visited the website amazon.fr after they had clicked on an advertisement published on another website. It underlined that in this case, the same cookies were placed but no information was provided to the users about that.
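    • The rule CNIL enforced in both decisions can be illustrated with a small sketch (hypothetical code, not drawn from either decision): non-essential cookies, such as advertising cookies, may only be placed after the user has affirmatively opted in, while cookies essential to the service may be placed immediately.

```typescript
// Hypothetical sketch of consent-gated cookie placement, illustrating the
// rule CNIL enforced under Article 82: non-essential (e.g. advertising)
// cookies require prior, affirmative user consent; essential cookies do not.
type Consent = "unset" | "granted" | "refused";

const cookieJar = new Map<string, string>(); // stands in for document.cookie
let consent: Consent = "unset";

function grantConsent(): void {
  consent = "granted";
}

// Returns true if the cookie was actually placed.
function setCookie(name: string, value: string, essential: boolean): boolean {
  if (!essential && consent !== "granted") {
    return false; // no advertising cookie until the user opts in
  }
  cookieJar.set(name, value);
  return true;
}
```

    On this model, a site that drops advertising cookies the moment a user arrives (as CNIL found Google and Amazon did) is, in effect, calling `setCookie` before any consent signal exists.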
  • Senator Amy Klobuchar (D-MN) wrote the Secretary of Health and Human Services (HHS) to express “serious concerns regarding recent reports on the data collection practices of Amazon’s health-tracking bracelet (Halo) and to request information on the actions [HHS] is taking to ensure users’ health data is secure.” Klobuchar stated:
    • The Halo is a fitness tracker that users wear on their wrists. The tracker’s smartphone application (app) provides users with a wide-ranging analysis of their health by tracking a range of biological metrics including heartbeat patterns, exercise habits, sleep patterns, and skin temperature. The fitness tracker also enters into uncharted territory by collecting body photos and voice recordings and transmitting this data for analysis. To calculate the user’s body fat percentage, the Halo requires users to take scans of their body using a smartphone app. These photos are then temporarily sent to Amazon’s servers for analysis while the app returns a three-dimensional image of the user’s body, allowing the user to adjust the image to see what they would look like with different percentages of body fat. The Halo also offers a tone analysis feature that examines the nuances of a user’s voice to indicate how the user sounds to others. To accomplish this task, the device has built-in microphones that listen and record a user’s voice by taking periodic samples of speech throughout the day if users opt in to the feature.
    • Recent reports have raised concerns about the Halo’s access to this extensive personal and private health information. Among publicly available consumer health devices, the Halo appears to collect an unprecedented level of personal information. This raises questions about the extent to which the tracker’s transmission of biological data may reveal private information regarding the user’s health conditions and how this information can be used. Last year, a study by BMJ (formerly the British Medical Journal) found that 79 percent of health apps studied by researchers were found to share user data in a manner that failed to provide transparency about the data being shared. The study concluded that health app developers routinely share consumer data with third-parties and that little transparency exists around such data sharing.
    • Klobuchar asked the Secretary of Health and Human Services Alex Azar II to “respond to the following questions:
      • What actions is HHS taking to ensure that fitness trackers like Halo safeguard users’ private health information?
      • What authority does HHS have to ensure the security and privacy of consumer data collected and analyzed by health tracking devices like Amazon’s Halo?
      • Are additional regulations required to help strengthen privacy and security protections for consumers’ personal health data given the rise of health tracking devices? Why or why not?
      • Please describe in detail what additional authority or resources that the HHS could use to help ensure the security and protection of consumer health data obtained through health tracking devices like the Halo.

Coming Events

  • On 15 December, the Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing titled “The Role of Private Agreements and Existing Technology in Curbing Online Piracy” with these witnesses:
    • Panel I
      • Ms. Ruth Vitale, Chief Executive Officer, CreativeFuture
      • Mr. Probir Mehta, Head of Global Intellectual Property and Trade Policy, Facebook, Inc.
      • Mr. Mitch Glazier, Chairman and CEO, Recording Industry Association of America
      • Mr. Joshua Lamel, Executive Director, Re:Create
    • Panel II
      • Ms. Katherine Oyama, Global Director of Business Public Policy, YouTube
      • Mr. Keith Kupferschmid, Chief Executive Officer, Copyright Alliance
      • Mr. Noah Becker, President and Co-Founder, AdRev
      • Mr. Dean S. Marks, Executive Director and Legal Counsel, Coalition for Online Accountability
  • The Senate Armed Services Committee’s Cybersecurity Subcommittee will hold a closed briefing on Department of Defense Cyber Operations on 15 December with these witnesses:
    • Mr. Thomas C. Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy, Office of the Under Secretary of Defense for Policy
    • Mr. Jeffrey R. Jones, Vice Director, Command, Control, Communications and Computers/Cyber, Joint Staff, J-6
    • Ms. Katherine E. Arrington, Chief Information Security Officer for the Assistant Secretary of Defense for Acquisition, Office of the Under Secretary of Defense for Acquisition and Sustainment
    • Rear Admiral Jeffrey Czerewko, United States Navy, Deputy Director, Global Operations, J39, J3, Joint Staff
  • The Senate Banking, Housing, and Urban Affairs Committee’s Economic Policy Subcommittee will conduct a hearing titled “US-China: Winning the Economic Competition, Part II” on 16 December with these witnesses:
    • The Honorable Will Hurd, Member, United States House of Representatives;
    • Derek Scissors, Resident Scholar, American Enterprise Institute;
    • Melanie M. Hart, Ph.D., Senior Fellow and Director for China Policy, Center for American Progress; and
    • Roy Houseman, Legislative Director, United Steelworkers (USW).
  • On 17 December the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency’s (CISA) Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force will convene for a virtual event, “Partnership in Action: Driving Supply Chain Security.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Naya Shaw from Pexels

Further Reading, Other Developments, and Coming Events (10 December)

Further Reading

  • “Social media superspreaders: Why Instagram, not Facebook, will be the real battleground for COVID-19 vaccine misinformation” By Isobel Asher Hamilton — Business Insider. According to one group, COVID-19 anti-vaccination lies and misinformation are proliferating on Instagram despite the efforts of its parent company, Facebook, to find and remove such content. There has been dramatic growth in such content on Instagram, and Facebook seems to be applying its COVID-19 standards more loosely there. In fact, some people kicked off Facebook for violating that platform’s standards on COVID-19 are still on Instagram spreading the same lies, misinformation, and disinformation. For example, British anti-vaccination figure David Icke was removed from Facebook for claiming that COVID-19 was caused by or related to 5G, but he retains a significant following on Instagram.
  • “‘Grey area’: China’s trolling drives home reality of social media war” By Chris Zappone — The Sydney Morning Herald. The same concept that is fueling aggressive cyber activity at a level below outright war has spread to diplomacy. The People’s Republic of China (PRC) has been waging “gray” social media campaigns against a number of Western nations, including Australia, mainly by propagating lies and misinformation. The most recent example is the spreading of a fake photo of an Australian soldier appearing to kill an Afghan child. This false material seems designed to distract from the real disputes between the two nations arising from clashing policies on trade and human rights. The PRC’s activities do not appear to violate Australia’s foreign interference laws and seem to have left Canberra at a loss as to how to respond effectively.
  • “Facebook to start policing anti-Black hate speech more aggressively than anti-White comments, documents show” By Elizabeth Dwoskin, Nitasha Tiku and Heather Kelly — The Washington Post. Facebook will apparently revamp its algorithms to prioritize the types of hate speech that have traditionally been aimed at women and minority groups. Until now, all attacks were treated equally, so that something like “white people suck” would be treated the same way as anti-Semitic content. Facebook resisted changes for years even though experts and civil rights groups made the case that people of color, women, and LGBTI people endure far more abuse online. There is probably no connection between Facebook’s more aggressive content moderation policies and the advent of a new administration in Washington more receptive to claims that social media platforms allow the abuse of these people.
  • “How Joe Biden’s Digital Team Tamed the MAGA Internet” By Kevin Roose — The New York Times. Take this piece with a block of salt. The why they won articles are almost always rife with fallacies, including the rationale that if a candidate won, his or her strategy must have worked. It is not clear that the Biden Campaign’s online messaging strategy of being nice and emphasizing positive values actually beat the Trump Campaign’s “Death Star” so much as the President’s mishandling of the pandemic response and cratering of the economy did him in.
  • “Coronavirus Apps Show Promise but Prove a Tough Sell” By Jennifer Valentino-DeVries — The New York Times. It appears the intersection of concerns about private and public sector surveillance from two very different groups has worked to keep down rates of adopting smartphone COVID tracking apps in the United States. There are people wary of private sector practices to hoover up as much data as possible, and others concerned about the government’s surveillance activities. Consequently, many are shunning Google and Apple’s COVID contact tracing apps to the surprise of government, industry, and academia. A pair of studies show resistance to downloading or using such apps even if there are very strong privacy safeguards. This result may well be a foreseeable outcome from U.S. policies that have allowed companies and the security services to collect and use vast quantities of personal information.
  • “UAE target of cyber attacks after Israel deal, official says” — Reuters. A top cybersecurity official in the United Arab Emirates claimed his nation’s financial services industries were targeted for cyber attack and implied Iran and affiliated hackers were responsible.

Other Developments

  • President-elect Joe Biden announced his intention to nominate California Attorney General Xavier Becerra to serve as the next Secretary of Health and Human Services (HHS). If confirmed by the Senate, California Governor Gavin Newsom would name Becerra’s successor who would need to continue enforcement of the “California Consumer Privacy Act” (CCPA) (AB 375) while also working towards the transition to the “California Privacy Rights Act” (Proposition 24) approved by California voters last month. The new statute establishes the California Privacy Protection Agency that will assume the Attorney General’s responsibilities regarding the enforcement of California’s privacy laws. However, Becerra’s successor may play a pivotal role in the transition between the two regulators and the creation of the new regulations needed to implement Proposition 24.
  • The Senate approved the nomination of Nathan Simington to be a Commissioner of the Federal Communications Commission (FCC) by a 49-46 vote. Once FCC Chair Ajit Pai steps down, the agency will be left with two Democratic and two Republican Commissioners, pending the Biden Administration’s nominee to fill Pai’s spot. If the Senate stays Republican, it is possible the calculation could be made that a deadlocked FCC is better than a Democratic agency that could revive net neutrality rules among other Democratic and progressive policies. Consequently, Simington’s confirmation may be the first step toward an FCC unable to develop substantive policy.
  • Another federal court has broadened the injunction against the Trump Administration’s ban on TikTok to encompass the entirety of the Department of Commerce’s September order meant to stop the usage of the application in the United States (U.S.). It is unclear whether the Trump Administration will appeal, and if it should, whether a court would decide the case before the Biden Administration begins in mid-January. The United States District Court for the District of Columbia found that TikTok “established that the government likely exceeded IEEPA’s express limitations as part of an agency action that was arbitrary and capricious” and would likely suffer irreparable harm, making an injunction an appropriate remedy.
  • The United States’ National Security Agency (NSA) “released a Cybersecurity Advisory on Russian state-sponsored actors exploiting CVE-2020-4006, a command-injection vulnerability in VMware Workspace One Access, Access Connector, Identity Manager, and Identity Manager Connector” and provided “mitigation and detection guidance.”
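  • Command injection, the class of flaw behind CVE-2020-4006, generally arises when untrusted input is interpolated into a shell command line. A generic Python illustration (not the actual VMware vulnerability, whose details are specific to those products) contrasts the vulnerable pattern with a safer one:

```python
# Generic illustration of command injection. An attacker-controlled value
# interpolated into a shell command string can smuggle extra commands past
# the one the developer intended.

def build_shell_command(host: str) -> str:
    # VULNERABLE pattern: string interpolation into a shell command line.
    # If this string is handed to a shell, metacharacters like ';' let an
    # attacker append a second command.
    return f"ping -c 1 {host}"

def build_argv(host: str) -> list[str]:
    # Safer pattern: an argument vector (e.g. for subprocess.run without
    # shell=True). No shell parses the input, so ';' is just a literal
    # character inside a single argument.
    return ["ping", "-c", "1", host]

malicious = "example.com; cat /etc/passwd"
print(build_shell_command(malicious))  # two commands if run via a shell
print(build_argv(malicious))           # ';' stays inert inside one argument
```

  The mitigation guidance in advisories like the NSA's typically pairs patching with exactly this kind of input handling: never let untrusted input reach a shell parser.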
  • The United States (U.S.) Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issued a joint alert, warning that U.S. think tanks are being targeted by “persistent continued cyber intrusions by advanced persistent threat (APT) actors.” The agencies stated “[t]his malicious activity is often, but not exclusively, directed at individuals and organizations that focus on international affairs or national security policy.” CISA and the FBI stated their “guidance may assist U.S. think tanks in developing network defense procedures to prevent or rapidly detect these attacks.” The agencies added:
    • APT actors have relied on multiple avenues for initial access. These have included low-effort capabilities such as spearphishing emails and third-party message services directed at both corporate and personal accounts, as well as exploiting vulnerable web-facing devices and remote connection capabilities. Increased telework during the COVID-19 pandemic has expanded workforce reliance on remote connectivity, affording malicious actors more opportunities to exploit those connections and to blend in with increased traffic. Attackers may leverage virtual private networks (VPNs) and other remote work tools to gain initial access or persistence on a victim’s network. When successful, these low-effort, high-reward approaches allow threat actors to steal sensitive information, acquire user credentials, and gain persistent access to victim networks.
    • Given the importance that think tanks can have in shaping U.S. policy, CISA and FBI urge individuals and organizations in the international affairs and national security sectors to immediately adopt a heightened state of awareness and implement the critical steps listed in the Mitigations section of this Advisory.
  • A group of Democratic United States Senators has written the CEO of Alphabet and Google about its advertising policies and how its platforms may have been used to spread misinformation and contribute to voter suppression. Thus far, most of the scrutiny of the 2020 election and content moderation policy has fallen on Facebook and Twitter even though Google-owned YouTube has been flagged as hosting comparable amounts of misinformation. Senators Amy Klobuchar (D-MN) and Mark Warner (D-VA) led the effort and expressed “serious concerns regarding recent reports that Google is profiting from the sale of ads spreading election-related disinformation” to Alphabet and Google CEO Sundar Pichai. Klobuchar, Warner, and their colleagues asserted:
    • Google is also helping organizations spreading election-related disinformation to raise revenue by placing ads on their websites. While Google has some policies in place to prevent the spread of election misinformation, they are not properly enforced and are inadequate. We urge you to immediately strengthen and improve enforcement of your policies on election-related disinformation and voter suppression, reject all ads spreading election-related disinformation, and stop providing advertising services on sites that spread election-related disinformation.
    • …a recent study by the Global Disinformation Index (GDI) found that Google serves ads on 145 out of 200 websites GDI examined that publish disinformation.
    • Similarly, a recent report from the Center for Countering Digital Hate (CCDH) found that Google has been placing ads on websites publishing disinformation designed to undermine elections. In examining just six websites publishing election-related disinformation, CCDH estimates that they receive 40 million visits a month, generating revenue for these sites of up to $3.4 million annually from displaying Google ads. In addition, Google receives $1.6 million from the advertisers’ payments annually.  These sites published stories ahead of the 2020 general election that contained disinformation alleging that voting by mail was not secure, that mail-in voting was being introduced to “steal the election,” and that election officials were “discarding mail ballots.” 
  • A bipartisan group of United States Senators on one committee are urging Congressional leadership to include funding to help telecommunications companies remove and replace Huawei and ZTE equipment and to aid the Federal Communications Commission (FCC) in drafting accurate maps of broadband service in the United States (U.S.). Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS) and a number of his colleagues wrote the leadership of both the Senate and House and argued:
    • we urge you to provide full funding for Public Law 116-124, the Secure and Trusted Communications Networks Act, and Public Law 116-130, the Broadband DATA Act.   
    • Closing the digital divide and winning the race to 5G are critical to America’s economic prosperity and global leadership in technology. However, our ability to connect all Americans and provide access to next-generation technology will depend in large part on the security of our communications infrastructure. The Secure and Trusted Communications Networks Act (“rip and replace”) created a program to help small, rural telecommunications operators remove equipment posing a security threat to domestic networks and replace it with equipment from trusted providers. This is a national security imperative. Fully funding this program is essential to protecting the integrity of our communications infrastructure and the future viability of our digital economy at large.
    • In addition to safeguarding the security of the nation’s communications systems, developing accurate broadband maps is also critically important. The United States faces a persistent digital divide, and closing this divide requires accurate maps that show where broadband is available and where it is not. Current maps overstate broadband availability, which prevents many underserved communities, particularly in rural areas, from receiving the funds needed to build or expand broadband networks to millions of unconnected Americans. Fully funding the Broadband DATA Act will ensure more accurate broadband maps and better stewardship over the millions of dollars the federal government awards each year to support broadband deployment. Without these maps, the government risks overbuilding existing networks, duplicating funding already provided, and leaving communities unserved.  
  • The Government Accountability Office (GAO) released an assessment of 5G policy options that “discusses (1) how the performance goals and expected uses are to be realized in U.S. 5G wireless networks; (2) the challenges that could affect the performance or usage of 5G wireless networks in the U.S.; and (3) policy options to address these challenges.” The report had been requested by the chairs and ranking members of the House Armed Services, Senate Armed Services, Senate Intelligence, and House Intelligence Committees along with other Members. The GAO stated “[w]hile 5G is expected to deliver significantly improved network performance and greater capabilities, challenges may hinder the performance or usage of 5G technologies in the U.S. We grouped the challenges into the following four categories:
    • availability and efficient use of spectrum
    • security of 5G networks
    • concerns over data privacy
    • concerns over possible health effects
    • The GAO presented the following policy options along with opportunities and considerations for each:
      • Spectrum-Sharing Technologies Opportunities:
        • Could allow for more efficient use of the limited spectrum available for 5G and future generations of wireless networks.
        • It may be possible to leverage existing 5G testbeds for testing the spectrum sharing technologies developed through applied research.
      • Spectrum-Sharing Technologies Considerations:
        • Research and development is costly, must be coordinated and administered, and its potential benefits are uncertain. Identifying a funding source, setting up the funding mechanism, or determining which existing funding streams to reallocate will require detailed analysis.
      • Coordinated Cybersecurity Monitoring Opportunities:
        • A coordinated monitoring program would help ensure the entire wireless ecosystem stays knowledgeable about evolving threats, in close to real time; identify cybersecurity risks; and allow stakeholders to act rapidly in response to emerging threats or actual network attacks.
      • Coordinated Cybersecurity Monitoring Considerations:
        • Carriers may not be comfortable reporting incidents or vulnerabilities, and determinations would need to be made about what information is disclosed and how the information will be used and reported.
      • Cybersecurity Requirements Opportunities:
        • Taking these steps could produce a more secure network. Without a baseline set of security requirements, the implementation of network security practices is likely to be piecemeal and inconsistent.
        • Using existing protocols or best practices may decrease the time and cost of developing and implementing requirements.
      • Cybersecurity Requirements Considerations:
        • Adopting network security requirements would be challenging, in part because defining and implementing the requirements would have to be done on an application-specific basis rather than as a one-size-fits-all approach.
        • Designing a system to certify network components would be costly and would require a centralized entity, be it industry-led or government-led.
      • Privacy Practices Opportunities:
        • Development and adoption of uniform privacy practices would benefit from existing privacy practices that have been implemented by states, other countries, or that have been developed by federal agencies or other organizations.
      • Privacy Practices Considerations:
        • Privacy practices come with costs, and policymakers would need to balance the need for privacy with the direct and indirect costs of implementing privacy requirements. Imposing requirements can be burdensome, especially for smaller entities.
      • High-band Research Opportunities:
        • Could result in improved statistical modeling of antenna characteristics and more accurate representation of propagation characteristics.
        • Could result in improved understanding of any possible health effects from long-term exposure to high-band radio frequency emissions.
      • High-band Research Considerations:
        • Research and development is costly and must be coordinated and administered, and its potential benefits are uncertain. Policymakers will need to identify a funding source or determine which existing funding streams to reallocate.

Coming Events

  • The Senate Judiciary Committee will hold an executive session on 10 December at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Tima Miroshnichenko from Pexels

Further Reading, Other Developments, and Coming Events (9 December)

Further Reading

  • “Secret Amazon Reports Expose the Company’s Surveillance of Labor and Environmental Groups” By Lauren Kaori Gurley — Vice’s Motherboard. Yet another article by Vice drawing back the curtain on Amazon’s labor practices, especially its apparently fervent desire to stop unionization efforts. This piece shines light on the company’s Global Security Operations Center, which tracks labor organizing and union activities among Amazon’s workers and monitors environmental and human rights groups on social media. The company has even hired Pinkerton operatives to surveil its warehouse employees. Although the focus is on Europe because the leaked emails on which the story is based pertain to activities on that continent, there is no reason to expect the same tactics are not being used elsewhere. Moreover, the company may be violating the much stricter laws in Europe protecting workers and union activities.
  • “Cyber Command deployed personnel to Estonia to protect elections against Russian threat” By Shannon Vavra — cyberscoop. It was recently revealed that personnel from the United States (U.S.) Cyber Command were deployed to Estonia to work with that country’s Defense Forces Cyber Command to fend off potential Russian attacks during the U.S. election. This follows another recent “hunt forward” mission for Cyber Command in Montenegro, another nation on the “frontline” of Russian hacking activities. Whether this has any effect beyond building trust and capacity between nations opposed to state-sponsored hacking and disinformation is unclear.
  • “How China Is Buying Up the West’s High-Tech Sector” By Elizabeth Braw — Foreign Policy. This piece by a fellow at the right-wing American Enterprise Institute (AEI) makes the case that reviewing and potentially banning direct foreign investment by the People’s Republic of China (PRC) in the United States (U.S.), European Union (EU), and European nations is probably not cutting off PRC access to cutting-edge technology. PRC entities are investing directly or indirectly as limited partners in venture capital firms and are probably still gaining access to new technology. For example, an entity associated with the University of Cambridge is working with Huawei on a private 5G wireless network even though London is advancing legislation and policy to ban the PRC giant from United Kingdom (UK) networks. The author advocates for expanding the regulation of foreign investment to include limited partnerships and other structures that are apparently allowing the PRC to continue investing in and reaping the benefits of Western venture capital. There is hope, however, as a number of Western nations are starting government-funded venture capital firms to fund promising technology.
  • “Twitter expands hate speech rules to include race, ethnicity” By Katie Paul — Reuters. The social media platform announced that it is “further expanding our hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.” Color of Change, a human rights group that was part of a coalition pressuring Twitter and other platforms, called the change “essential concessions” but took issue with the timing, stating it would have had more impact had it been made before the election. A spokesperson added “[t]he jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users…[and] [v]oid of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings.”
  • “White House drafts executive order that could restrict global cloud computing companies” By Steven Overly and Eric Geller — Politico. The Trump Administration may make another foray into trying to ban foreign companies from key United States (U.S.) critical infrastructure, and this time would reportedly bar U.S. cloud companies like Microsoft, Amazon, and others from partnering with foreign companies or entities that pose risk to the U.S. through the use of these U.S. systems to conduct cyber-attacks. This seems like another attempt to strike at the People’s Republic of China’s (PRC) technology firms. If issued, it remains to be seen how a Biden Administration would use or implement such a directive given that there is not enough time for the Trump government to see such an order through to the end. In any event, one can be sure that tech giants have already begun pressing both the outgoing and incoming Administrations against any such order, and most likely Congress as well.

Other Developments

  • A bipartisan group of Senators and Representatives issued the framework for a $908 billion COVID-19 stimulus package that is reportedly the subject of serious negotiation in Congress. The framework details $10 billion for broadband but provides no detail on how these funds would be distributed.
  • The Australian Competition & Consumer Commission (ACCC) announced the signing of the Australian Product Safety Pledge, “a voluntary initiative that commits its signatories to a range of safety related responsibilities that go beyond what is legally required of them” in e-commerce. The ACCC stated “AliExpress, Amazon Australia, Catch.com.au and eBay Australia, who together account for a significant share of online sales in Australia, are the first businesses to sign the pledge, signifying their commitment to consumers’ safety through a range of commitments such as removing unsafe product listings within two days of being notified by the ACCC.” The pledge consists of 12 commitments:
    • Regularly consult the Product Safety Australia website and other relevant sources for information on recalled/unsafe products. Take appropriate action[1] on these products once they are identified.
    • Provide a dedicated contact point(s) for Australian regulatory authorities to notify and request take-downs of recalled/unsafe products.
    • Remove identified unsafe product listings within two business days of the dedicated contact point(s) receiving a take-down request from Australian regulatory authorities. Inform authorities on the action that has been taken and any relevant outcomes.
    • Cooperate with Australian regulatory authorities in identifying, as far as possible, the supply chain of unsafe products by responding to data/information requests within ten business days should relevant information not be publicly available.
    • Have an internal mechanism for processing data/information requests and take-downs of unsafe products.
    • Provide a clear pathway for consumers to notify the pledge signatory directly of unsafe product listings. Such notifications are treated according to the signatory’s processes and where responses to consumers are appropriate, they are given within five business days.
    • Implement measures to facilitate sellers’ compliance with Australian product safety laws. Share information with sellers on compliance training/guidance, including a link to the ACCC’s Selling online page on the Product Safety Australia website.
    • Cooperate with Australian regulatory authorities and sellers to inform consumers[2] about relevant recalls or corrective actions on unsafe products.
    • Set up processes aimed at preventing or restricting the sale of banned, non-compliant and recalled products as appropriate.
    • Put in place reasonable measures to act against repeat offenders selling unsafe products, including in cooperation with Australian regulatory authorities.
    • Take measures aimed at preventing the reappearance of unsafe product listings already removed.
    • Explore the potential use of new technologies and innovation to improve the detection and removal of unsafe products.
  • Senator Ron Wyden (D-OR) and Representative Lauren Underwood (D-IL) introduced “The Federal Cybersecurity Oversight Act” (S.4912), which would amend the “Federal Cybersecurity Enhancement Act of 2015” (P.L. 114-113) to restrict the use of exceptions to longstanding requirements that federal agencies use measures such as multi-factor authentication and encryption. Currently, federal agencies exempt themselves on a number of grounds. Wyden and Underwood’s bill would tighten this process by making the exceptions good only for a year at a time and requiring that the Office of Management and Budget (OMB) approve the exception. In a fact sheet, they claimed:
    • [T]he bill requires the Director of the Office of Management and Budget to approve all waivers, which can currently be self-issued by the head of the agency. To request a waiver, the agency head will have to certify that:
      • It would be excessively burdensome to implement the particular requirement;
      • The particular requirement is not necessary to secure the agency system and data; and
      • The agency has taken all necessary steps to secure the agency system and data.
  • The Government Accountability Office (GAO) looked at longstanding United States (U.S.) efforts to buy common services and equipment in bulk, known as Category Management. The GAO found progress but saw room for considerably more. GAO noted:
    • Since 2016, the Office of Management and Budget (OMB) has led efforts to improve how agencies buy these products and services through the category management initiative, which directs agencies across the government to buy more like a single enterprise. OMB has reported the federal government has saved $27.3 billion in 3 years through category management.
  • The GAO concluded:
    • The category management initiative has saved the federal government billions of dollars, and in some instances, enhanced agencies’ mission capabilities. However, the initiative has opportunities to accomplish much more. To date, OMB has focused primarily on contracting aspects of the initiative, and still has several opportunities to help agencies improve how they define their requirements for common products and services. OMB can take concrete steps to improve how agencies define these requirements through more robust guidance and training, changes to leadership delegations and cost savings reporting, and the development of additional metrics to measure implementation of the initiative.
    • Additionally, OMB can lead the development of a coordinated strategy that addresses government-wide data challenges hindering agencies’ efforts to assess their spending and identify prices paid for common products and services.
    • Finally, OMB can tailor additional training courses to provide more relevant information to agency personnel responsible for small business matters, and improve public reporting about the impact of category management on small businesses. In doing so, OMB can enhance the quality of the information provided to the small business community and policymakers. Through these efforts to further advance the category management initiative, OMB can help federal agencies accomplish their missions more effectively while also being better stewards of taxpayer dollars.
    • The GAO made the following recommendations:
      • The Director of the Office of Management and Budget should emphasize in its overarching category management guidance the importance of effectively defining requirements for common products and services when implementing the category management initiative. (Recommendation 1)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the General Services Administration’s Category Management Program Management Office, and other appropriate offices, to develop additional tailored training for Senior Accountable Officials and agency personnel who manage requirements for common products and services. (Recommendation 2)
      • The Director of the Office of Management and Budget should account for agencies’ training needs, including training needs for personnel who define requirements for common products and services, when setting category management training goals. (Recommendation 3)
      • The Director of the Office of Management and Budget should ensure that designated Senior Accountable Officials have the authority necessary to hold personnel accountable for defining requirements for common products and services as well as contracting activities. (Recommendation 4)
      • The Director of the Office of Management and Budget should report cost savings from the category management initiative by agency. (Recommendation 5)
      • The Director of the Office of Management and Budget should work with the Category Management Leadership Council and the Performance Improvement Council to establish additional performance metrics for the category management initiative that are related to agency requirements. (Recommendation 6)
      • The Director of the Office of Management and Budget should, in coordination with the Category Management Leadership Council and the Chief Data Officer Council, establish a strategic plan to coordinate agencies’ responses to government-wide data challenges hindering implementation of the category management initiative, including challenges involving prices-paid and spending data. (Recommendation 7)
      • The Director of the Office of Management and Budget should work with the General Services Administration’s Category Management Program Management Office and other organizations, as appropriate, to develop additional tailored training for Office of Small Disadvantaged Business Utilization personnel that emphasizes information about small business opportunities under the category management initiative. (Recommendation 8)
      • The Director of the Office of Management and Budget should update its methodology for calculating potentially duplicative contract reductions to strengthen the linkage between category management actions and the number of contracts eliminated. (Recommendation 9)
      • The Director of the Office of Management and Budget should identify the time frames covered by underlying data when reporting on how duplicative contract reductions have impacted small businesses. (Recommendation 10)
  • The chair and ranking member of the House Commerce Committee are calling on the Federal Communications Commission (FCC) to take preparatory steps before Congress provides funding to telecommunications providers to remove and replace Huawei and ZTE equipment. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) and Ranking Member Greg Walden (R-OR) noted the “Secure and Trusted Communications Networks Act” (P.L. 116-124):
    • provides the Federal Communications Commission (FCC) with several new authorities to secure our communications supply chain, including the establishment and administration of the Secure and Trusted Communications Networks Reimbursement Program (Program). Through this Program, small communications providers may seek reimbursement for the cost of removing and replacing suspect network equipment. This funding is critical because some small and rural communications providers would not otherwise be able to afford these upgrades. Among the responsibilities entrusted to the FCC to carry out the Program is the development of a list of suggested replacements for suspect equipment, including physical and virtual communications equipment, application and management software, and services.
    • Pallone and Walden conceded that Congress has not yet provided funds but asked the FCC to take some steps:
      • First, the FCC should develop and release the list of eligible replacement equipment, software, and services as soon as possible. Second, the agency should reassure companies that they will not jeopardize their eligibility for reimbursement under the Program just because replacement equipment purchases were made before the Program is funded, assuming other eligibility criteria are met.
  • The Office of Special Counsel (OSC) wrote one of the whistleblowers at the United States Agency for Global Media (USAGM) and indicated it has ordered the head of USAGM to investigate the claims of malfeasance at the agency. The OSC stated:
    • On December 2, 2020, after reviewing the information you submitted, we directed the Chief Executive Officer (CEO) of USAGM to order an investigation into the following allegations and report back to OSC pursuant to 5 U.S.C. § 1213(c). Allegations to be investigated include that, since June 2020, USAGM:
      • Repeatedly violated the Voice of America (VOA) firewall—the law that protects VOA journalists’ “professional independence and integrity”;
      • Engaged in gross mismanagement and abuse of authority by:
        • Terminating the Presidents of each USAGM-funded network—Radio Free Asia (RFA), Radio Free Europe/Radio Liberty (RFE/RL), the Middle East Broadcasting Networks (MBN), and the Office of Cuba Broadcasting (OCB)—as well as the President and the CEO of the Open Technology Fund (OTF);
        • Dismissing the bipartisan board members that governed the USAGM-funded networks, replacing those board members with largely political appointees, and designating the USAGM CEO as Chairman;
        • Revoking all authority from various members of USAGM’s Senior Executive Service (SES) and reassigning those authorities to political appointees outside of the relevant offices;
        • Removing the VOA Editor for News Standards and Best Practices—a central figure in the VOA editorial standards process and a critical component of the VOA firewall—from his position and leaving that position vacant;
        • Similarly removing the Executive Editor of RFA;
        • Suspending the security clearances of six of USAGM’s ten SES members and placing them on administrative leave; and
        • Prohibiting several offices critical to USAGM’s mission—including the Offices of General Counsel, Chief Strategy, and Congressional and Public Affairs—from communicating with outside parties without the front office’s express knowledge and consent;
      • Improperly froze all agency hiring, contracting, and Information Technology migrations, and either refused to approve such decisions or delayed approval until the outside reputation and/or continuity of agency or network operations, and at times safety of staff, were threatened;
      • Illegally repurposed, and pressured career staff to illegally repurpose, congressionally appropriated funds and programs without notifying Congress; and
      • Refused to authorize the renewal of the visas of non-U.S. citizen journalists working for the agency, endangering both the continuity of agency operations and those individuals’ safety.

Coming Events

  • The Senate Judiciary Committee will hold an executive session on 10 December at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)


Image by Makalu from Pixabay

Further Reading, Other Developments, and Coming Events (8 December)

Further Reading

  • “Facebook failed to put fact-check labels on 60% of the most viral posts containing Georgia election misinformation that its own fact-checkers had debunked, a new report says” By Tyler Sonnemaker — Business Insider. Despite its vows to improve its handling of false content, the platform is not consistently taking down such material related to the runoffs for the Georgia Senate seats. The group behind this finding argues it is because Facebook does not want to. What is left unsaid is that engagement drives revenue, and so Facebook’s incentive is not to police all violations. Rather, it is to take down just enough to be able to say it is doing something.
  • “Federal Labor Agency Says Google Wrongly Fired 2 Employees” By Kate Conger and Noam Scheiber — The New York Times. The National Labor Relations Board (NLRB) has reportedly sided with two employees Google fired for activities that are traditionally considered labor organizing. The two engineers had been dismissed for allegedly violating the company’s data security practices when they researched the company’s retention of a union-busting firm and sought to alert others about organizing. Even though Google is vowing to fight the action, which has not been finalized, it may well settle given the view of Big Tech in Washington these days. This action could also foretell how a Biden Administration NLRB may look at the labor practices of these companies.
  • “U.S. states plan to sue Facebook next week: sources” By Diane Bartz — Reuters. We could see state and federal antitrust suits against Facebook this week. One investigation led by New York Attorney General Tish James could include 40 states, although the grounds for alleged violations have not been leaked at this point. It may be Facebook’s acquisition of potential rivals Instagram and WhatsApp that has allowed it to dominate the social messaging market. The Federal Trade Commission (FTC) may also file suit, and, again, the grounds are unknown. The European Commission (EC) is also investigating Facebook for possible violations of European Union (EU) antitrust law over the company’s use of the personal data it holds and its operation of its online marketplace.
  • “The Children of Pornhub” By Nicholas Kristof — The New York Times. This column comprehensively traces the reprehensible recent history of Mindgeek, a Canadian conglomerate that owns Pornhub, where one can find reams of child and non-consensual pornography. Why Ottawa has not cracked down on this firm is a mystery. The passage and implementation of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164), which narrowed the liability shield under 47 USC 230, has forced the company to remove content, a significant change from its indifference before the statutory change in law. Kristof suggests some easy, common-sense changes Mindgeek could implement to combat the presence of this illegal material, but it seems like the company will do just enough to say it is acting without seriously reforming its platform. Why would it? There is too much money to be made. Additionally, those fighting against this sort of material have been pressuring payment platforms to stop doing business with Mindgeek. PayPal has forsworn any interaction, and due to pressure Visa and Mastercard are “reviewing” their relationships with Mindgeek and Pornhub. In a statement to a different news outlet, Pornhub claimed it is “unequivocally committed to combating child sexual abuse material (CSAM), and has instituted a comprehensive, industry-leading trust and safety policy to identify and eradicate illegal material from our community.” The company further claimed “[a]ny assertion that we allow CSAM is irresponsible and flagrantly untrue….[w]e have zero tolerance for CSAM.”
  • “Amazon and Apple Are Powering a Shift Away From Intel’s Chips” By Don Clark — The New York Times. Two tech giants have chosen new, faster, cheaper chips, signaling a possible industry shift away from Intel, the firm that has been a significant player for decades. Intel will not go quietly, of course, and a key variable is whether must-have software and applications are rewritten to accommodate the new chips from a British firm, Arm.

Other Developments

  • The Government Accountability Office (GAO) and the National Academy of Medicine (NAM) have released a joint report on artificial intelligence in healthcare, consisting of GAO’s Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care and NAM’s Special Publication: Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. GAO’s report “discusses three topics: (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges to the development and adoption of these tools, and (3) policy options to maximize benefits and mitigate challenges to the use of AI tools to augment patient care.” NAM’s “paper aims to provide an analysis of: 1) current technologies and future applications of AI in HSOHC, 2) the logistical steps and challenges involved in integrating AI- HSOHC applications into existing provider workflows, and 3) the ethical and legal considerations of such AI tools, followed by a brief proposal of potential key initiatives to guide the development and adoption of AI in health settings outside the hospital and clinic (HSOHC).”
    • The GAO “identified five categories of clinical applications where AI tools have shown promise to augment patient care: predicting health trajectories, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management.” The GAO “also identified three categories of administrative applications where AI tools have shown promise to reduce provider burden and increase the efficiency of patient care: recording digital clinical notes, optimizing operational processes, and automating laborious tasks.” The GAO stated:
      • This technology assessment also identifies challenges that hinder the adoption and impact of AI tools to augment patient care, according to stakeholders, experts, and the literature. Difficulties accessing sufficient high-quality data may hamper innovation in this space. Further, some available data may be biased, which can reduce the effectiveness and accuracy of the tools for some people. Addressing bias can be difficult because the electronic health data do not currently represent the general population. It can also be challenging to scale tools up to multiple locations and integrate them into new settings because of differences in institutions and the patient populations they serve. The limited transparency of AI tools used in health care can make it difficult for providers, regulators, and others to determine whether an AI tool is safe and effective. A greater dispersion of data across providers and institutions can make securing patient data difficult. Finally, one expert described how existing case law does not specifically address AI tools, which can make providers and patients reticent to adopt them. Some of these challenges are similar to those identified previously by GAO in its first publication in this series, such as the lack of high-quality, structured data, and others are more specific to patient care, such as liability concerns.
    • The GAO “described six policy options:”
      • Collaboration. Policymakers could encourage interdisciplinary collaboration between developers and health care providers. This could result in AI tools that are easier to implement and use within an existing workflow.
      • Data Access. Policymakers could develop or expand high-quality data access mechanisms. This could help developers address bias concerns by ensuring data are representative, transparent, and equitable.
      • Best Practices. Policymakers could encourage relevant stakeholders and experts to establish best practices (such as standards) for development, implementation, and use of AI technologies. This could help with deployment and scalability of AI tools by providing guidance on data, interoperability, bias, and formatting issues.
      • Interdisciplinary Education. Policymakers could create opportunities for more workers to develop interdisciplinary skills. This could allow providers to use AI tools more effectively, and could be accomplished through a variety of methods, including changing medical curricula or grants.
      • Oversight Clarity. Policymakers could collaborate with relevant stakeholders to clarify appropriate oversight mechanisms. Predictable oversight could help ensure that AI tools remain safe and effective after deployment and throughout their lifecycle.
      • Status Quo. Policymakers could allow current efforts to proceed without intervention.
    • NAM claimed:
      • Numerous AI-powered health applications designed for personal use have been shown to improve patient outcomes, building predictions based on large volumes of granular, real-time, and individualized behavioral and medical data. For instance, some forms of telehealth, a technology that has been critical during the COVID-19 pandemic, benefit considerably from AI software focused on natural language processing, which enables efficient triaging of patients based on urgency and type of illness. Beyond patient-provider communication, AI algorithms relevant to diabetic and cardiac care have demonstrated remarkable efficacy in helping patients manage their blood glucose levels in their day-to-day lives and in detecting cases of atrial fibrillation. AI tools that monitor and synthesize longitudinal patient behaviors are also particularly useful in psychiatric care, where the exact timing of interventions is often critical. For example, smartphone-embedded sensors that track location and proximity of individuals can alert clinicians of possible substance use, prompting immediate intervention. On the population health level, these individual indicators of activity and health can be combined with environmental- and system-level data to generate predictive insight into local and global health trends. The most salient example of this may be the earliest warnings of the COVID-19 outbreak, issued in December 2019 by two private AI technology firms.
      • Successful implementation and widespread adoption of AI applications in HSOHC requires careful consideration of several key issues related to personal data, algorithm development, and health care insurance and payment. Chief among them are data interoperability, standardization, privacy, ameliorating systemic biases in algorithms, reimbursement of AI-assisted services, quality improvement, and integration of AI tools into provider workflows. Overcoming these challenges and optimizing the impact of AI tools on clinical outcomes will involve engaging diverse stakeholders, deliberately designing AI tools and interfaces, rigorously evaluating clinical and economic utility, and diffusing and scaling algorithms across different health settings. In addition to these potential logistical and technical hurdles, it is imperative to consider the legal and ethical issues surrounding AI, particularly as it relates to the fair and humanistic deployment of AI applications in HSOHC. Important legal considerations include the appropriate designation of accountability and liability of medical errors resulting from AI-assisted decisions for ensuring the safety of patients and consumers. Key ethical challenges include upholding the privacy of patients and their data—particularly with regard to non-HIPAA covered entities involved in the development of algorithms—building unbiased AI algorithms based on high-quality data from representative populations, and ensuring equitable access to AI technologies across diverse communities.
  • The National Institute of Standards and Technology (NIST) published a “new study of face recognition technology created after the onset of the COVID-19 pandemic [that] shows that some software developers have made demonstrable progress at recognizing masked faces.” In Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face Recognition Accuracy with Face Masks Using Post-COVID-19 Algorithms (NISTIR 8331), NIST stated the “report augments its predecessor with results for more recent algorithms provided to NIST after mid-March 2020.” NIST said that “[w]hile we do not have information on whether or not a particular algorithm was designed with face coverings in mind, the results show evidence that a number of developers have adapted their algorithms to support face recognition on subjects potentially wearing face masks.” NIST stated that
    • The following results represent observations on algorithms provided to NIST both before and after the COVID-19 pandemic to date. We do not have information on whether or not a particular algorithm was designed with face coverings in mind. The results documented capture a snapshot of algorithms submitted to the FRVT 1:1 in face recognition on subjects potentially wearing face masks.
      • False rejection performance: All algorithms submitted after the pandemic continue to give increased false non-match rates (FNMR) when the probes are masked. While a few pre-pandemic algorithms still remain within the most accurate on masked photos, some developers have submitted algorithms after the pandemic showing significantly improved accuracy and are now among the most accurate in our test.
      • Evolution of algorithms on face masks: We observe that a number of algorithms submitted since mid-March 2020 show notable reductions in error rates with face masks over their pre-pandemic predecessors. When comparing error rates for unmasked versus masked faces, the median FNMR across algorithms submitted since mid-March 2020 has been reduced by around 25% from the median pre-pandemic results. The figure below presents examples of developer evolution on both masked and unmasked datasets. For some developers, false rejection rates in their algorithms submitted since mid-March 2020 decreased by as much as a factor of 10 over their pre-pandemic algorithms, which is evidence that some providers are adapting their algorithms to handle facemasks. However, in the best cases, when comparing results for unmasked images to masked images, false rejection rates have increased from 0.3%-0.5% (unmasked) to 2.4%-5% (masked).
      • False acceptance performance: As most systems are configured with a fixed threshold, it is necessary to report both false negative and false positive rates for each group at that threshold. When comparing a masked probe to an unmasked enrollment photo, in most cases, false match rates (FMR) are reduced by masks. The effect is generally modest with reductions in FMR usually being smaller than a factor of two. This property is valuable in that masked probes do not impart adverse false match security consequences for verification.
      • Mask-agnostic face recognition: All 1:1 verification algorithms submitted to the FRVT test since the start of the pandemic are evaluated on both masked and unmasked datasets. The test is designed this way to mimic operational reality: some images will have masks, some will not (especially enrollment samples from a database or ID card). And to the extent that the use of protective masks will exist for some time, our test will continue to evaluate algorithmic capability on verifying all combinations of masked and unmasked faces.
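The error-rate terms NIST uses (FNMR and FMR at a fixed decision threshold) can be made concrete with a short sketch. The similarity scores and threshold below are invented for illustration and are not NIST's data or code; they simply show why masked probes, which tend to depress genuine-pair scores, raise false rejections while leaving false acceptances largely unaffected.

```python
# Illustrative sketch (not NIST's methodology): computing FNMR and FMR
# from comparison scores at a fixed threshold. Scores are hypothetical
# similarity values in [0, 1]; higher means more similar.

def fnmr(genuine_scores, threshold):
    """False non-match rate: fraction of same-person (genuine)
    comparisons that fall below the threshold and are rejected."""
    rejected = sum(1 for s in genuine_scores if s < threshold)
    return rejected / len(genuine_scores)

def fmr(impostor_scores, threshold):
    """False match rate: fraction of different-person (impostor)
    comparisons that meet or exceed the threshold and are accepted."""
    accepted = sum(1 for s in impostor_scores if s >= threshold)
    return accepted / len(impostor_scores)

# Hypothetical scores: masking tends to lower genuine-pair scores,
# which raises FNMR at the same fixed operating threshold.
genuine_unmasked = [0.91, 0.88, 0.95, 0.72, 0.85]
genuine_masked = [0.81, 0.55, 0.90, 0.48, 0.77]
impostors = [0.10, 0.35, 0.62, 0.20, 0.05]

t = 0.60  # fixed threshold, as in a deployed verification system
print(fnmr(genuine_unmasked, t))  # 0.0 — all unmasked genuine pairs accepted
print(fnmr(genuine_masked, t))    # 0.4 — two masked genuine pairs rejected
print(fmr(impostors, t))          # 0.2 — one impostor pair accepted
```

This is also why NIST reports both rates at the same threshold: lowering the threshold to recover masked genuine pairs would raise FMR, so each must be measured at the operating point actually deployed.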
  • The government in London has issued a progress report on its current cybersecurity strategy that has another year to run. The Paymaster General assessed how well the United Kingdom (UK) has implemented the National Cyber Security Strategy 2016 to 2021 and pointed to goals yet to be achieved. This assessment comes in the shadow of the pending exit of the UK from the European Union (EU) and Prime Minister Boris Johnson’s plans to increase the UK’s role in select defense issues, including cyber operations. The Paymaster General stated:
    • The global landscape has changed significantly since the publication of the National Cyber Security Strategy Progress Report in May 2019. We have seen unprecedented levels of disruption to our way of life that few would have predicted. The COVID-19 pandemic has increased our reliance on digital technologies – for our personal communications with friends and family and our ability to work remotely, as well as for businesses and government to continue to operate effectively, including in support of the national response.
    • These new ways of living and working highlight the importance of cyber security, which is also underlined by wider trends. An ever greater reliance on digital networks and systems, more rapid advances in new technologies, a wider range of threats, and increasing international competition on underlying technologies and standards in cyberspace, emphasise the need for good cyber security practices for individuals, businesses and government.
    • Although the scale and international nature of these changes present challenges, there are also opportunities. With the UK’s departure from the European Union in January 2020, we can define and strengthen Britain’s place in the world as a global leader in cyber security, as an independent, sovereign nation.
    • The sustained, strategic investment and whole of society approach delivered so far through the National Cyber Security Strategy has ensured we are well placed to respond to this changing environment and seize new opportunities.
    • The Paymaster General asserted:
      • [The] report has highlighted growing risks, some accelerated by the COVID-19 pandemic, and longer-term trends that will shape the environment over the next decade:
      • Ever greater reliance on digital networks and systems as daily life moves online, bringing huge benefits but also creating new systemic and individual risks.
      • Rapid technological change and greater global competition, challenging our ability to shape the technologies that will underpin our future security and prosperity.
      • A wider range of adversaries as criminals gain easier access to commoditised attack capabilities and cyber techniques form a growing part of states’ toolsets.
      • Competing visions for the future of the internet and the risk of fragmentation, making consensus on norms and ethics in cyberspace harder to achieve.
      • In February 2020 the Prime Minister announced the Integrated Review of Security, Defence, Development and Foreign Policy. This will define the government’s ambition for the UK’s role in the world and the long-term strategic aims of our national security and foreign policy. It will set out the way in which the UK will be a problem-solving and burden-sharing nation, and a strong direction for recovery from COVID-19, at home and overseas.
      • This will help to shape our national approach and priorities on cyber security beyond 2021. Cyber security is a key element of our international, defence and security posture, as well as a driving force for our economic prosperity.
  • The University of Toronto’s Citizen Lab published a report on an Israeli surveillance firm that uses “[o]ne of the widest-used—but least appreciated” means of surveilling people (i.e., “leveraging of weaknesses in the global mobile telecommunications infrastructure to monitor and intercept phone calls and traffic”). Citizen Lab explained that an affiliate of the NSO Group, “Circles is known for selling systems to exploit Signaling System 7 (SS7) vulnerabilities, and claims to sell this technology exclusively to nation-states.” Citizen Lab noted that “[u]nlike NSO Group’s Pegasus spyware, the SS7 mechanism by which Circles’ product reportedly operates does not have an obvious signature on a target’s phone, such as the telltale targeting SMS bearing a malicious link that is sometimes present on a phone targeted with Pegasus.” Citizen Lab found that
    • Circles is a surveillance firm that reportedly exploits weaknesses in the global mobile phone system to snoop on calls, texts, and the location of phones around the globe. Circles is affiliated with NSO Group, which develops the oft-abused Pegasus spyware.
    • Circles, whose products work without hacking the phone itself, says they sell only to nation-states. According to leaked documents, Circles customers can purchase a system that they connect to their local telecommunications companies’ infrastructure, or can use a separate system called the “Circles Cloud,” which interconnects with telecommunications companies around the world.
    • According to the U.S. Department of Homeland Security, all U.S. wireless networks are vulnerable to the types of weaknesses reportedly exploited by Circles. A majority of networks around the globe are similarly vulnerable.
    • Using Internet scanning, we found a unique signature associated with the hostnames of Check Point firewalls used in Circles deployments. This scanning enabled us to identify Circles deployments in at least 25 countries.
    • We determine that the governments of the following countries are likely Circles customers: Australia, Belgium, Botswana, Chile, Denmark, Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala, Honduras, Indonesia, Israel, Kenya, Malaysia, Mexico, Morocco, Nigeria, Peru, Serbia, Thailand, the United Arab Emirates (UAE), Vietnam, Zambia, and Zimbabwe.
    • Some of the specific government branches we identify with varying degrees of confidence as being Circles customers have a history of leveraging digital technology for human rights abuses. In a few specific cases, we were able to attribute the deployment to a particular customer, such as the Internal Security Operations Command (ISOC) of the Royal Thai Army, which has allegedly tortured detainees.
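Citizen Lab's approach of identifying deployments via internet-wide scanning can be sketched roughly as follows. Everything here is invented for illustration: the hostnames, the signature pattern, and the country codes are hypothetical, and the sketch does not reproduce Citizen Lab's actual Check Point fingerprint.

```python
import re
from collections import defaultdict

# Hypothetical scan records: (hostname, country) pairs of the sort an
# internet-scanning service might return for firewall endpoints.
SCAN_RESULTS = [
    ("client1.example-telco.net", "TH"),
    ("fw.circles-like-signature.example", "TH"),
    ("fw.circles-like-signature.example", "MX"),
    ("mail.unrelated-host.example", "US"),
    ("fw.circles-like-signature.example", "AE"),
]

# A "signature" is simply a pattern matching hostnames observed only
# in the deployments under study (this pattern is made up).
SIGNATURE = re.compile(r"\.circles-like-signature\.example$")

def deployments_by_country(results, signature):
    """Group signature-matching hosts by country to estimate where a
    product has been deployed, as in signature-based scanning."""
    hits = defaultdict(set)
    for hostname, country in results:
        if signature.search(hostname):
            hits[country].add(hostname)
    return dict(hits)

matches = deployments_by_country(SCAN_RESULTS, SIGNATURE)
print(sorted(matches))  # → ['AE', 'MX', 'TH']
```

The key design point is that the signature lives in infrastructure visible to anyone scanning the internet, which is why this technique can surface deployments that leave no trace on a target's phone.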
  • Senators Ron Wyden (D-OR), Elizabeth Warren (D-MA), Edward J. Markey (D-MA), and Brian Schatz (D-HI) “announced that the Department of Homeland Security (DHS) will launch an inspector general investigation into Customs and Border Protection’s (CBP) warrantless tracking of phones in the United States following an inquiry from the senators earlier this year” per their press release.
    • The Senators added:
      • As revealed by public contracts, CBP has paid a government contractor named Venntel nearly half a million dollars for access to a commercial database containing location data mined from applications on millions of Americans’ mobile phones. CBP officials also confirmed the agency’s warrantless tracking of phones in the United States using Venntel’s product in a September 16, 2020 call with Senate staff.
      • In 2018, the Supreme Court held in Carpenter v. United States that the collection of significant quantities of historical location data from Americans’ cell phones is a search under the Fourth Amendment and therefore requires a warrant.
      • In September 2020, Wyden and Warren successfully pressed for an inspector general investigation into the Internal Revenue Service’s use of Venntel’s commercial location tracking service without a court order.
    • In a letter, the DHS Office of the Inspector General (OIG) explained:
      • We have reviewed your request and plan to initiate an audit that we believe will address your concerns. The objective of our audit is to determine if the Department of Homeland Security (DHS) and it [sic] components have developed, updated, and adhered to policies related to cell-phone surveillance devices. In addition, you may be interested in our audit to review DHS’ use and protection of open source intelligence. Open source intelligence, while different from cell phone surveillance, includes the Department’s use of information provided by the public via cellular devices, such as social media status updates, geo-tagged photos, and specific location check-ins.
    • In an October letter, these Senators plus Senator Sherrod Brown (D-OH) argued:
      • CBP is not above the law and it should not be able to buy its way around the Fourth Amendment. Accordingly, we urge you to investigate CBP’s warrantless use of commercial databases containing Americans’ information, including but not limited to Venntel’s location database. We urge you to examine what legal analysis, if any, CBP’s lawyers performed before the agency started to use this surveillance tool. We also request that you determine how CBP was able to begin operational use of Venntel’s location database without the Department of Homeland Security Privacy Office first publishing a Privacy Impact Assessment.
  • The American Civil Liberties Union (ACLU) has filed a lawsuit in a federal court in New York City, seeking an order to compel the United States (U.S.) Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE) “to release records about their purchases of cell phone location data for immigration enforcement and other purposes.” The ACLU made these information requests after numerous media accounts showed that these and other U.S. agencies were buying location data and other sensitive information in ways intended to evade the Fourth Amendment’s bar against unreasonable searches.
    • In its press release, the ACLU asserted:
      • In February, The Wall Street Journal reported that this sensitive location data isn’t just for sale to commercial entities, but is also being purchased by U.S. government agencies, including by U.S. Immigration and Customs Enforcement to locate and arrest immigrants. The Journal identified one company, Venntel, that was selling access to a massive database to the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and ICE. Subsequent reporting has identified other companies selling access to similar databases to DHS and other agencies, including the U.S. military.
      • These practices raise serious concerns that federal immigration authorities are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. There’s even more reason for alarm when those agencies evade requests for information — including from U.S. senators — about such practices. That’s why today we asked a federal court to intervene and order DHS, CBP, and ICE to release information about their purchase and use of precise cell phone location information. Transparency is the first step to accountability.
    • The ACLU explained in the suit:
      • Multiple news sources have confirmed these agencies’ purchase of access to databases containing precise location information for millions of people—information gathered by applications (apps) running on their smartphones. The agencies’ purchases raise serious concerns that they are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. Yet, more than nine months after the ACLU submitted its FOIA request (“the Request”), these agencies have produced no responsive records. The information sought is of immense public significance, not only to shine a light on the government’s use of powerful location-tracking data in the immigration context, but also to assess whether the government’s purchase of this sensitive data complies with constitutional and legal limitations and is subject to appropriate oversight and control.
  • Facebook’s new Oversight Board announced “the first cases it will be deliberating and the opening of the public comment process” and “the appointment of five new trustees.” Almost all of the cases were referred by Facebook users, and the new board is asking for comments on the right way to manage potentially objectionable content. The Oversight Board explained it is “prioritizing cases that have the potential to affect lots of users around the world, are of critical importance to public discourse or raise important questions about Facebook’s policies.”
    • The new trustees are:
      • Kristina Arriaga is a globally recognized advocate for freedom of expression, with a focus on freedom of religion and belief. Kristina is president of the advisory firm Intrinsic.
      • Cherine Chalaby is an expert on internet governance, international finance and technology, with extensive board experience. As Chairman of ICANN, he led development of the organization’s five-year strategic plan for 2021 to 2025.
      • Wanda Felton has over 30 years of experience in the financial services industry, including serving as Vice Chair of the Board and First Vice President of the Export-Import Bank of the United States.
      • Kate O’Regan is a former judge of the Constitutional Court of South Africa and commissioner of the Khayelitsha Commission. She is the inaugural director of the Bonavero Institute of Human Rights at the University of Oxford.
      • Robert Post is an American legal scholar and Professor of Law at Yale Law School, where he formerly served as Dean. He is a leading scholar of the First Amendment and freedom of speech.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (7 December)

Further Reading

  • Facebook steps up campaign to ban false information about coronavirus vaccines” By Elizabeth Dwoskin — The Washington Post. In its latest step to find and remove lies, misinformation, and disinformation, the social media giant is now committing to removing and blocking untrue material about COVID-19 vaccines, especially from the anti-vaccine community. Will the next step be to take on anti-vaccination proponents generally?
  • Comcast’s 1.2 TB data cap seems like a ton of data—until you factor in remote work” By Rob Pegoraro — Fast Company. Despite many people and children working and learning from home, Comcast is reimposing a 1.2 terabyte limit on data for homes. Sounds like quite a lot until you factor in video meetings, streaming, etc. So far, other providers have not set a cap.
  • Google’s star AI ethics researcher, one of a few Black women in the field, says she was fired for a critical email” By Drew Harwell and Nitasha Tiku — The Washington Post. Timnit Gebru, a top-flight artificial intelligence (AI) computer scientist, was ousted after questioning Google’s internal review of a paper, likely critical of the company’s AI projects, that she wanted to present at an AI conference. Google claims she resigned, but Gebru says she was fired. She has long been an advocate for women and minorities in tech and AI, and her ouster will likely only increase scrutiny of and questions about Google’s commitment to diversity and an ethical approach to the development and deployment of AI. It will also probably deepen employee disenchantment with the company, following earlier protests over Google’s involvement with the United States Department of Defense’s Project Maven and its hiring of former United States Department of Homeland Security chief of staff Miles Taylor, who was involved with the policies that resulted in caging children and separating families at the southern border of the United States.
  • Humans Can Help Clean Up Facebook and Twitter” By Greg Bensinger — The New York Times. This opinion piece argues that if social media platforms redeployed their human monitors to the accounts that violate terms of service most frequently (e.g., President Donald Trump) and more aggressively labeled and removed untrue or inflammatory content, they would have a greater impact on lies, misinformation, and disinformation.
  • Showdown looms over digital services tax” By Ashley Gold — Axios. Because the Organization for Economic Cooperation and Development (OECD) has not reached a deal on digital services taxes, a number of United States (U.S.) allies could move forward with taxes on U.S. multinationals like Amazon, Google, and Apple. The Trump Administration has variously taken an adversarial position, threatening to retaliate against countries like France, which has enacted a tax that has not been collected during the OECD negotiations. The U.S. also withdrew from talks. It is probable the Biden Administration will be more willing to work in a multilateral fashion and may strike a deal on an issue that is not going away, as the United Kingdom, Italy, and Canada also have plans for a digital tax.
  • Trump’s threat to veto defense bill over social-media protections is heading to a showdown with Congress” By Karoun Demirjian and Tony Romm — The Washington Post. I suppose I should mention the President’s demand that the FY 2021 National Defense Authorization Act (NDAA) contain a repeal of 47 U.S.C. 230 (Section 230 of the Communications Act), which came at the eleventh hour and fifty-ninth minute of negotiations on a final version of the bill. Via Twitter, Donald Trump threatened to veto the bill, which has been passed annually for decades. Republicans were not having it, however, even if some of them agree with Trump’s desire to remove liability protection for technology companies. And yet, if Trump continues to insist on a repeal, Republicans may find themselves in a bind and the bill could conceivably get pulled until President-elect Joe Biden is sworn in. On the other hand, Trump’s veto threats about renaming military bases currently bearing the names of Confederate figures have not been renewed even though the final version of the bill contains language instituting a process to do just that.

Other Developments

  • The Senate Judiciary Committee held over its most recent bill to narrow 47 U.S.C. 230 (Section 230 of the Communications Act), which provides liability protection for technology companies for third-party material posted on their platforms and any decisions to edit, alter, or remove such content. The committee opted to hold the “Online Content Policy Modernization Act” (S.4632), which may mean the bill’s chances of making it to the Senate floor are low. What’s more, even if the Senate passes Section 230 legislation, it is not clear there will be sufficient agreement with Democrats in the House to get a final bill to the President before the end of this Congress. On 1 October, the committee also decided to hold over the bill to try to reconcile the fifteen amendments submitted for consideration. The Committee could soon meet again to formally markup and report out this legislation.
    • At the earlier hearing, Chair Lindsey Graham (R-SC) submitted an amendment revising the bill’s reforms to Section 230 that incorporates some of the amendments below but includes new language. For example, the bill includes a definition of “good faith,” a term not currently defined in Section 230. This term would be construed as a platform taking down or restricting content only according to its publicly available terms of service, not as a pretext, and equally to all similarly situated content. Moreover, good faith would require alerting the user and giving him or her an opportunity to respond, subject to certain exceptions. The amendment also makes clear that certain existing means of suing are still available to users (e.g., suing for breach of contract).
    • Senator Mike Lee (R-UT) offered a host of amendments:
      • EHF20913 would remove “user[s]” from the reduced liability shield that online platforms would receive under the bill. Consequently, users would still not be legally liable for the content posted by another user.
      • EHF20914 would revise the language regarding the type of content platforms could take down with legal protection to make clear it would not just be “unlawful” content but rather content “in violation of a duly enacted law of the United States,” possibly meaning federal laws and not state laws. Or, more likely, the intent would be to foreclose the possibility a platform would say it is acting in concert with a foreign law and still assert immunity.
      • EHF20920 would add language making clear that taking down material that violates terms of service or use according to an objectively reasonable belief would be shielded from liability.
      • OLL20928 would expand legal protection to platforms for removing or restricting spam.
      • OLL20929 would bar the Federal Communications Commission (FCC) from a rulemaking on Section 230.
      • OLL20930 adds language making clear if part of the revised Section 230 is found unconstitutional, the rest of the law would still be applicable.
      • OLL20938 revises the definition of an “information content provider,” the Section 230 term of art for a party responsible for the creation or development of content, to expand when platforms may be deemed responsible for the creation or development of information and consequently liable to suit.
    • Senator Josh Hawley (R-MO) offered an amendment that would create a new right of action allowing people to sue large platforms for taking down their content if not done in “good faith.” The amendment limits this right to “edge providers,” platforms with more than 30 million users in the U.S., 300 million users worldwide, and revenues of more than $1.5 billion. This would likely exclude all platforms except Twitter, Facebook, Instagram, TikTok, Snapchat, and a select few others.
    • Senator John Kennedy (R-LA) offered an amendment that would remove all Section 230 legal immunity from platforms that collect personal data and then use an “automated function” to deliver targeted or tailored content to a user unless the user “knowingly and intentionally elect[s]” to receive such content.
  • The Massachusetts Institute of Technology’s (MIT) Work of the Future Task Force issued its final report and drew the following conclusions:
    • Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.
    • Momentous impacts of technological change are unfolding gradually.
    • Rising labor productivity has not translated into broad increases in incomes because labor market institutions and policies have fallen into disrepair.
    • Improving the quality of jobs requires innovation in labor market institutions.
    • Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.
    • Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.
    • The Task Force stated:
      • In the two-and-a-half years since the Task Force set to work, autonomous vehicles, robotics, and AI have advanced remarkably. But the world has not been turned on its head by automation, nor has the labor market. Despite massive private investment, technology deadlines have been pushed back, part of a normal evolution as breathless promises turn into pilot trials, business plans, and early deployments — the diligent, if prosaic, work of making real technologies work in real settings to meet the demands of hard-nosed customers and managers.
      • Yet, if our research did not confirm the dystopian vision of robots ushering workers off of factory floors or artificial intelligence rendering superfluous human expertise and judgment, it did uncover something equally pernicious: Amidst a technological ecosystem delivering rising productivity, and an economy generating plenty of jobs (at least until the COVID-19 crisis), we found a labor market in which the fruits are so unequally distributed, so skewed towards the top, that the majority of workers have tasted only a tiny morsel of a vast harvest.
      • As this report documents, the labor market impacts of technologies like AI and robotics are taking years to unfold. But we have no time to spare in preparing for them. If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market. This report, and the MIT Work of the Future Task Force, suggest a better alternative: building a future for work that harvests the dividends of rapidly advancing automation and ever-more powerful computers to deliver opportunity and economic security for workers. To channel the rising productivity stemming from technological innovations into broadly shared gains, we must foster institutional innovations that complement technological change.
  • The European Data Protection Supervisor (EDPS) Wojciech Wiewiorówski published his preliminary opinion on the European Commission’s (EC) Communication on “A European strategy for data” and the creation of a common data space in the area of health, namely the European Health Data Space (EHDS). The EDPS lauded the goal of the EHDS: “the prevention, detection and cure of diseases, as well as for evidence-based decisions in order to enhance effectiveness, accessibility and sustainability of the healthcare systems.” However, Wiewiorówski argued that the EC needs to think through the applicability of the General Data Protection Regulation (GDPR), among other European Union (EU) laws, before it can legally move forward. The EDPS stated:
    • The EDPS calls for the establishment of a thought-through legal basis for the processing operations under the EHDS in line with Article 6(1) GDPR and also recalls that such processing must comply with Article 9 GDPR for the processing of special categories of data.
    • Moreover, the EDPS highlights that due to the sensitivity of the data to be processed within the EHDS, the boundaries of what constitutes a lawful processing and a compatible further processing of the data must be crystal-clear for all the stakeholders involved. Therefore, the transparency and the public availability of the information relating to the processing on the EHDS will be key to enhance public trust in the EHDS.
    • The EDPS also calls on the Commission to clarify the roles and responsibilities of the parties involved and to clearly identify the precise categories of data to be made available to the EHDS. Additionally, he calls on the Member States to establish mechanisms to assess the validity and quality of the sources of the data.
    • The EDPS underlines the importance of vesting the EHDS with a comprehensive security infrastructure, including both organisational and state-of-the-art technical security measures to protect the data fed into the EHDS. In this context, he recalls that Data Protection Impact Assessments may be a very useful tool to determine the risks of the processing operations and the mitigation measures that should be adopted.
    • The EDPS recommends paying special attention to the ethical use of data within the EHDS framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation.
    • The EDPS is convinced that the success of the EHDS will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights. The governance mechanism should regulate, at least, the entities that will be allowed to make data available to the EHDS, the EHDS users, the Member States’ national contact points/permit authorities, and the role of DPAs within this context.
    • The EDPS is interested in policy initiatives to achieve ‘digital sovereignty’ and has a preference for data being processed by entities sharing European values, including privacy and data protection. Moreover, the EDPS calls on the Commission to ensure that the stakeholders taking part in the EHDS, and in particular, the controllers, do not transfer personal data unless data subjects whose personal data are transferred to a third country are afforded a level of protection essentially equivalent to that guaranteed within the European Union.
    • The EDPS calls on Member States to guarantee the effective implementation of the right to data portability specifically in the EHDS, together with the development of the necessary technical requirements. In this regard, he considers that a gap analysis might be required regarding the need to integrate the GDPR safeguards with other regulatory safeguards, provided e.g. by competition law or ethical guidelines.
  • The Office of Management and Budget (OMB) extended a guidance memorandum directing agencies to consolidate data centers after Congress pushed back the sunset date for the program. OMB extended OMB Memorandum M-19-19, Update to Data Center Optimization Initiative (DCOI), through 30 September 2022; the memorandum applies “to the 24 Federal agencies covered by the Chief Financial Officers (CFO) Act of 1990, which includes the Department of Defense.” The DCOI was codified in the “Federal Information Technology Acquisition Reform” (FITARA) (P.L. 113-291) and extended in 2018 until 1 October 2020; this sunset was pushed back another two years in the FY 2020 National Defense Authorization Act (NDAA) (P.L. 116-92).
    • In March 2020, the Government Accountability Office (GAO) issued another of its periodic assessments of the DCOI, started in 2012 by the Obama Administration to shrink the federal government’s footprint of data centers, increase efficiency and security, save money, and reduce energy usage.
    • The GAO found that 23 of the 24 agencies participating in the DCOI met or planned to meet their FY 2019 goals to close 286 of the 2,727 data centers considered part of the DCOI. This latter figure deserves some discussion, for the Trump Administration changed the definition of what is a data center to exclude smaller ones (so-called non-tiered data centers). GAO asserted that “recent OMB DCOI policy changes will reduce the number of data centers covered by the policy and both OMB and agencies may lose important visibility over the security risks posed by these facilities.” Nonetheless, these agencies are projecting savings of $241.5 million when all the 286 data centers planned for closure in FY 2019 actually close. It bears note that the GAO admitted in a footnote it “did not independently validate agencies’ reported cost savings figures,” so these numbers may not be reliable.
    • In terms of how to improve the DCOI, the GAO stated that “[i]n addition to reiterating our prior open recommendations to the agencies in our review regarding their need to meet DCOI’s closure and savings goals and optimization metrics, we are making a total of eight new recommendations—four to OMB and four to three of the 24 agencies. Specifically:”
      • The Director of the Office of Management and Budget should (1) require that agencies explicitly document annual data center closure goals in their DCOI strategic plans and (2) track those goals on the IT Dashboard. (Recommendation 1)
      • The Director of the Office of Management and Budget should require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI. (Recommendation 2)
      • The Director of the Office of Management and Budget should document OMB’s decisions on whether to approve individual data centers when designated by agencies as either a mission critical facility or as a facility not subject to DCOI. (Recommendation 3)
      • The Director of the Office of Management and Budget should take action to address the key performance measurement characteristics missing from the DCOI optimization metrics, as identified in this report. (Recommendation 4)
  • Australia’s Inspector-General of Intelligence and Security (IGIS) released its first report on how well the nation’s security services observed the law with respect to COVID app data. The IGIS “is satisfied that the relevant agencies have policies and procedures in place and are taking reasonable steps to avoid intentional collection of COVID app data.” The IGIS revealed that “[i]ncidental collection in the course of the lawful collection of other data has occurred (and is permitted by the Privacy Act); however, there is no evidence that any agency within IGIS jurisdiction has decrypted, accessed or used any COVID app data.” The IGIS is also “satisfied that the intelligence agencies within IGIS jurisdiction which have the capability to incidentally collect at least some types of COVID app data:
    • Are aware of their responsibilities under Part VIIIA of the Privacy Act and are taking active steps to minimise the risk that they may collect COVID app data.
    • Have appropriate policies and procedures in place to respond to any incidental collection of COVID app data that they become aware of.
    • Are taking steps to ensure any COVID app data is not accessed, used or disclosed.
    • Are taking steps to ensure any COVID app data is deleted as soon as practicable.
    • Have not decrypted any COVID app data.
    • Are applying the usual security measures in place in intelligence agencies such that a ‘spill’ of any data, including COVID app data, is unlikely.
  • New Zealand’s Government Communications Security Bureau’s National Cyber Security Centre (NCSC) has released its annual Cyber Threat Report that found that “nationally significant organisations continue to be frequently targeted by malicious cyber actors of all types…[and] state-sponsored and non-state actors targeted public and private sector organisations to steal information, generate revenue, or disrupt networks and services.” The NCSC added:
    • Malicious cyber actors have shown their willingness to target New Zealand organisations in all sectors using a range of increasingly advanced tools and techniques. Newly disclosed vulnerabilities in products and services, alongside the adoption of new services and working arrangements, are rapidly exploited by state-sponsored actors and cyber criminals alike. A common theme this year, which emerged prior to the COVID-19 pandemic, was the exploitation of known vulnerabilities in internet-facing applications, including corporate security products, remote desktop services and virtual private network applications.
  • The former Director of the United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) wrote an opinion piece disputing President Donald Trump’s claims that the 2020 Presidential Election was fraudulent. Christopher Krebs asserted:
    • While I no longer regularly speak to election officials, my understanding is that in the 2020 results no significant discrepancies attributed to manipulation have been discovered in the post-election canvassing, audit and recount processes.
    • This point cannot be emphasized enough: The secretaries of state in Georgia, Michigan, Arizona, Nevada and Pennsylvania, as well as officials in Wisconsin, all worked overtime to ensure there was a paper trail that could be audited or recounted by hand, independent of any allegedly hacked software or hardware.
    • That’s why Americans’ confidence in the security of the 2020 election is entirely justified. Paper ballots and post-election checks ensured the accuracy of the count. Consider Georgia: The state conducted a full hand recount of the presidential election, a first of its kind, and the outcome of the manual count was consistent with the computer-based count. Clearly, the Georgia count was not manipulated, resoundingly debunking claims by the president and his allies about the involvement of CIA supercomputers, malicious software programs or corporate rigging aided by long-gone foreign dictators.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Daniel Schludi on Unsplash

Further Reading, Other Developments, and Coming Events (4 December)

Further Reading

  • How Misinformation ‘Superspreaders’ Seed False Election Theories” By Sheera Frenkel — The New York Times. A significant percentage of the lies, misinformation, and disinformation about the legitimacy of the election has been disseminated by a small number of right-wing figures and then repeated, reposted, and retweeted by others. The Times relies on research into how much engagement people like President Donald Trump and Dan Bongino get on Facebook after posting untrue claims about the election, and it turns out that such trends and rumors do not start spontaneously.
  • Facebook Said It Would Ban Holocaust Deniers. Instead, Its Algorithm Provided a Network for Them” By Aaron Sankin — The Markup. This news organization still found Holocaust denial material promoted by Facebook’s algorithm even though the platform recently said it was taking down such material. This result may point to the difficulty of policing objectionable material that uses coded language and/or the platform’s failure to devote sufficient resources to weeding out this sort of content.
  • What Facebook Fed the Baby Boomers” By Charlie Warzel — The New York Times. A dispiriting trip inside two people’s Facebook feeds. This article makes the very good point that comments are not moderated, and these tend to be significant sources of vitriol and disinformation.
  • How to ‘disappear’ on Happiness Avenue in Beijing” By Vincent Ni and Yitsing Wang — BBC. By next year, the People’s Republic of China (PRC) may have as many as 560 million security cameras, and one artist ran an experiment of sorts to see if a group of people could walk down a major street in the capital without being seen by a camera or without their face being seen at places with lots of cameras.
  • Patients of a Vermont Hospital Are Left ‘in the Dark’ After a Cyberattack” By Ellen Barry and Nicole Perlroth — The New York Times. A Russian hacking outfit may have struck back after the Department of Defense’s (DOD) Cyber Command and Microsoft struck them. A number of hospitals were hacked, and care was significantly disrupted. This dynamic may lend itself to arguments that the United States (U.S.) would be wise to curtail its offensive operations.
  • EU seeks anti-China alliance on tech with Biden” By Jakob Hanke Vela and David M. Herszenhorn — Politico. The European Union (EU) is hoping the United States (U.S.) will be more amenable to working together in the realm of future technology policy, especially against the People’s Republic of China (PRC) which has made a concerted effort to drive the adoption of standards that favor its companies (e.g., the PRC pushed for and obtained 5G standards that will favor Huawei). Diplomatically speaking, this is considered low-hanging fruit, and a Biden Administration will undoubtedly be more multilateral than the Trump Administration.
  • Can We Make Our Robots Less Biased Than We Are?” By David Berreby — The New York Times. The bias present in facial recognition technology and artificial intelligence is making its way into robotics, raising the question of how to change this. Many African American and other minority scientists are calling for the inclusion of people of color in designing such systems as a countermeasure to the usual bias toward white men.

Other Developments

  • The top Democrat on the Senate Homeland Security and Governmental Affairs Committee, Senator Gary Peters (D-MI), wrote President Donald Trump and “slammed the Trump Administration for their lack of action against foreign adversaries, including Russia, China, and North Korea, that have sponsored cyber-attacks against American hospitals and research institutions in an effort to steal information related to development of Coronavirus vaccines.” Peters used unusually strong language, as Members of Congress typically tone down the rhetoric and deploy coded language to signal their level of displeasure about administration action or inaction. Peters may well feel strongly about what he perceives to be Trump Administration indifference to the cyber threats facing institutions researching and developing COVID-19 vaccines, but this is also an issue on which he may be trying to split Republicans, placing them in the difficult position of lining up behind a president disinclined to prioritize some cyber issues or breaking ranks with him.
    • Peters stated:
      • I urge you, again, to send a strong message to any foreign government attempting to hack into our medical institutions that this behavior is unacceptable. The Administration should use the tools at its disposal, including the threat of sanctions, to deter future attacks against research institutions. In the event that any foreign government directly threatens the lives of Americans through attacks on medical facilities, other Department of Defense capabilities should be considered to make it clear that there will be consequences for these actions.
  • A United States federal court has ruled against Trump Administration appointee Michael Pack and the United States Agency for Global Media (USAGM) and their attempts to interfere illegally with the independence of government-funded news organizations such as the Voice of America (VOA). The District Court for the District of Columbia enjoined Pack and the USAGM from a list of actions that VOA and USAGM officials claim are contrary to the First Amendment and the organization’s mission.
  • The Federal Trade Commission (FTC) is asking a United States federal court to compel former Trump White House advisor Steve Bannon to appear for questioning per a Civil Investigative Demand (CID) as part of its ongoing probe of Cambridge Analytica’s role in misusing personal data of Facebook users in the 2016 Presidential Election. The FTC noted it “issued the CID to determine, among other things, whether Bannon may be held individually liable for the deceptive conduct of Cambridge Analytica, LLC—the subject of an administrative law enforcement action brought by the Commission.” There had been an interview scheduled in September but the day before it was to take place, Bannon’s lawyers informed the FTC he would not be attending.
    • In 2019, the FTC settled with former Cambridge Analytica CEO Alexander Nix and app developer Aleksandr Kogan in “administrative orders restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC did not, however, settle with the company itself. The agency alleged “that Cambridge Analytica, Nix, and Kogan deceived consumers by falsely claiming they did not collect any personally identifiable information from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.” Facebook settled with the FTC for a record $5 billion for its role in the Cambridge Analytica scandal and for how it violated its 2012 consent order with the agency.
  • Apple responded to a group of human rights and civil liberties organizations about its plans to deploy technology on its operating system that allows users greater control of their privacy. Apple confirmed that its App Tracking Transparency (ATT) would be made part of its iOS early next year and would present users of Apple products with a prompt warning about how their information may be used by the app developer. ATT would stop app developers from tracking users when they use other apps on a device. Companies like Facebook have objected, claiming that the change is a direct shot at them and their revenue. Apple does not reap a significant revenue stream from collecting, combining, and processing user data, whereas Facebook does. Facebook also tracks users across devices and across apps on a device through a variety of means.
    • Apple stated:
      • We delayed the release of ATT to early next year to give developers the time they indicated they needed to properly update their systems and data practices, but we remain fully committed to ATT and to our expansive approach to privacy protections. We developed ATT for a single reason: because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.
      • ATT doesn’t ban the reasonable collection of user data for app functionality or even for advertising. Just as with the other data-access permissions we have added over many software releases, developers will be able to explain why they want to track users both before the ATT prompt is shown and in the prompt itself. At that point, users will have the freedom to make their own choice about whether to proceed. This privacy innovation empowers consumers — not Apple — by simply making it clear what their options are, and giving them the information and power to choose.
    • As mentioned, a number of groups wrote Apple in October “to express our disappointment that Apple is delaying the full implementation of iOS 14’s anti-tracking features until early 2021.” They argued:
      • These features will constitute a vital policy improvement with the potential to strengthen respect for privacy across the industry. Apple should implement these features as expeditiously as possible.
      • We were heartened by Apple’s announcement that starting with the iOS 14 update, all app developers will be required to provide information that will help users understand the privacy implications of an app before they install it, within the App Store interface.
      • We were also pleased that iOS 14 users would be required to affirmatively opt in to app tracking, on an app-by-app basis. Along with these changes, we urge Apple to verify the accuracy of app policies, and to publish transparency reports showing the number of apps that are rejected and/or removed from the App Store due to inadequate or inaccurate policies.
  • The United States (U.S.) Government Accountability Office (GAO) sent its assessment of the privacy notices and practices of U.S. banks and credit unions to the chair of the Senate committee that oversees this issue. Senate Banking, Housing, and Urban Affairs Committee Chair Mike Crapo (R-ID) had asked the GAO “to examine the types of personal information that financial institutions collect, use, and share; how they make consumers aware of their information-sharing practices; and federal regulatory oversight of these activities.” The GAO found that a ten-year-old model privacy disclosure form used across these industries may comply with the prevailing federal requirements but no longer encompasses the breadth and scope of how the personal information of people is collected, processed, and used. The GAO called on the Consumer Financial Protection Bureau (CFPB) to update this form. The GAO explained:
    • Banks and credit unions collect, use, and share consumers’ personal information—such as income level and credit card transactions—to conduct everyday business and market products and services. They share this information with a variety of third parties, such as service providers and retailers.
    • The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to provide consumers with a privacy notice describing their information-sharing practices. Many banks and credit unions elect to use a model form—issued by regulators in 2009—which provides a safe harbor for complying with the law (see figure). GAO found the form gives a limited view of what information is collected and with whom it is shared. Consumer and privacy groups GAO interviewed cited similar limitations. The model form was issued over 10 years ago. The proliferation of data-sharing since then suggests a reassessment of the form is warranted. Federal guidance states that notices about information collection and usage are central to providing privacy protections and transparency.
    • Since Congress transferred authority to the CFPB for implementing GLBA privacy provisions, the agency has not reassessed if the form meets consumer expectations for disclosures of information-sharing. CFPB officials said they had not considered a reevaluation because they had not heard concerns from industry or consumer groups about privacy notices. Improvements to the model form could help ensure that consumers are better informed about all the ways banks and credit unions collect and share personal information.
    • The increasing amounts of and changing ways in which industry collects and shares consumer personal information—including from online activities—highlights the importance of clearly disclosing practices for collection, sharing, and use. However, our work shows that banks and credit unions generally used the model form, which was created more than 10 years ago, to make disclosures required under GLBA. As a result, the disclosures often provided a limited view of how banks and credit unions collect, use, and share personal information.
    • We recognize that the model form is required to be succinct, comprehensible to consumers, and allow for comparability across institutions. But, as information practices continue to change or expand, consumer insights into those practices may become even more limited. Improvements and updates to the model privacy form could help ensure that consumers are better informed about all the ways that banks and credit unions collect, use, and share personal information. For instance, in online versions of privacy notices, there may be opportunities for readers to access additional details—such as through hyperlinks—in a manner consistent with statutory requirements.
  • The Australian Competition & Consumer Commission (ACCC) is asking for feedback on Google’s proposed $2.1 billion acquisition of Fitbit. In a rather pointed statement, the chair of the ACCC, Rod Sims, made clear “[o]ur decision to begin consultation should not be interpreted as a signal that the ACCC will ultimately accept the undertaking and approve the transaction.” The buyout is also under scrutiny in the European Union (EU) and may be affected by the suit the United States Department of Justice (DOJ) and some states have brought against the company for anti-competitive behavior. The ACCC released a Statement of Issues in June about the proposed deal.
    • The ACCC explained “[t]he proposed undertaking would require Google to:
      • not use certain user data collected through Fitbit and Google wearables for Google’s advertising purposes for 10 years, with an option for the ACCC to extend this obligation by up to a further 10 years;
      • maintain access for third parties, such as health and fitness apps, to certain user data collected through Fitbit and Google wearable devices for 10 years; and
      • maintain levels of interoperability between third party wearables and Android smartphones for 10 years.
    • In August, the EU “opened an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The European Commission (EC) expressed its concerns “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC stated “[a]t this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.
    • The EC explained that it “will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed…[and] will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.
    • Amnesty International (AI) sent EC Executive Vice-President Margrethe Vestager a letter, arguing “[t]he merger risks further extending the dominance of Google and its surveillance-based business model, the nature and scale of which already represent a systemic threat to human rights.” AI asserted “[t]he deal is particularly troubling given the sensitive nature of the health data that Fitbit holds that would be acquired by Google.” AI argued “[t]he Commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”
  • Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro have cooperated on a report that looks "into current and predicted criminal uses of artificial intelligence (AI)."
    • The organizations argued “AI could be used to support:
      • convincing social engineering attacks at scale;
      • document-scraping malware to make attacks more efficient;
      • evasion of image recognition and voice biometrics;
      • ransomware attacks, through intelligent targeting and evasion;
      • data pollution, by identifying blind spots in detection rules.
    • The organizations concluded:
      • Based on available insights, research, and a structured open-source analysis, this report covered the present state of malicious uses and abuses of AI, including AI malware, AI-supported password guessing, and AI-aided encryption and social engineering attacks. It also described concrete future scenarios ranging from automated content generation and parsing, AI-aided reconnaissance, smart and connected technologies such as drones and autonomous cars, to AI-enabled stock market manipulation, as well as methods for AI-based detection and defense systems.
      • Using one of the most visible malicious uses of AI — the phenomenon of so-called deepfakes — the report further detailed a case study on the use of AI techniques to manipulate or generate visual and audio content that would be difficult for humans or even technological solutions to immediately distinguish from authentic ones.
      • As speculated on in this paper, criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new, innovative criminal business models — all the while reducing their chances of being caught. Consequently, as “AI-as-a-Service” becomes more widespread, it will also lower the barrier to entry by reducing the skills and technical expertise required to facilitate attacks. In short, this further exacerbates the potential for AI to be abused by criminals and for it to become a driver of future crimes.
      • Although the attacks detailed here are mostly theoretical, crafted as proofs of concept at this stage, and although the use of AI to improve the effectiveness of malware is still in its infancy, it is plausible that malware developers are already using AI in more obfuscated ways without being detected by researchers and analysts. For instance, malware developers could already be relying on AI-based methods to bypass spam filters, escape the detection features of antivirus software, and frustrate the analysis of malware. In fact, DeepLocker, a tool recently introduced by IBM and discussed in this paper, already demonstrates these attack abilities that would be difficult for a defender to stop.
      • To add, AI could also enhance traditional hacking techniques by introducing new ways of performing attacks that would be difficult for humans to predict. These could include fully automated penetration testing, improved password-guessing methods, tools to break CAPTCHA security systems, or improved social engineering attacks. With respect to open-source tools providing such functionalities, the paper discussed some that have already been introduced, such as DeepHack, DeepExploit, and XEvil.
      • The widespread use of AI assistants, meanwhile, also creates opportunities for criminals who could exploit the presence of these assistants in households. For instance, criminals could break into a smart home by hijacking an automation system through exposed audio devices.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Further Reading, Other Developments, and Coming Events (3 November)

Further Reading

  • “How Facebook and Twitter plan to handle election day disinformation” By Sam Dean — The Los Angeles Times; “How Big Tech is planning for election night” By Heather Kelly — The Washington Post; and “What to Expect From Facebook, Twitter and YouTube on Election Day” By Mike Isaac, Kate Conger and Daisuke Wakabayashi — The New York Times. Social media platforms have prepared for today and will use a variety of measures to combat lies, disinformation, and misinformation from sources foreign and domestic. Incidentally, my read is that these tech companies are more worried, as measured by resource allocation, about problematic domestic content.
  • “QAnon received earlier boost from Russian accounts on Twitter, archives show” By Joseph Menn — Reuters. Researchers have delved into Twitter’s archive of taken down or suspended Russian accounts and are now claiming that the Russian Federation was pushing and amplifying the QAnon conspiracy in the United States (U.S.) as early as November 2017. This revised view of Russian involvement differs from conclusions reached in August that Russia played a small but significant role in the proliferation of the conspiracy.
  • “Facial recognition used to identify Lafayette Square protester accused of assault” By Justin Jouvenal and Spencer S. Hsu — The Washington Post. When police were firing pepper balls and rolling gas canisters towards protestors in Lafayette Park in Washington, D.C. on 1 June 2020, a protestor was filmed assaulting two officers. A little-known, never publicly revealed facial recognition platform available to many federal, state, and local law enforcement agencies in the Capital area matched the footage to a man who now stands accused of crimes during a Black Lives Matter march. Civil liberties and privacy advocates are objecting to the use of the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) on a number of grounds, including the demonstrated weakness of these systems in accurately identifying people of color, the fact that its use has not been disclosed to a number of defendants, and the potential chilling effect it could have on people attending protests. Law enforcement officials claim there are strict privacy and process safeguards and that an identification alone cannot be used as the basis of an arrest.
  • “CISA’s political independence from Trump will be an Election Day asset” By Joseph Marks — The Washington Post. The United States’ (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has established its bona fides with nearly all stakeholders inside and outside the U.S. government since its creation in 2018. Many credit its Director, Christopher Krebs, and many also praise the agency for conveying information and tips about cyber and other threats without incurring the ire of a White House and President antithetical to any hint of Russian hacking or interference in U.S. elections. However, today and the next few days may prove the agency’s mettle, as it will be in uncharted territory trying to tamp down fear about unfounded rumors and looking to discredit misinformation and disinformation in close to real time.
  • “Trump allies, largely unconstrained by Facebook’s rules against repeated falsehoods, cement pre-election dominance” By Isaac Stanley-Becker and Elizabeth Dwoskin — The Washington Post. Yet another article showing that if Facebook has a bias, it is against liberal viewpoints and content, for conservative material is allowed even if it breaks the rules of the platform. Current and former employees and an analysis support this finding. The Trump Family have been among those who have benefitted from the kid gloves used by the company regarding posts that would have likely resulted in consequences for other users. However, smaller conservative outlets and less prominent conservative figures have faced consequences for content that violates Facebook’s standards.

Other Developments

  • The European Union’s (EU) Executive Vice-President Margrethe Vestager made a speech titled “Building trust in technology,” in which she previewed one long-awaited draft EU law on technology and another to address antitrust and anti-competitive practices of large technology companies. Vestager stated “in just a few weeks, we plan to publish two draft laws that will help to create a more trustworthy digital world.” Both drafts are expected on 2 December and represent key pieces of the new EU leadership’s Digital Strategy, the bloc’s initiative to update EU laws to account for changes in technology since the beginning of the century. The Digital Services Act will address and reform the legal treatment of both online commerce and online content. The draft Digital Markets Act would give the European Commission (EC) more tools to combat the same competition and market dominance issues the United States (U.S.), Australia, and other nations are starting to tackle vis-à-vis companies like Apple, Amazon, Facebook, and Google.
    • Regarding the Digital Services Act, Vestager asserted:
      • One part of those rules will be the Digital Services Act, which will update the E-Commerce Directive, and require digital services to take more responsibility for dealing with illegal content and dangerous products. 
      • Those services will have to put clear and simple procedures in place, to deal with notifications about illegal material on their platforms. They’ll have to make it harder for dodgy traders to use the platform, by checking sellers’ IDs before letting them on the platform. And they’ll also have to provide simple ways for users to complain, if they think their material should not have been removed – protecting the right of legitimate companies to do business, and the right of individuals to freedom of expression. 
      • Those new responsibilities will help to keep Europeans just as safe online as they are in the physical world. They’ll protect legitimate businesses, which follow the rules, from being undercut by others who sell cheap, dangerous products. And by applying the same standards, all over Europe, they’ll make sure every European can rely on the same protection – and that digital businesses of all sizes can easily operate throughout Europe, without having to meet the costs of complying with different rules in different EU countries. 
      • The new rules will also require digital services – especially the biggest platforms – to be open about the way they shape the digital world that we see. They’ll have to report on what they’ve done to take down illegal material. They’ll have to tell us how they decide what information and products to recommend to us, and which ones to hide – and give us the ability to influence those decisions, instead of simply having them made for us. And they’ll have to tell us who’s paying for the ads that we see, and why we’ve been targeted by a certain ad.  
      • But to really give people trust in the digital world, having the right rules in place isn’t enough. People also need to know that those rules really work – that even the biggest companies will actually do what they’re supposed to. And to make sure that happens, there’s no substitute for effective enforcement.  
      • And effective enforcement is a vital part of the draft laws that we’ll propose in December. For instance, the Digital Services Act will improve the way national authorities cooperate, to make sure the rules are properly enforced, throughout the EU. Our proposal won’t change the fundamental principle, that digital services should be regulated by their home country. But it will set up a permanent system of cooperation that will help those regulators work more effectively, to protect consumers all across Europe. And it will give the EU power to step in, when we need to, to enforce the rules against some very large platforms.
    • Vestager also discussed the competition law the EC hopes to enact:
      • So, to keep our markets fair and open to competition, it’s vital that we have the right toolkit in place. And that’s what the second set of rules we’re proposing – what we call the Digital Markets Act – is for. 
      • That proposal will have two pillars. The first of those pillars will be a clear list of dos and don’ts for big digital gatekeepers, based on our experience with the sorts of behaviour that can stop markets working well. 
      • For instance, the decisions that gatekeepers take, about how to rank different companies in search results, can make or break businesses in dozens of markets that depend on the platform. And if platforms also compete in those markets themselves, they can use their position as player and referee to help their own services succeed, at the expense of their rivals. For instance, gatekeepers might manipulate the way that they rank different businesses, to show their own services more visibly than their rivals’. So, the proposal that we’ll put forward in a few weeks’ time will aim to ban this particular type of unfair self-preferencing. 
      • We also know that these companies can collect a lot of data about companies that rely on their platform – data which they can then use, to compete against those very same companies in other markets. That can seriously damage fairness in these markets – which is why our proposal aims to ban big gatekeepers from misusing their business users’ data in that way. 
      • These clear dos and don’ts will allow us to act much faster and more effectively, to tackle behaviour that we know can stop markets working well. But we also need to be ready for new situations, where digitisation creates deep, structural failures in the way our markets work.  
      • Once a digital company gets to a certain size, with the big network of users and the huge collections of data that brings, it can be very hard for anyone else to compete – even if they develop a much better service. So, we face a constant risk that big companies will succeed in pushing markets to a tipping point, sending them on a rapid, unstoppable slide towards monopoly – and creating yet another powerful gatekeeper. 
      • One way to deal with risks like this would be to stop those emerging gatekeepers from locking users into their platform. That could mean, for instance, that those gatekeepers would have to make it easier for users to switch platform, or to use more than one service. That would keep the market open for competition, by making it easier for innovative rivals to compete. But right now, we don’t have the power to take this sort of action when we see these risks arising. 
      • It can also be difficult to deal with large companies which use the power of their platforms again and again, to take over one related market after another. We can deal with that issue with a series of cases – but the risk is that we’ll always find ourselves playing catch-up, while platforms move from market to market, using the same strategies to drive out their rivals. 
      • The risk, though, is that we’ll have a fragmented system, with different rules in different EU countries. That would make it hard to tackle huge platforms that operate throughout Europe, and to deal with other problems that you find in digital markets in many EU countries. And it would mean that businesses and consumers across Europe can’t all rely on the same protection. 
      • That’s why the second pillar of the Digital Markets Act would put a harmonised market investigation framework in place across the single market, giving us the power to tackle market failures like this in digital markets, and stop new ones from emerging. That would give us a harmonised set of rules that would allow us to investigate certain structural problems in digital markets. And if necessary, we could take action to make these markets contestable and competitive.
  • California Governor Gavin Newsom (D) vetoed one of the bills sent to him to amend the “California Consumer Privacy Act” (AB 375) last week. In mid-October, he signed two bills that amended the CCPA but one will only take effect if the “California Privacy Rights Act” (CPRA) (Ballot Initiative 24) is not enacted by voters in the November election. Moreover, if the CPRA is enacted via ballot, then the two other statutes would likely become dead law as the CCPA and its amendments would become moot once the CPRA becomes effective in 2023.
    • Newsom vetoed AB 1138, which would have amended the recently effective “Parent’s Accountability and Child Protection Act” to bar those under the age of 13 from opening a social media account unless the platform got explicit consent from their parents. Moreover, “[t]he bill would deem a business to have actual knowledge of a consumer’s age if it willfully disregards the consumer’s age.” In his veto message, Newsom explained that while he agrees with the spirit of the legislation, it would create unnecessary confusion and overlap with federal law without any increase in protection for children. He signaled an openness to working with the legislature on this issue, however.
    • Newsom signed AB 1281 that would extend the carveout for employers to comply with the CCPA from 1 January 2021 to 1 January 2022. The CCPA “exempts from its provisions certain information collected by a business about a natural person in the course of the natural person acting as a job applicant, employee, owner, director, officer, medical staff member, or contractor, as specified…[and also] exempts from specified provisions personal information reflecting a written or verbal communication or a transaction between the business and the consumer, if the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from that company, partnership, sole proprietorship, nonprofit, or government agency.” AB 1281 “shall become operative only” if the CPRA is not approved by voters.
    • Newsom also signed AB 713 that would:
      • except from the CCPA information that was deidentified in accordance with specified federal law, or was derived from medical information, protected health information, individually identifiable health information, or identifiable private information, consistent with specified federal policy, as provided.
      • except from the CCPA a business associate of a covered entity, as defined, that is governed by federal privacy, security, and data breach notification rules if the business associate maintains, uses, and discloses patient information in accordance with specified requirements.
      • except information that is collected for, used in, or disclosed in research, as defined.
      • additionally prohibit a business or other person from reidentifying information that was deidentified, unless a specified exception is met.
      • beginning January 1, 2021, require a contract for the sale or license of deidentified information to include specified provisions relating to the prohibition of reidentification, as provided.
  • The Government Accountability Office (GAO) published a report requested by the chair of a House committee on a Federal Communications Commission (FCC) program that subsidizes broadband providers serving high-cost areas, typically people in rural or hard-to-reach places. Even though the FCC provided nearly $5 billion in 2019 through the Universal Service Fund’s (USF) high-cost program, the agency lacks the data to determine whether the goals of the program are being met. House Energy and Commerce Committee Chair Frank Pallone Jr (D-NJ) had asked for the assessment, which could well form the basis for future changes to how the FCC funds and sets conditions for use of said funds for broadband.
    • The GAO noted:
      • According to stakeholders GAO interviewed, FCC faces three key challenges to accomplish its high-cost program performance goals: (1) accuracy of FCC’s broadband deployment data, (2) broadband availability on tribal lands, and (3) maintaining existing fixed-voice infrastructure and attaining universal mobile service. For example, although FCC adopted a more precise method of collecting and verifying broadband availability data, stakeholders expressed concern the revised data would remain inaccurate if carriers continue to overstate broadband coverage for marketing and competitive reasons. Overstating coverage impairs FCC’s efforts to promote universal voice and broadband since an area can become ineligible for high-cost support if a carrier reports that service already exists in that area. FCC has also taken actions to address the lack of broadband availability on tribal lands, such as making some spectrum available to tribes for wireless broadband in rural areas. However, tribal stakeholders told GAO that some tribes are unable to secure funding to deploy the infrastructure necessary to make use of spectrum for wireless broadband purposes.
    • The GAO concluded:
      • The effective use of USF resources is critical given the importance of ensuring that Americans have access to voice and broadband services. Overall, FCC’s high-cost program performance goals and measures are often not conducive to providing FCC with high-quality performance information. The performance goals lack measurable or quantifiable bases upon which numeric targets can be set. Further, while FCC’s performance measures met most of the key attributes of effective measures, the measures often lacked linkage, clarity, and objectivity. Without such clarity and specific desired outcomes, FCC lacks performance information that could help FCC make better-informed decisions about how to allocate the program’s resources to meet ongoing and emerging challenges to ensuring universal access to voice and broadband services. Furthermore, the absence of public reporting of this information leaves stakeholders, including Congress, uncertain about the program’s effectiveness to deploy these services.
    • The GAO recommended:
      • The Chairman of FCC should revise the high-cost performance goals so that they are measurable and quantifiable.
      • The Chairman of FCC should ensure high-cost performance measures align with key attributes of successful performance measures, including ensuring that measures clearly link with performance goals and have specified targets.
      • The Chairman of FCC should ensure the high-cost performance measure for the goal of minimizing the universal service contribution burden on consumers and businesses takes into account user-fee leading practices, such as equity and sustainability considerations.
      • The Chairman of FCC should publicly and periodically report on the progress it has made for its high-cost program’s performance goals, for example, by including relevant performance information in its Annual Broadband Deployment Report or the USF Monitoring Report.
  • An Irish court has ruled that the Data Protection Commission (DPC) must cover the legal fees of Maximillian Schrems for litigating and winning his case in which the Court of Justice of the European Union (CJEU) struck down the adequacy decision underpinning the European Union-United States Privacy Shield (aka Schrems II). It has been estimated that Schrems’ legal costs could total between €2-5 million for filing a complaint against Facebook, the low end of which would represent 10% of the DPC’s budget this year. In addition, the DPC has itself spent almost €3 million litigating the case.
  • House Transportation and Infrastructure Chair Peter DeFazio (D-OR) and Ranking Member Sam Graves (R-MO) asked that the Government Accountability Office (GAO) review the implications of the Federal Communications Commission’s (FCC) 2019 proposal to allow half of the Safety Band (i.e. 5.9 GHz spectrum) to be used for wireless communications even though the United States (U.S.) Department of Transportation weighed in against this decision. DeFazio and Graves are asking for a study of “the safety implications of sharing more than half of the 5.9 GHz spectrum band” that is currently “reserved exclusively for transportation safety purposes.” This portion of the spectrum was set aside in 1999 for connected vehicle technologies, according to DeFazio and Graves, and given the rise of automobile technology that requires spectrum, they think the FCC’s proposed proceeding “may significantly affect the efficacy of current and future applications of vehicle safety technologies.” DeFazio and Graves asked the GAO to review the following:
    • What is the current status of 5.9 GHz wireless transportation technologies, both domestically and internationally?
    • What are the views of industry and other stakeholders on the potential uses of these technologies, including scenarios where the 5.9 GHz spectrum is shared among different applications?
    • In a scenario in which the 5.9 GHz spectrum band is shared among different applications, what effect would this have on the efficacy, including safety, of current wireless transportation technology deployments?
    • What options exist for automakers and highway management agencies to ensure the safe deployment of connected vehicle technologies in a scenario in which the 5.9 GHz spectrum band is shared among different applications?
    • How, if at all, have DOT and FCC assessed current and future needs for 5.9 GHz spectrum in transportation applications? 
    • How, if at all, have DOT and FCC coordinated to develop a federal spectrum policy for connected vehicles?
  • The Harvard Law School’s Cyberlaw Clinic and the Electronic Frontier Foundation (EFF) have published “A Researcher’s Guide to Some Legal Risks of Security Research,” which is timely given the case before the Supreme Court of the United States that will determine the scope of the “Computer Fraud and Abuse Act” (CFAA). The Cyberlaw Clinic and EFF stated:
    • Just last month, over 75 prominent security researchers signed a letter urging the Supreme Court not to interpret the CFAA, the federal anti-hacking/computer crime statute, in a way that would criminalize swaths of valuable security research.
    • In the report, the Cyberlaw Clinic and EFF explained:
      • This guide overviews broad areas of potential legal risk related to security research, and the types of security research likely implicated. We hope it will serve as a useful starting point for concerned researchers and others. While the guide covers what we see as the main areas of legal risk for security researchers, it is not exhaustive.
    • In Van Buren v. United States, the Court will consider the question of “[w]hether a person who is authorized to access information on a computer for certain purposes violates Section 1030(a)(2) of the Computer Fraud and Abuse Act if he accesses the same information for an improper purpose.” In this case, the defendant was a police officer who took money as part of a sting operation to illegally use his access to Georgia’s database of license plates to obtain information about a person. The Eleventh Circuit Court of Appeals denied his appeal of his conviction under the CFAA per a previous ruling in that circuit that “a defendant violates the CFAA not only when he obtains information that he has no “rightful[]” authorization whatsoever to acquire, but also when he obtains information “for a nonbusiness purpose.”
    • As the Cyberlaw Clinic and EFF noted in the report:
      • Currently, courts are divided about whether the statute’s prohibition on “exceed[ing] authorized access” applies to people who have authorization to access data (for some purpose), but then access it for a (different) purpose that violates contractual terms of service or a computer use policy. Some courts have found that the CFAA covers activities that do not circumvent any technical access barriers, from making fake profiles that violate Facebook’s terms of service to running a search on a government database without permission. Other courts, disagreeing with the preceding approach, have held that a verbal or contractual prohibition alone cannot render access punishable under the CFAA.
  • The United Kingdom’s Institution of Engineering and Technology (IET) and the National Cyber Security Centre (NCSC) have published “Code of Practice: Cyber Security and Safety” to help engineers develop technological and digital products with both safety and cybersecurity in mind. The IET and NCSC explained:
    • The aim of this Code is to help organizations accountable for safety-related systems manage cyber security vulnerabilities that could lead to hazards. It does this by setting out principles that when applied will improve the interaction between the disciplines of system safety and cyber security, which have historically been addressed as distinct activities.
    • The objectives for organizations that follow this Code are: (a) to manage the risk to safety from any insecurity of digital technology; (b) to be able to provide assurance that this risk is acceptable; and (c) where there is a genuine conflict between the controls used to achieve safety and security objectives, to ensure that these are properly resolved by the appropriate authority.
    • It should be noted that the focus of this Code is on the safety and security of digital technology, but it is recognized that addressing safety and cyber security risk is not just a technological issue: it involves people, process, physical and technological aspects. It should also be noted that whilst digital technology is central to the focus, other technologies can be used in pursuit of a cyber attack. For example, the study of analogue emissions (electromagnetic, audio, etc.) may give away information being digitally processed and thus analogue interfaces could be used to provide an attack surface.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.
  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Pexels from Pixabay

Further Reading, Other Developments, and Coming Events (22 October)

Further Reading

  • “A deepfake porn Telegram bot is being used to abuse thousands of women” By Matt Burgess — WIRED UK. A bot set loose on Telegram can take pictures of women, and apparently teens too, and “take off” their clothing, rendering fake nude images of people who never took nude photographs. This seems to be the next iteration in deepfake porn, a problem that will surely get worse until governments legislate against it and technology companies have incentives to locate and take down such material.
  • “The Facebook-Twitter-Trump Wars Are Actually About Something Else” By Charlie Warzel — The New York Times. This piece makes the case that there are no easy fixes for American democracy or for misinformation on social media platforms.
  • “Facebook says it rejected 2.2m ads for breaking political campaigning rules” — Agence France-Presse. Facebook’s Vice President of Global Affairs and Communications Nick Clegg said the social media giant is employing artificial intelligence and humans to find and remove political advertisements that violate policy in order to avoid a repeat of 2016, when untrue information and misinformation played roles in both Brexit and the election of Donald Trump as President of the United States.
  • “Huawei Fallout—Game-Changing New China Threat Strikes At Apple And Samsung” By Zak Doffman — Forbes. Smartphone manufacturers from the People’s Republic of China (PRC) appear ready to step into the projected void caused by the United States (U.S.) strangling off Huawei’s access to chips. Xiaomi and Oppo have already seen sales surge worldwide and are poised to pick up where Huawei is being forced to leave off, perhaps demonstrating the limits of U.S. power to blunt the rise of PRC technology companies.
  • “As Local News Dies, a Pay-for-Play Network Rises in Its Place” By Davey Alba and Jack Nicas — The New York Times. With the decline and demise of many local media outlets in the United States, new groups are stepping into the void, and some are politically minded but not transparent about their biases. The organization uncovered in this article is nakedly Republican and is running and planting articles at both legitimate and artificial news sites for pay. Sometimes conservative donors pay, sometimes campaigns do. Democrats are engaged in the same activity but apparently to a lesser extent. These sorts of activities will only further erode faith in the U.S. media.
  • “Forget Antitrust Laws. To Limit Tech, Some Say a New Regulator Is Needed.” By Steve Lohr — The New York Times. This piece argues that anti-trust enforcement actions are plodding, tending to take years to finish. Consequently, this body of law is inadequate to the task of addressing the market dominance of big technology companies. Instead, a new regulatory body is needed along the lines of those regulating the financial services industries, one more nimble than anti-trust enforcement. Given the regulatory problems in that industry, this may not be the best model.
  • “‘Do Not Track’ Is Back, and This Time It Might Work” By Gilad Edelman — WIRED. Looking to utilize the requirement in the “California Consumer Privacy Act” (CCPA) (AB 375) that regulated entities respect and effectuate the use of a one-time opt-out mechanism, a group of entities have come together to build and roll out the Global Privacy Control. In theory, users could download this technical specification to their phones and computers, install it, use it once, and then all websites would be on notice regarding that person’s privacy preferences. Such a means would address the problem turned up by Consumer Reports’ recent report on the difficulty of trying to opt out of having one’s personal information sold.
  • “EU countries sound alarm about growing anti-5G movement” By Laurens Cerulus — Politico. 15 European Union (EU) nations wrote the European Commission (EC) warning that the nascent anti-5G movement, borne of conspiracy thinking and misinformation, threatens the EU’s position vis-à-vis the United States (U.S.) and the People’s Republic of China (PRC). There have been more than 200 documented arson attacks in the EU, with the most having occurred in the United Kingdom, France, and the Netherlands. These nations called for a more muscular, more forceful debunking of the lies and misinformation being spread about 5G.
  • “Security firms call Microsoft’s effort to disrupt botnet to protect against election interference ineffective” By Jay Greene — The Washington Post. Microsoft seemingly acted alongside the United States (U.S.) Cyber Command to take down and impair the operation of Trickbot, but now cybersecurity experts are questioning how effective Microsoft’s efforts really were. Researchers have shown the Russian-operated Trickbot has already stood up operations and dispersed across servers around the world, showing how difficult it is to address some cyber threats.
  • “Governments around the globe find ways to abuse Facebook” By Sara Fischer and Ashley Gold — Axios. This piece puts a different spin than a recent BuzzFeed News article on the challenges Facebook faces in countries around the world, especially those whose governments ruthlessly use the platform to spread lies and misinformation. The new article paints Facebook as a well-meaning company being taken advantage of, while the other portrays a company indifferent to content moderation except in nations where lapses cause it political problems, such as the United States, the European Union, and other western democracies.

Other Developments

  • The United States (U.S.) Department of Justice’s (DOJ) Cyber-Digital Task Force (Task Force) issued “Cryptocurrency: An Enforcement Framework,” that “provides a comprehensive overview of the emerging threats and enforcement challenges associated with the increasing prevalence and use of cryptocurrency; details the important relationships that the Department of Justice has built with regulatory and enforcement partners both within the United States government and around the world; and outlines the Department’s response strategies.” The Task Force noted “[t]his document does not contain any new binding legal requirements not otherwise already imposed by statute or regulation.” The Task Force summarized the report:
    • [I]n Part I, the Framework provides a detailed threat overview, cataloging the three categories into which most illicit uses of cryptocurrency typically fall: (1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; and (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself. 
    • Part II explores the various legal and regulatory tools at the government’s disposal to confront the threats posed by cryptocurrency’s illicit uses, and highlights the strong and growing partnership between the Department of Justice and the Securities and Exchange Commission, the Commodity Futures Trading Commission, and agencies within the Department of the Treasury, among others, to enforce federal law in the cryptocurrency space.
    • Finally, the Enforcement Framework concludes in Part III with a discussion of the ongoing challenges the government faces in cryptocurrency enforcement—particularly with respect to business models (employed by certain cryptocurrency exchanges, platforms, kiosks, and casinos), and to activity (like “mixing” and “tumbling,” “chain hopping,” and certain instances of jurisdictional arbitrage) that may facilitate criminal activity.    
  • The White House’s Office of Science and Technology Policy (OSTP) has launched a new website for the United States’ (U.S.) quantum initiative and released a report titled “Quantum Frontiers: Report On Community Input To The Nation’s Strategy For Quantum Information Science.” The Quantum Initiative flows from the “National Quantum Initiative Act” (P.L. 115-368) “to provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States.” The OSTP explained that the report “outlines eight frontiers that contain core problems with fundamental questions confronting quantum information science (QIS) today:
    • Expanding Opportunities for Quantum Technologies to Benefit Society
    • Building the Discipline of Quantum Engineering
    • Targeting Materials Science for Quantum Technologies
    • Exploring Quantum Mechanics through Quantum Simulations
    • Harnessing Quantum Information Technology for Precision Measurements
    • Generating and Distributing Quantum Entanglement for New Applications
    • Characterizing and Mitigating Quantum Errors
    • Understanding the Universe through Quantum Information
    • OSTP asserted “[t]hese frontier areas, identified by the QIS research community, are priorities for the government, private sector, and academia to explore in order to drive breakthrough R&D.”
  • The New York Department of Financial Services (NYDFS) published its report on the July 2020 Twitter hack during which a team of hackers took over a number of high-profile accounts (e.g. Barack Obama, Kim Kardashian West, Jeff Bezos, and Elon Musk) in order to perpetrate a cryptocurrency scam. The NYDFS has jurisdiction over cryptocurrencies and the companies dealing in them in New York. The NYDFS found that the hackers used the most basic means to acquire permission to take over accounts. The NYDFS explained:
    • Given that Twitter is a publicly traded, $37 billion technology company, it was surprising how easily the Hackers were able to penetrate Twitter’s network and gain access to internal tools allowing them to take over any Twitter user’s account. Indeed, the Hackers used basic techniques more akin to those of a traditional scam artist: phone calls where they pretended to be from Twitter’s Information Technology department. The extraordinary access the Hackers obtained with this simple technique underscores Twitter’s cybersecurity vulnerability and the potential for devastating consequences. Notably, the Twitter Hack did not involve any of the high-tech or sophisticated techniques often used in cyberattacks–no malware, no exploits, and no backdoors.
    • The implications of the Twitter Hack extend far beyond this garden-variety fraud. There are well-documented examples of social media being used to manipulate markets and interfere with elections, often with the simple use of a single compromised account or a group of fake accounts. In the hands of a dangerous adversary, the same access obtained by the Hackers–the ability to take control of any Twitter users’ account–could cause even greater harm.
    • The Twitter Hack demonstrates the need for strong cybersecurity to curb the potential weaponization of major social media companies. But our public institutions have not caught up to the new challenges posed by social media. While policymakers focus on antitrust and content moderation problems with large social media companies, their cybersecurity is also critical. In other industries that are deemed critical infrastructure, such as telecommunications, utilities, and finance, we have established regulators and regulations to ensure that the public interest is protected. With respect to cybersecurity, that is what is needed for large, systemically important social media companies.
    • The NYDFS recommended cybersecurity measures cryptocurrency companies should implement to avoid similar hacks, including the NYDFS’ own cybersecurity regulations that bind its regulated entities in New York. The NYDFS also called for a national regulator to address the lack of a dedicated regulator of Twitter and other massive social media platforms. The NYDFS asserted:
      • Social media companies currently have no dedicated regulator. They are subject to the same general oversight applicable to other companies. For instance, the SEC’s regulations for all public companies apply to public social media companies, and antitrust and related laws and regulations enforced by the Department of Justice and the FTC apply to social media companies as they do to all companies. Social media companies are also subject to generally applicable laws, such as the California Consumer Privacy Act and the New York SHIELD Act. The European Union’s General Data Protection Regulation, which regulates the storage and use of personal data, also applies to social media entities doing business in Europe.
      • But there are no regulators that have the authority to uniformly regulate social media platforms that operate over the internet, and to address the cybersecurity concerns identified in this Report. That regulatory vacuum must be filled.
      • A useful starting point is to create a “systemically important” designation for large social media companies, like the designation for critically important bank and non-bank financial institutions. In the wake of the 2007-08 financial crisis, Congress established a new regulatory framework for financial institutions that posed a systemic threat to the financial system of the United States. An institution could be designated as a Systemically Important Financial Institution (“SIFI”) “where the failure of or a disruption to the functioning of a financial market utility or the conduct of a payment, clearing, or settlement activity could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States.”
      • The risks posed by social media to our consumers, economy, and democracy are no less grave than the risks posed by large financial institutions. The scale and reach of these companies, combined with the ability of adversarial actors who can manipulate these systems, require a similarly bold and assertive regulatory approach.
      • The designation of an institution as a SIFI is made by the Financial Stability Oversight Council (“FSOC”), which Congress established to “identify risks to the financial stability of the United States” and to provide enhanced supervision of SIFIs. The FSOC also “monitors regulatory gaps and overlaps to identify emerging sources of systemic risk.” In determining whether a financial institution is systemically important, the FSOC considers numerous factors including: the effect that a failure or disruption to an institution would have on financial markets and the broader financial system; the nature of the institution’s transactions and relationships; the nature, concentration, interconnectedness, and mix of the institution’s activities; and the degree to which the institution is regulated.
      • An analogue to the FSOC should be established to identify systemically important social media companies. This new Oversight Council should evaluate the reach and impact of social media companies, as well as the society-wide consequences of a social media platform’s misuse, to determine which companies they should designate as systemically important. Once designated, those companies should be subject to enhanced regulation, such as through the provision of “stress tests” to evaluate the social media companies’ susceptibility to key threats, including cyberattacks and election interference.
      • Finally, the success of such oversight will depend on the establishment of an expert agency to oversee designated social media companies. Systemically important financial companies designated by the FSOC are overseen by the Federal Reserve Board, which has a long-established and deep expertise in banking and financial market stability. A regulator for systemically important social media would likewise need deep expertise in areas such as technology, cybersecurity, and disinformation. This expert regulator could take various forms; it could be a completely new agency or could reside within an established agency or at an existing regulator.
  • The Government Accountability Office (GAO) evaluated how well the Trump Administration has been implementing the “Open, Public, Electronic and Necessary Government Data Act of 2018” (OPEN Government Data Act) (P.L. 115-435). As the GAO explained, this statute “requires federal agencies to publish their information as open data using standardized, nonproprietary formats, making data available to the public open by default, unless otherwise exempt…[and] codifies and expands on existing federal open data policy including the Office of Management and Budget’s (OMB) memorandum M-13-13 (M-13-13), Open Data Policy—Managing Information as an Asset.”
    • The GAO stated
      • To continue moving forward with open government data, the issuance of OMB implementation guidance should help agencies develop comprehensive inventories of their data assets, prioritize data assets for publication, and decide which data assets should or should not be made available to the public.
      • Implementation of this statutory requirement is critical to agencies’ full implementation and compliance with the act. In the absence of this guidance, agencies, particularly agencies that have not previously been subject to open data policies, could fall behind in meeting their statutory timeline for implementing comprehensive data inventories.
      • It is also important for OMB to meet its statutory responsibility to biennially report on agencies’ performance and compliance with the OPEN Government Data Act and to coordinate with General Services Administration (GSA) to improve the quality and availability of agency performance data that could inform this reporting. Access to this information could inform Congress and the public on agencies’ progress in opening their data and complying with statutory requirements. This information could also help agencies assess their progress and improve compliance with the act.
    • The GAO made three recommendations:
      • The Director of OMB should comply with its statutory requirement to issue implementation guidance to agencies to develop and maintain comprehensive data inventories. (Recommendation 1)
      • The Director of OMB should comply with the statutory requirement to electronically publish a report on agencies’ performance and compliance with the OPEN Government Data Act. (Recommendation 2)
      • The Director of OMB, in collaboration with the Administrator of GSA, should establish policy to ensure the routine identification and correction of errors in electronically published performance information. (Recommendation 3)
  • The United States’ (U.S.) National Security Agency (NSA) issued a cybersecurity advisory titled “Chinese State-Sponsored Actors Exploit Publicly Known Vulnerabilities,” that “provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.” The NSA recommended a number of mitigations generally for U.S. entities, including:
    • Keep systems and products updated and patched as soon as possible after patches are released.
    • Expect that data stolen or modified (including credentials, accounts, and software) before the device was patched will not be alleviated by patching, making password changes and reviews of accounts a good practice.
    • Disable external management capabilities and set up an out-of-band management network.
    • Block obsolete or unused protocols at the network edge and disable them in device configurations.
    • Isolate Internet-facing services in a network Demilitarized Zone (DMZ) to reduce the exposure of the internal network.
    • Enable robust logging of Internet-facing services and monitor the logs for signs of compromise.
    • The NSA then proceeded to recommend specific fixes.
    • The NSA provided this policy backdrop:
      • One of the greatest threats to U.S. National Security Systems (NSS), the U.S. Defense Industrial Base (DIB), and Department of Defense (DOD) information networks is Chinese state-sponsored malicious cyber activity. These networks often undergo a full array of tactics and techniques used by Chinese state-sponsored cyber actors to exploit computer networks of interest that hold sensitive intellectual property, economic, political, and military information. Since these techniques include exploitation of publicly known vulnerabilities, it is critical that network defenders prioritize patching and mitigation efforts.
      • The same process for planning the exploitation of a computer network by any sophisticated cyber actor is used by Chinese state-sponsored hackers. They often first identify a target, gather technical information on the target, identify any vulnerabilities associated with the target, develop or re-use an exploit for those vulnerabilities, and then launch their exploitation operation.
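The NSA’s mitigations above are policy guidance rather than code, but the advice to “block obsolete or unused protocols at the network edge” can be illustrated with a short sketch: given the set of ports observed open on a device, flag those associated with legacy or cleartext protocols. The port-to-protocol mapping and the function name below are illustrative assumptions for this sketch, not part of the advisory.

```python
# Hypothetical helper illustrating the "block obsolete or unused protocols"
# guidance: flag open ports tied to legacy, cleartext, or commonly abused
# services. The mapping is illustrative, not exhaustive or authoritative.
OBSOLETE_PROTOCOLS = {
    21: "FTP (cleartext)",
    23: "Telnet (cleartext)",
    69: "TFTP",
    137: "NetBIOS name service",
    139: "SMBv1/NetBIOS session",
    512: "rexec",
    513: "rlogin",
    514: "rsh",
}

def flag_obsolete(open_ports):
    """Return {port: label} for every open port that maps to a legacy protocol."""
    return {p: OBSOLETE_PROTOCOLS[p] for p in sorted(open_ports)
            if p in OBSOLETE_PROTOCOLS}

if __name__ == "__main__":
    # Example: ports observed on a hypothetical edge-device scan.
    observed = {22, 23, 80, 139, 443}
    for port, label in flag_obsolete(observed).items():
        print(f"port {port}: {label} -- consider blocking or disabling")
```

In practice this kind of check would feed an edge firewall policy or a configuration audit; the point of the sketch is only that the advisory’s mitigation reduces to an enumerable, checkable list rather than a judgment call.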
  • Belgium’s data protection authority (DPA) (Autorité de protection des données in French or Gegevensbeschermingsautoriteit in Dutch) (APD-GBA) has reportedly found that the Transparency & Consent Framework (TCF) developed by the Interactive Advertising Bureau (IAB) violates the General Data Protection Regulation (GDPR). The Real-Time Bidding (RTB) system used for online behavioral advertising allegedly transmits the personal information of European Union residents without their consent even before a popup appears on their screen asking for consent. The APD-GBA is the lead DPA in the EU in investigating the RTB and will likely now circulate their findings and recommendations to other EU DPAs before any enforcement will commence.
  • None Of Your Business (noyb) announced “[t]he Irish High Court has granted leave for a “Judicial Review” against the Irish Data Protection Commission (DPC) today…[and] [t]he legal action by noyb aims to swiftly implement the [Court of Justice for the European Union (CJEU)] Decision prohibiting Facebook’s” transfer of personal data from the European Union to the United States (U.S.). Last month, after the DPC directed Facebook to stop transferring the personal data of EU citizens to the U.S., the company filed suit in the Irish High Court to stop enforcement of the order and succeeded in staying the matter until the court rules on the merits of the challenge.
    • noyb further asserted:
      • Instead of making a decision in the pending procedure, the DPC has started a second, new investigation into the same subject matter (“Parallel Procedure”), as widely reported (see original reporting by the WSJ). No logical reason for the Parallel Procedure was given, but the DPC has maintained that Mr Schrems will not be heard in this second case, as he is not a party in this Parallel Procedure. This Parallel Procedure was criticised by Facebook publicly (link) and instantly blocked by a Judicial Review by Facebook (see report by Reuters).
      • Today’s Judicial Review by noyb is in many ways the counterpart to Facebook’s Judicial Review: While Facebook wants to block the second procedure by the DPC, noyb wants to move the original complaints procedure towards a decision.
      • Earlier this summer, the CJEU struck down the adequacy decision for the agreement between the EU and the U.S. that had provided the easiest means to transfer the personal data of EU citizens to the U.S. for processing under the General Data Protection Regulation (GDPR) (i.e. the EU-U.S. Privacy Shield). In the case known as Schrems II, the CJEU also cast doubt on whether standard contractual clauses (SCC) used to transfer personal data to the U.S. would pass muster given the grounds for finding the Privacy Shield inadequate: the U.S.’s surveillance regime and lack of meaningful redress for EU citizens. Consequently, it has appeared as if data protection authorities throughout the EU would need to revisit SCCs for transfers to the U.S., and it appears the DPC was looking to stop Facebook from using its SCCs. Facebook is apparently arguing in its suit that it will suffer “extremely significant adverse effects” if the DPC’s decision is implemented.
  • Most likely with the aim of helping British chances for an adequacy decision from the European Union (EU), the United Kingdom’s Information Commissioner’s Office (ICO) published guidance that “discusses the right of access [under the General Data Protection Regulation (GDPR)] in detail.” The ICO explained the guidance “is aimed at data protection officers (DPOs) and those with specific data protection responsibilities in larger organisations…[but] does not specifically cover the right of access under Parts 3 and 4 of the Data Protection Act 2018.”
    • The ICO explained
      • The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy of their personal data from you, as well as other supplementary information.
  • The Government Accountability Office (GAO) released the report on the data security and data privacy practices of public schools that the House Education and Labor Committee’s Ranking Member requested. Representative Virginia Foxx (R-NC) asked the GAO “to review the security of K-12 students’ data. This report examines (1) what is known about recently reported K-12 cybersecurity incidents that compromised student data, and (2) the characteristics of school districts that experienced these incidents.” Strangely, the report did not include GAO’s customary conclusions or recommendations. Nonetheless, the GAO found:
    • Ninety-nine student data breaches reported from July 1, 2016 through May 5, 2020 compromised the data of students in 287 school districts across the country, according to our analysis of K-12 Cybersecurity Resource Center (CRC) data (see fig. 3). Some breaches involved a single school district, while others involved multiple districts. For example, an attack on a vendor system in the 2019-2020 school year affected 135 districts. While information about the number of students affected was not available for every reported breach, examples show that some breaches affected thousands of students, for instance, when a cybercriminal accessed 14,000 current and former students’ personally identifiable information (PII) in one district.
    • The 99 reported student data breaches likely understate the number of breaches that occurred, for different reasons. Reported incidents sometimes do not include sufficient information to discern whether data were breached. We identified 15 additional incidents in our analysis of CRC data in which student data might have been compromised, but the available information was not definitive. In addition, breaches can go undetected for some time. In one example, the personal information of hundreds of thousands of current and former students in one district was publicly posted for 2 years before the breach was discovered.
    • The CRC identified 28 incidents involving videoconferences from April 1, 2020 through May 5, 2020, some of which disrupted learning and exposed students to harm. In one incident, 50 elementary school students were exposed to pornography during a virtual class. In another incident in a different district, high school students were targeted with hate speech during a class, resulting in the cancellation that day of all classes using the videoconferencing software. These incidents also raise concerns about the potential for violating students’ privacy. For example, one district is reported to have instructed teachers to record their class sessions. Teachers said that students’ full names were visible to anyone viewing the recording.
    • The GAO found gaps in the protection and enforcement of student privacy by the United States government:
      • [The Department of] Education is responsible for enforcing Family Educational Rights and Privacy Act (FERPA), which addresses the privacy of PII in student education records and applies to all schools that receive funds under an applicable program administered by Education. If parents or eligible students believe that their rights under FERPA have been violated, they may file a formal complaint with Education. In response, Education is required to take appropriate actions to enforce and deal with violations of FERPA. However, because the department’s authority under FERPA is directly related to the privacy of education records, Education’s security role is limited to incidents involving potential violations under FERPA. Further, FERPA amendments have not directly addressed educational technology use.
      • The “Children’s Online Privacy Protection Act” (COPPA) requires the Federal Trade Commission (FTC) to issue and enforce regulations concerning children’s privacy. The COPPA Rule, which took effect in 2000 and was later amended in 2013, requires operators of covered websites or online services that collect personal information from children under age 13 to provide notice and obtain parental consent, among other things. COPPA generally applies to the vendors who provide educational technology, rather than to schools directly. However, according to FTC guidance, schools can consent on behalf of parents to the collection of students’ personal information if such information is used for a school-authorized educational purpose and for no other commercial purpose.
  • Upturn, an advocacy organization that “advances equity and justice in the design, governance, and use of technology,” has released a report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. The means and vendors for breaking into phones have long been available in the U.S. and abroad despite the claims of a number of nations like the Five Eyes (U.S., the United Kingdom, Australia, Canada, and New Zealand) that default end-to-end encryption was a growing problem that allowed those preying on children and engaged in terrorism to go undetected. In terms of possible bias, Upturn is “supported by the Ford Foundation, the Open Society Foundations, the John D. and Catherine T. MacArthur Foundation, Luminate, the Patrick J. McGovern Foundation, and Democracy Fund.”
    • Upturn stated:
      • Every day, law enforcement agencies across the country search thousands of cellphones, typically incident to arrest. To search phones, law enforcement agencies use mobile device forensic tools (MDFTs), a powerful technology that allows police to extract a full copy of data from a cellphone — all emails, texts, photos, location, app data, and more — which can then be programmatically searched. As one expert puts it, with the amount of sensitive information stored on smartphones today, the tools provide a “window into the soul.”
      • This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.
    • Upturn argued:
      • Law enforcement use these tools to investigate not only cases involving major harm, but also for graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. Given how routine these searches are today, together with racist policing policies and practices, it’s more than likely that these technologies disparately affect and are used against communities of color.
      • We believe that MDFTs are simply too powerful in the hands of law enforcement and should not be used. But recognizing that MDFTs are already in widespread use across the country, we offer a set of preliminary recommendations that we believe can, in the short-term, help reduce the use of MDFTs. These include:
        • banning the use of consent searches of mobile devices,
        • abolishing the plain view exception for digital searches,
        • requiring easy-to-understand audit logs,
        • enacting robust data deletion and sealing requirements, and
        • requiring clear public logging of law enforcement use.

Coming Events

  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including Nathan Simington’s to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Mehmet Turgut Kirkgoz from Pixabay

Further Reading, Other Developments, and Coming Events (15 October)

Further Reading

  • “Amazon to escape UK digital services tax that will hit smaller traders” By Mark Sweney — The Guardian. According to media reports, the United Kingdom’s (UK) new digital services tax will not be levied on goods Amazon sells directly to consumers. Rather, the new tax HM Revenue and Customs will collect will be on the revenue from the services Amazon and other platforms charge third-party sellers using Amazon. And, Amazon has made clear it will merely pass along the 2% tax to these entities. This is a strange outcome to a policy ostensibly designed to address the fact that the tech giant paid only £14.4 million in corporation taxes to the UK last year on £13.7 billion in revenue.
  • Norway blames Russia for cyber-attack on parliament” — BBC News. In a statement, the Norwegian government claimed that its Parliament had been breached, and Norway’s Foreign Minister said the Russian Federation is the culprit. Last month the government in Oslo said that the email accounts of some government officials had been compromised, but this announcement seems to indicate the breach was far wider than thought last month, or that the government knew and was holding back the information. If true, this is the second such penetration and exfiltration by Russian security services of a European government in the recent past, as the German government made the same claims, which led to the European Union’s first cyber sanctions.
  • Twitter suspends accounts for posing as Black Trump supporters” By Kari Paul — The Guardian and “Fake Twitter accounts posing as Black Trump supporters appear, reach thousands, then vanish” By Craig Timberg and Isaac Stanley-Becker — The Washington Post. As a rule of thumb, I find asking cui bono helpful. And, so it is with fake Twitter accounts of alleged African Americans who will vote for President Donald Trump. Are these courtesy of the Republican Party and the Trump Campaign? Maybe. They would certainly gain from peeling off African American support for Vice President Joe Biden considering it is his strongest constituency as measured by percentage support relative to total population. The Russians? Sure. They also stand to benefit from stirring the cauldron of unease and division in the United States regardless of who wins, and possibly even more so if Biden wins, for the U.S. will likely return to its pre-Trump adversarial policy towards the Russian Federation. And, finally, how does Twitter benefit from taking down the sort of fake accounts that violate its terms of service when this has not often been its modus operandi? Perhaps to curry favor with a Biden Administration likely to push for changes as to how social media platforms are to be regulated.
  • Backers of Australia’s mandatory news code welcome French ruling on Google” By Amanda Meade — The Guardian. Not surprisingly, the Australian Competition and Consumer Commission (ACCC) was delighted when a French appeals court ruled in favor of France’s competition authority against Google in its challenge of a French law to require social media platforms to pay traditional media for use of their content. The ACCC has been fighting its own battle on this front with its draft code that would require Google and Facebook to do the same down under.
  • Can Tinder be sued for breach of care?” By James Purtrill — ABC News. Given the recent allegations that Tinder knew of sexual assaulters using its app and did nothing, this piece looks at the liability Tinder may face under Australian law. If sexual assaults related to Tinder’s indifference or negligence are occurring in other common law countries, then the company may well face lawsuits there, too.

Other Developments

  • The Government Accountability Office (GAO) found that the Federal Aviation Administration (FAA) has not done all it can on aviation cybersecurity despite the absence of any successful cyber attacks on a plane’s avionics system. The GAO asserted:
    • FAA has not (1) assessed its oversight program to determine the priority of avionics cybersecurity risks, (2) developed an avionics cybersecurity training program, (3) issued guidance for independent cybersecurity testing, or (4) included periodic testing as part of its monitoring process. Until FAA strengthens its oversight program, based on assessed risks, it may not be able to ensure it is providing sufficient oversight to guard against evolving cybersecurity risks facing avionics systems in commercial airplanes.
    • The GAO allowed:
      • Increasing use of technology and connectivity in avionics has brought new opportunities for persons with malicious intentions to target commercial transport airplanes. The connections among avionics and other systems onboard airplanes and throughout the aviation ecosystem are growing more complex as airplanes become more connected to systems that are essential for flight safety and operations. Airframe manufacturers are deploying software and hardware protections to reduce the risk of the cyber threats currently facing avionics systems.
    • The GAO contended:
      • Further, while FAA has mechanisms for coordinating among its internal components and with other federal agencies and private sector stakeholders to address cybersecurity risks, it has not established avionics cybersecurity risks as a priority. As a result, avionics cybersecurity issues that have been raised within FAA have not been consistently tracked to resolution. Until FAA conducts an overall assessment of the cybersecurity risks to avionics systems and prioritizes coordination efforts based on that assessment, it may not be allocating resources and coordinating on risks as effectively as it could.
    • The GAO made these recommendations:
      • The FAA Administrator should direct the Associate Administrator for Aviation Safety to conduct a risk assessment of avionics systems cybersecurity to identify the relative priority of avionics cybersecurity risks for its oversight program compared to other safety concerns and develop a plan to address those risks. (Recommendation 1)
      • The FAA Administrator should direct the Associate Administrator for Aviation Safety, based on the assessment of avionics cybersecurity risks, to identify staffing and training needs for agency inspectors specific to avionics cybersecurity, and develop and implement appropriate training to address identified needs. (Recommendation 2)
      • The FAA Administrator should direct the Associate Administrator for Aviation Safety, based on the assessment of avionics cybersecurity risks, to develop and implement guidance for avionics cybersecurity testing of new airplane designs that includes independent testing. (Recommendation 3)
      • The FAA Administrator should direct the Associate Administrator for Aviation Safety, based on the assessment of avionics cybersecurity risks, to review and consider revising its policies and procedures for monitoring the effectiveness of avionics cybersecurity controls in the deployed fleet to include developing procedures for safely conducting independent testing. (Recommendation 4)
      • The FAA Administrator should direct the Associate Administrator for Aviation Safety to develop a mechanism to ensure that avionics cybersecurity issues are appropriately tracked and resolved when coordinating among internal stakeholders. (Recommendation 5)
      • The FAA Administrator should direct the Associate Administrator for Aviation Safety, based on the assessment of avionics cybersecurity risks, to review and consider the extent to which oversight resources should be committed to avionics cybersecurity. (Recommendation 6)
  • The chairs and ranking members of the House Energy and Commerce Committee and one of its subcommittee wrote the Government Accountability Office (GAO) to “evaluate Department of Health and Human Services’ (HHS) [cyber] incident response capabilities…[and] should include assessing the agency’s forensic threat intelligence data infrastructure used in responding to major or significant incidents involving persistent threats and data breaches.” Chair Frank Pallone, Jr. (D-NJ), Ranking Member Greg Walden (R-OR), and Oversight and Investigations Subcommittee Chair Diana DeGette (D-CO), and Ranking Member Brett Guthrie (R-KY) stated:
    • The Chief Information Security Officer at HHS recently acknowledged that the ongoing COVID-19 public health crisis has placed a new target on HHS, and malicious actors have boosted their efforts to infiltrate the agency and access sensitive data. In addition, it was reported in March 2020 that HHS suffered a cyber-attack on its computer system. According to people familiar with the incident, it was part of a campaign of disruption and disinformation that was aimed at undermining the response to the coronavirus pandemic and may have been the work of a foreign actor. Further, emerging cyber threats, such as the advanced persistent threat groups that exploited COVID-19 in early 2020, underscore the importance of effectively protecting information systems supporting the agency.
    • Given the types of information created, stored, and shared on the information systems owned and operated by HHS, it is important that the agency implement effective incident response handling processes and procedures to address persistent cyber-based threats.
  • A federal court denied Epic Games’ request for a preliminary injunction requiring Apple to put Fortnite back into the App Store. The judge assigned to the case had signaled this request would likely fail, as Epic’s request for a temporary restraining order was also rejected. The United States District Court for the Northern District of California summarized Epic’s motion:
    • In this motion for preliminary injunction, Epic Games asks the Court to force Apple to reinstate Fortnite to the Apple App Store, despite its acknowledged breach of its licensing agreements and operating guidelines, and to stop Apple from terminating its affiliates’ access to developer tools for other applications, including Unreal Engine, while Epic Games litigates its claims.
    • The court stated:
      • Epic Games bears the burden in asking for such extraordinary relief. Given the novelty and the magnitude of the issues, as well as the debate in both the academic community and society at large, the Court is unwilling to tilt the playing field in favor of one party or the other with an early ruling of likelihood of success on the merits. Epic Games has strong arguments regarding Apple’s exclusive distribution through the iOS App Store, and the in-app purchase (“IAP”) system through which Apple takes 30% of certain IAP payments. However, given the limited record, Epic Games has not sufficiently addressed Apple’s counter arguments. The equities, addressed in the temporary restraining order, remain the same.
    • The court held:
      • Apple and all persons in active concert or participation with Apple, are preliminarily enjoined from taking adverse action against the Epic Affiliates with respect to restricting, suspending or terminating the Epic Affiliates from the Apple’s Developer Program, on the basis that Epic Games enabled IAP direct processing in Fortnite through means other than the Apple IAP system, or on the basis of the steps Epic Games took to do so. This preliminary injunction shall remain in effect during the pendency of this litigation unless the Epic Affiliates breach: (1) any of their governing agreements with Apple, or (2) the operative App Store guidelines. This preliminary injunction supersedes the prior temporary restraining order.
    • In its complaint, Epic Games is arguing that Apple’s practices violate federal and California antitrust and anti-competition laws. Epic Games argued:
      • This case concerns Apple’s use of a series of anti-competitive restraints and monopolistic practices in markets for (i) the distribution of software applications (“apps”) to users of mobile computing devices like smartphones and tablets, and (ii) the processing of consumers’ payments for digital content used within iOS mobile apps(“in-app content”). Apple imposes unreasonable and unlawful restraints to completely monopolize both markets and prevent software developers from reaching the over one billion users of its mobile devices (e.g., iPhone and iPad) unless they go through a single store controlled by Apple, the App Store, where Apple exacts an oppressive 30% tax on the sale of every app. Apple also requires software developers who wish to sell digital in-app content to those consumers to use a single payment processing option offered by Apple, In-App Purchase, which likewise carries a 30% tax.
      • In contrast, software developers can make their products available to users of an Apple personal computer (e.g., Mac or MacBook) in an open market, through a variety of stores or even through direct downloads from a developer’s website, with a variety of payment options and competitive processing fees that average 3%, a full ten times lower than the exorbitant 30% fees Apple applies to its mobile device in-app purchases.
    • In its late August denial of Epic Games’ request for a temporary restraining order, the court decided the plaintiff does not necessarily have an antitrust case strong enough to succeed on the merits, has not demonstrated irreparable harm because its “current predicament appears to be of its own making,” would be unjustifiably enriched if Fortnite were reinstated to the App Store without having to pay 30% of in-app purchases to Apple, and has not shown a public interest strong enough to overcome the expectation that private parties will honor their contracts or resolve disputes through normal means.
  • As part of its Digital Modernization initiative, the Department of Defense (DOD) released its Data Strategy which is supposed to change how the DOD and its components collect, process, and use data, which is now being framed as an essential element of 21st Century conflicts. The DOD stated:
    • DOD must accelerate its progress towards becoming a data-centric organization. DOD has lacked the enterprise data management to ensure that trusted, critical data is widely available to or accessible by mission commanders, warfighters, decision-makers, and mission partners in a real- time, useable, secure, and linked manner. This limits data-driven decisions and insights, which hinders the execution of swift and appropriate action.
    • Additionally, DOD software and hardware systems must be designed, procured, tested, upgraded, operated, and sustained with data interoperability as a key requirement. All too often these gaps are bridged with unnecessary human-machine interfaces that introduce complexity, delay, and increased risk of error. This constrains the Department’s ability to operate against threats at machine speed across all domains.
    • DOD also must improve skills in data fields necessary for effective data management. The Department must broaden efforts to assess our current talent, recruit new data experts, and retain our developing force while establishing policies to ensure that data talent is cultivated. We must also spend the time to increase the data acumen resident across the workforce and find optimal ways to promote a culture of data awareness.
    • The DOD explained how it will implement the new strategy:
      • Strengthened data governance will include increased oversight at multiple levels. The Office of the DOD Chief Data Officer (CDO) will govern the Department’s data management efforts and ensure sustained focus by DOD leaders. The DOD Chief Information Officer (DOD CIO) will ensure that data priorities are fully integrated into the DOD Digital Modernization program, ensuring synchronization with DOD’s cloud; AI; Command, Control, and Communications (C3); and cybersecurity efforts. The DOD CIO will also promote compliance with CDO guidance via CIO authorities for managing IT investments, issuing DOD policy, and certifying Service/component budgets.
      • The CDO Council, chaired by the DOD CDO, will serve as the primary venue for collaboration among data officers from across the Department. This body will identify and prioritize data challenges, develop solutions, and oversee policy and data standards of the Department. While working closely with the appropriate governance bodies, members of the CDO Council must also advocate that data considerations be made an integral part of all the Department’s requirements, research, procurement, budgeting, and manpower decisions.
    • The DOD concluded:
      • Data underpins digital modernization and is increasingly the fuel of every DOD process, algorithm, and weapon system. The DOD Data Strategy describes an ambitious approach for transforming the Department into a data-driven organization. This requires strong and effective data management coupled with close partnerships with users, particularly warfighters. Every leader must treat data as a weapon system, stewarding data throughout its lifecycle and ensuring it is made available to others. The Department must provide its personnel with the modern data skills and tools to preserve U.S. military advantage in day-to-day competition and ensure that they can prevail in conflict.
    • In its draft Digital Modernization Strategy, the DOD stated:
      • The DOD Digital Modernization Strategy, which also serves as the Department’s Information Resource Management (IRM) Strategic Plan, presents Information Technology (IT)-related modernization goals and objectives that provide essential support for the three lines of effort in the National Defense Strategy (NDS), and the supporting National Defense Business Operations Plan (NDBOP). It presents the DOD CIO’s vision for achieving the Department’s goals and creating “a more secure, coordinated, seamless, transparent, and cost-effective IT architecture that transforms data into actionable information and ensures dependable mission execution in the face of a persistent cyber threat.”

Coming Events

  • The European Union Agency for Cybersecurity (ENISA), Europol’s European Cybercrime Centre (EC3) and the Computer Emergency Response Team for the EU Institutions, Bodies and Agencies (CERT-EU) will hold the 4th annual IoT Security Conference series “to raise awareness on the security challenges facing the Internet of Things (IoT) ecosystem across the European Union:”
    • Supply Chain for IoT – 21 October at 15:00 to 16:30 CET
  • The Federal Communications Commission (FCC) will hold an open commission meeting on 27 October, and the agency has released a tentative agenda:
    • Restoring Internet Freedom Order Remand – The Commission will consider an Order on Remand that would respond to the remand from the U.S. Court of Appeals for the D.C. Circuit and conclude that the Restoring Internet Freedom Order promotes public safety, facilitates broadband infrastructure deployment, and allows the Commission to continue to provide Lifeline support for broadband Internet access service. (WC Docket Nos. 17-108, 17-287, 11-42)
    • Establishing a 5G Fund for Rural America – The Commission will consider a Report and Order that would establish the 5G Fund for Rural America to ensure that all Americans have access to the next generation of wireless connectivity. (GN Docket No. 20-32)
    • Increasing Unlicensed Wireless Opportunities in TV White Spaces – The Commission will consider a Report and Order that would increase opportunities for unlicensed white space devices to operate on broadcast television channels 2-35 and expand wireless broadband connectivity in rural and underserved areas. (ET Docket No. 20-36)
    • Streamlining State and Local Approval of Certain Wireless Structure Modifications – The Commission will consider a Report and Order that would further accelerate the deployment of 5G by providing that modifications to existing towers involving limited ground excavation or deployment would be subject to streamlined state and local review pursuant to section 6409(a) of the Spectrum Act of 2012. (WT Docket No. 19-250; RM-11849)
    • Revitalizing AM Radio Service with All-Digital Broadcast Option – The Commission will consider a Report and Order that would authorize AM stations to transition to an all-digital signal on a voluntary basis and would also adopt technical specifications for such stations. (MB Docket Nos. 13-249, 19-311)
    • Expanding Audio Description of Video Content to More TV Markets – The Commission will consider a Report and Order that would expand audio description requirements to 40 additional television markets over the next four years in order to increase the amount of video programming that is accessible to blind and visually impaired Americans. (MB Docket No. 11-43)
    • Modernizing Unbundling and Resale Requirements – The Commission will consider a Report and Order to modernize the Commission’s unbundling and resale regulations, eliminating requirements where they stifle broadband deployment and the transition to next-generation networks, but preserving them where they are still necessary to promote robust intermodal competition. (WC Docket No. 19-308)
    • Enforcement Bureau Action – The Commission will consider an enforcement action.
  • On 29 October, the Federal Trade Commission (FTC) will hold a seminar titled “Green Lights & Red Flags: FTC Rules of the Road for Business workshop” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • The Senate Commerce, Science, and Transportation Committee will reportedly hold a hearing on 29 October regarding 47 U.S.C. 230 with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.


Image by amrothman from Pixabay