NSA Location Data Guidance

The U.S. signals intelligence agency releases guidance on mobile device location services that should not shock anyone versed in cybersecurity. Why the agency did so is the question.   

The National Security Agency (NSA) has issued guidance for those who work for the United States’ (U.S.) security services and military on how to limit their mobile devices’ exposure to the risks posed by apps’ and operating systems’ use of location data. This public guidance is the latest in a series of recommendations and best practices from the previously more secretive agency charged primarily with signals intelligence for the U.S.

The NSA is aiming the guidance at the U.S. Intelligence Community, Department of Defense, and other users of “national security systems” who are usually outside the purview and authority of the U.S. agency empowered to police the cyber and data security of civilian agencies: the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA). Perhaps the NSA sees space in the federal scheme to advise those working for national security agencies or in these functions at civilian agencies.

The timing of the document is puzzling, however, unless it is an exercise in public relations, for it is hardly a secret that location data can reveal all sorts of information about a person. The NSA is likely seeking to recast its image along the lines of the United Kingdom’s National Cyber Security Centre (NCSC), which often issues advice aimed at a general audience. In the fall of 2019, the NSA announced a reorganization resulting in the creation of the Cybersecurity Directorate, “a major organization that unifies NSA’s foreign intelligence and cyberdefense missions.” NSA asserted this new entity would “work to prevent and eradicate threats to national security systems and critical infrastructure, with an initial focus on the defense industrial base and the improvement of our weapons’ security.” Moreover, “[t]he Cybersecurity Directorate will reinvigorate NSA’s white hat mission by sharing critical threat information and collaborating with partners and customers to better equip them to defend against malicious cyber activity,” the agency claimed.

Since June, NSA has issued a range of guidance documents and warnings.

On the other hand, presumably the NSA, other IC agencies, the DOD, and other agencies are aware of the dangers posed by the use of mobile devices. In fact, the programs exposed by former NSA contractor Edward Snowden included the collection and use of metadata, most likely including location data. Moreover, agencies of the DOD, including the Army and Navy, ordered personnel to remove TikTok from their military devices, in part, because the company would be able to collect location data. More relevantly, in a 3 August 2018 memorandum issued by then Deputy Secretary of Defense Patrick Shanahan, the DOD explained “[e]ffective immediately, Defense Department personnel are prohibited from using geolocation features and functionality on government and nongovernment-issued devices, applications and services while in locations designated as operational areas.” This memorandum resulted from the exercise app Strava releasing a heatmap of the exercise routes of people all over the world, including military personnel, that highlighted the precise locations of some previously secret bases. In 2017, the U.S. Government Accountability Office (GAO) released a report specific to the DOD on the security risks of the Internet of Things, and in 2012 the GAO flagged location data as a potential weak spot in mobile device security.
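The Strava episode illustrates the underlying mechanism: even "anonymized" GPS traces betray locations once many fixes are aggregated into coarse grid cells, because habitual routes produce hot spots. A minimal sketch of that aggregation, with invented coordinates standing in for a daily perimeter run:

```python
from collections import Counter

def heatmap(points, cell=0.001):
    """Bucket (lat, lon) fixes into roughly 100 m grid cells and count visits."""
    counts = Counter()
    for lat, lon in points:
        counts[(round(lat / cell) * cell, round(lon / cell) * cell)] += 1
    return counts

# Invented GPS fixes: the same short perimeter run logged daily for a month.
trace = [(34.6020 + 0.0001 * i, 69.2101) for i in range(5)] * 30
# All 150 fixes collapse into a single hot cell, pinpointing the route.
hottest_cell, visits = heatmap(trace).most_common(1)[0]
```

Any single day's track looks innocuous; the repetition is what creates the signature, which is why the DOD memorandum bars geolocation features in operational areas outright rather than relying on per-upload screening.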

In the guidance on location data, the NSA conceded:

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DOD system users.

Thereafter, the agency lays out how mobile device users may minimize their exposure and the tradeoffs for disabling location data for certain apps and for entire operating systems, to the extent that is possible.

NSA noted that “[d]ifferent users accept different levels of risk regarding location tracking, but most users have some level of concern…[and] [t]he following general mitigations can be used for those with location sensitivities:

  • Disable location services settings on the device.
  • Disable radios when they are not actively in use: disable Bluetooth (BT) and turn off Wi-Fi if these capabilities are not needed. Use Airplane Mode when the device is not in use. Ensure BT and Wi-Fi are disabled when Airplane Mode is engaged.
  • Apps should be given as few permissions as possible:
    • Set privacy settings to ensure apps are not using or sharing location data.
    • Avoid using apps related to location if possible, since these apps inherently expose user location data. If used, location privacy/permission settings for such apps should be set to either not allow location data usage or, at most, allow location data usage only while using the app. Examples of apps that relate to location are maps, compasses, traffic apps, fitness apps, apps for finding local restaurants, and shopping apps.
  • Disable advertising permissions to the greatest extent possible:
    • Set privacy settings to limit ad tracking, noting that these restrictions are at the vendor’s discretion.
    • Reset the advertising ID for the device on a regular basis. At a minimum, this should be on a weekly basis.
    • Turn off settings (typically known as Find My or Find My Device settings) that allow a lost, stolen, or misplaced device to be tracked.
    • Minimize web-browsing on the device as much as possible, and set browser privacy/permission location settings to not allow location data usage.
    • Use an anonymizing Virtual Private Network (VPN) to help obscure location.
    • Minimize the amount of data with location information that is stored in the cloud, if possible.
  • If it is critical that location is not revealed for a particular mission, consider the following recommendations:
    • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
    • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
    • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.
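The cloud-minimization advice above can also be applied programmatically: before letting a track file sync, its points can be screened against coordinates that must stay private using the standard haversine great-circle distance. A minimal sketch, assuming location data as simple (lat, lon) tuples; the site coordinates and 1 km threshold are hypothetical:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def near_sensitive(points, site, km=1.0):
    """Return the track points that fall within `km` of the sensitive site."""
    return [p for p in points if haversine_km(*p, *site) <= km]

site = (38.9517, -77.1467)                     # hypothetical sensitive location
track = [(38.9517, -77.1467), (38.90, -77.04)]  # one fix at the site, one ~10 km away
flagged = near_sensitive(track, site)
```

This is a screening aid, not a mitigation in itself: as the NSA notes, if a device has been compromised, filtering what is uploaded does nothing about what the device itself transmits.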

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Further Reading and Other Developments (6 June)

Other Developments

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

  • A number of tech trade groups are asking the House Appropriations Committee’s Commerce-Justice-Science Subcommittee “to direct the National Institute of Standards and Technology (NIST) to create guidelines that help companies navigate the technical and ethical hurdles of developing artificial intelligence.” They argued:
    • A NIST voluntary framework-based consensus set of best practices would be pro-innovation, support U.S. leadership, be consistent with NIST’s ongoing engagement on AI industry consensus standards development, and align with U.S. support for the OECD AI principles as well as the draft Memorandum to Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications.”
  • The Department of Defense (DOD) “named seven U.S. military installations as the latest sites where it will conduct fifth-generation (5G) communications technology experimentation and testing. They are Naval Base Norfolk, Virginia; Joint Base Pearl Harbor-Hickam, Hawaii; Joint Base San Antonio, Texas; the National Training Center (NTC) at Fort Irwin, California; Fort Hood, Texas; Camp Pendleton, California; and Tinker Air Force Base, Oklahoma.”  The DOD explained “[t]his second round, referred to as Tranche 2, brings the total number of installations selected to host 5G testing to 12…[and] builds on DOD’s previously-announced 5G communications technology prototyping and experimentation and is part of a 5G development roadmap guided by the Department of Defense 5G Strategy.”
  • The Federal Trade Commission announced a $150,000 settlement with “HyperBeard, Inc. [which] violated the Children’s Online Privacy Protection Act Rule (COPPA Rule) by allowing third-party ad networks to collect personal information in the form of persistent identifiers to track users of the company’s child-directed apps, without notifying parents or obtaining verifiable parental consent.”
  • The National Institute of Standards and Technology (NIST) released Special Publication 800-133 Rev. 2, Recommendation for Cryptographic Key Generation, that “discusses the generation of the keys to be used with the approved cryptographic algorithms…[which] are either 1) generated using mathematical processing on the output of approved Random Bit Generators (RBGs) and possibly other parameters or 2) generated based on keys that are generated in this fashion.”
  • United States Trade Representative (USTR) announced “investigations into digital services taxes that have been adopted or are being considered by a number of our trading partners.” These investigations are “with respect to Digital Services Taxes (DSTs) adopted or under consideration by Austria, Brazil, the Czech Republic, the European Union, India, Indonesia, Italy, Spain, Turkey, and the United Kingdom.” The USTR is accepting comments until 15 July.
  • NATO’s North Atlantic Council released a statement “concerning malicious cyber activities” that have targeted medical facilities stating “Allies are committed to protecting their critical infrastructure, building resilience and bolstering cyber defences, including through full implementation of NATO’s Cyber Defence Pledge.” NATO further pledged “to employ the full range of capabilities, including cyber, to deter, defend against and counter the full spectrum of cyber threats.”
  • The Public Interest Declassification Board (PIDB) released “A Vision for the Digital Age: Modernization of the U.S. National Security Classification and Declassification System” that “provides recommendations that can serve as a blueprint for modernizing the classification and declassification system…[for] there is a critical need to modernize this system to move from the analog to the digital age by deploying advanced technology and by upgrading outdated paper-based policies and practices.”
  • In a Department of State press release, a Declaration on COVID-19, the G7 Science and Technology Ministers stated their intentions “to work collaboratively, with other relevant Ministers to:
    • Enhance cooperation on shared COVID-19 research priority areas, such as basic and applied research, public health, and clinical studies. Build on existing mechanisms to further priorities, including identifying COVID-19 cases and understanding virus spread while protecting privacy and personal data; developing rapid and accurate diagnostics to speed new testing technologies; discovering, manufacturing, and deploying safe and effective therapies and vaccines; and implementing innovative modeling, adequate and inclusive health system management, and predictive analytics to assist with preventing future pandemics.
    • Make government-sponsored COVID-19 epidemiological and related research results, data, and information accessible to the public in machine-readable formats, to the greatest extent possible, in accordance with relevant laws and regulations, including privacy and intellectual property laws.
    • Strengthen the use of high-performance computing for COVID-19 response. Make national high-performance computing resources available, as appropriate, to domestic research communities for COVID-19 and pandemic research, while safeguarding intellectual property.
    • Launch the Global Partnership on AI, envisioned under the 2018 and 2019 G7 Presidencies of Canada and France, to enhance multi-stakeholder cooperation in the advancement of AI that reflects our shared democratic values and addresses shared global challenges, with an initial focus that includes responding to and recovering from COVID-19. Commit to the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.
    • Exchange best practices to advance broadband connectivity; minimize workforce disruptions, support distance learning and working; enable access to smart health systems, virtual care, and telehealth services; promote job upskilling and reskilling programs to prepare the workforce of the future; and support global social and economic recovery, in an inclusive manner while promoting data protection, privacy, and security.
  • The Digital, Culture, Media and Sport Committee’s Online Harms and Disinformation Subcommittee held a virtual meeting, which “is the second time that representatives of the social media companies have been called in by the DCMS Sub-committee in its ongoing inquiry into online harms and disinformation following criticism by Chair Julian Knight about a lack of clarity of evidence and further failures to provide adequate answers to follow-up correspondence.” Before the meeting, the Subcommittee sent a letter to Twitter, Facebook, and Google and received responses. The Subcommittee heard testimony from:
    • Facebook Head of Product Policy and Counterterrorism Monika Bickert
    • YouTube Vice-President of Government Affairs and Public Policy Leslie Miller
    • Google Global Director of Information Policy Derek Slater
    • Twitter Director of Public Policy Strategy Nick Pickles
  • Senators Ed Markey (D-MA), Ron Wyden (D-OR) and Richard Blumenthal (D-CT) sent a letter to AT&T CEO Randall Stephenson “regarding your company’s policy of not counting use of HBO Max, a streaming service that you own, against your customers’ data caps.” They noted “[a]lthough your company has repeatedly stated publicly that it supports legally binding net neutrality rules, this policy appears to run contrary to the essential principle that in a free and open internet, service providers may not favor content in which they have a financial interest over competitors’ content.”
  • The Brookings Institution released what it considers a path forward on privacy legislation and held a webinar on the report with Federal Trade Commissioner (FTC) Christine Wilson and former FTC Commissioner and now Microsoft Vice President and Deputy General Counsel Julie Brill.

Further Reading

  • “Google: Overseas hackers targeting Trump, Biden campaigns” – Politico. In what is the latest in a series of attempted attacks, Google’s Threat Analysis Group announced this week that People’s Republic of China affiliated hackers tried to gain access to the campaign of former Vice President Joe Biden and Iranian hackers tried the same with President Donald Trump’s reelection campaign. The group referred the matter to the federal government but said the attacks were not successful. An official from the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) remarked “[i]t’s not surprising that a number of state actors are targeting our elections…[and] [w]e’ve been warning about this for years.” It is likely the usual suspects will continue to try to hack into both presidential campaigns.
  • “Huawei builds up 2-year reserve of ‘most important’ US chips” – Nikkei Asian Review. The Chinese tech giant has been spending billions of dollars stockpiling United States’ (U.S.) chips, particularly server chips from Intel and programmable chips from Xilinx, the type that is hard to find elsewhere. This latter chip maker is seen as particularly crucial to both the U.S. and the People’s Republic of China (PRC) because it partners with the Taiwan Semiconductor Manufacturing Company, the entity persuaded by the Trump Administration to announce plans for a plant in Arizona. Shortly after the arrest of Huawei CFO Meng Wanzhou in 2018, the company began these efforts and spent almost $24 billion USD last year stockpiling crucial U.S. chips and other components.
  • “GBI investigation shows Kemp misrepresented election security” – Atlanta-Journal Constitution. Through freedom of information requests, the newspaper obtained records from the Georgia Bureau of Investigation (GBI) on its investigation at the behest of then Secretary of State Brian Kemp, requested days before the gubernatorial election he narrowly won. At the time, Kemp claimed hackers connected to the Democratic Party were trying to get into the state’s voter database, when it was Department of Homeland Security personnel running a routine scan for vulnerabilities Kemp’s office had agreed to months earlier. The GBI ultimately determined Kemp’s claims did not merit a prosecution. Moreover, even though Kemp’s staff at the time continues to deny these findings, the site did have vulnerabilities, including one turned up by a software company employee.
  • “Trump, Biden both want to repeal tech legal protections — for opposite reasons” – Politico. Former Vice President Joe Biden (D) wants to revisit Section 230 because online platforms are not doing enough to combat misinformation, in his view. Biden laid out his views on this and other technology matters for the editorial board of The New York Times in January, at which point he said Facebook should have to face civil liability for publishing misinformation. Given Republican and Democratic discontent with Section 230 and the social media platforms, there may be a possibility legislation is enacted to limit this shield from litigation.
  • “Wearables like Fitbit and Oura can detect coronavirus symptoms, new research shows” – The Washington Post. Perhaps wearable health technology is a better approach to determining when a person has contracted COVID-19 than contact tracing apps. A handful of studies are producing positive results, but these studies have not yet undergone the peer review process. Still, these devices may be able to detect disequilibrium in one’s system as compared to a baseline, suggesting an infection and a need for a test. This article, however, did not explore possible privacy implications of sharing one’s personal health data with private companies.
  • “Singapore plans wearable virus-tracing device for all” – Reuters. For less than an estimated $10 USD per unit, Singapore will soon introduce wearable devices to better track contacts and fight COVID-19, in what may be a sign that the city-state has given up on its contact tracing app, TraceTogether. It is not clear whether everyone will be required to wear one or what privacy and data protections will be in place.
  • “Exclusive: Zoom plans to roll out strong encryption for paying customers” – Reuters. In the same vein as Zoom allowing paying customers to choose where their calls are routed (e.g. paying customers in the United States could choose a different region with lesser surveillance capabilities), Zoom will soon offer stronger security for paying customers. Of course, should Zoom’s popularity during the pandemic solidify into a dominant competitive position, this new policy of offering end-to-end encryption that the company cannot crack would likely rouse the ire of the governments of the Five Eyes nations. These plans breathe further life into the views of those who see a future in which privacy and security are commodities to be bought, and those unable or unwilling to afford them will enjoy neither. Nonetheless, the company may still face a Federal Trade Commission (FTC) investigation into its apparently inaccurate claims that calls were encrypted, which may have violated Section 5 of the FTC Act, along with similar investigations by other nations.
  • “Russia and China target U.S. protests on social media” – Politico. Largely eschewing doctored material, the Russian Federation and the People’s Republic of China (PRC) are using social media platforms to further drive dissension and division in the United States (U.S.) during the protests by amplifying the messages and points of view of Americans, according to an analysis by one think tank. For example, some PRC officials have been tweeting out “Black Lives Matter” and amplifying videos purporting to show police violence. The goal is to fan the flames and further weaken Washington. Thus far, the American government and the platforms themselves have not had much of a public response. Additionally, this represents a continued trend of the PRC in seeking to sow discord in the U.S., whereas before this year its use of social media and disinformation tended to be confined to issues of immediate concern to Beijing.
  • “The DEA Has Been Given Permission To Investigate People Protesting George Floyd’s Death” – BuzzFeed News. The Department of Justice (DOJ) used a little-known section of the powers delegated to the agency to task the Drug Enforcement Agency (DEA) with conducting “covert surveillance” to help police maintain order during the protests following the killing of George Floyd, among other duties. BuzzFeed News was given the two-page memorandum effectuating this expansion of the DEA’s responsibilities beyond drug crimes, most likely by agency insiders who oppose the memorandum. These efforts could include use of authority granted to the agency to engage in “bulk collection” of some information, a practice the DOJ Office of the Inspector General (OIG) found significant issues with, including the lack of legal analysis on the scope of the sprawling collection practices.
  • “Cops Don’t Need GPS Data to Track Your Phone at Protests” – Gizmodo. Underlying this extensive rundown of the types of data one’s phone leaks, which are vacuumed up by a constellation of entities, is the fact that more law enforcement agencies are buying or accessing these data because the Fourth Amendment’s protections do not apply to private parties giving the government information.
  • “Zuckerberg Defends Approach to Trump’s Facebook Posts” – The New York Times. Unlike Twitter, Facebook opted not to flag President Donald Trump’s posts last week about the protests arising from George Floyd’s killing, the same messages Twitter found to be glorifying violence. CEO Mark Zuckerberg reportedly deliberated at length with senior leadership before deciding the posts did not violate the platform’s terms of service, a decision roundly criticized by Facebook employees, some of whom staged a virtual walkout on 1 June. In a conference call, Zuckerberg faced numerous questions about why the company does not respond more forcefully to posts that are inflammatory or untrue. His answers that Facebook does not act as an arbiter of truth were not well received among many employees.
  • “Google’s European Search Menu Draws Interest of U.S. Antitrust Investigators” – The New York Times. Allegedly, Department of Justice (DOJ) antitrust investigators are keenly interested in the system Google lives under in the European Union (EU), where Android users are now prompted to select a default search engine rather than Google being preset as the default. This system was put in place in response to the EU’s €4.34 billion fine in 2018 for imposing “illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant position in general internet search.” This may be seen as a way to address competition issues without breaking up Google, as some have called for. However, Google is conducting monthly auctions among the other search engines to be one of the three choices given to EU consumers, which allows Google to reap additional revenue.


NTIA Petitions FCC To Reconsider Ligado Decision

The Trump Administration is asking the FCC to reverse its decision to allow a company to use the L-Band for a wireless system that opponents claim will endanger GPS networks.  


This week, the National Telecommunications and Information Administration (NTIA), a component agency of the Department of Commerce, filed two petitions with the Federal Communications Commission (FCC), asking the latter agency to stay its decision allowing Ligado to proceed with wireless service using a satellite-terrestrial network in the L-Band, a decision opposed by a number of Trump Administration agencies and key Congressional stakeholders. They argue the order would allow Ligado to set up a system that would interfere with the Department of Defense’s (DOD) Global Positioning System (GPS) and civilian federal agencies’ applications of GPS as well. If the FCC denies these petitions, it is possible NTIA could file suit in federal court to block the FCC’s order and Ligado’s deployment, and it is also conceivable Congress could fold language into the FY 2021 National Defense Authorization Act, or pass standalone legislation, to block the FCC.

The NTIA stated in its press release that it “petitioned the Federal Communications Commission (FCC) to reconsider its Order and Authorization that conditionally granted license modification applications filed by Ligado Networks LLC…[that] permits Ligado to provide terrestrial wireless services that threaten to harm federal government users of the Global Positioning System (GPS) along with a variety of other public and private stakeholders.”

In the petition for a stay, NTIA asked that “Ligado Networks LLC’s (Ligado’s) mobile satellite service (MSS) license modification applications for ancillary terrestrial operations” be paused until the agency’s petition for reconsideration is decided by the FCC because of “executive branch concerns of harmful interference to federal government and other GPS devices.”

In the petition for reconsideration, the NTIA argued it “focuses on the problems in the Ligado Order that are uniquely related to the interests of Department of Defense (DOD) and other federal agencies and their mission-critical users of GPS.” The NTIA added “that the Commission failed to consider the major economic impact its decision will have on civilian GPS users and the American economy…[and] [a]s the lead civil agency for GPS, DOT explained…Ligado’s proposed operations would disrupt a wide range of civil GPS receivers owned and operated by emergency first responders, among others.”

NTIA made the following arguments in its petition:

  • The Ligado Order failed to adequately consider and give appropriate weight to important and valid executive branch concerns about harmful interference to GPS.
  • None of Ligado’s latest mitigation proposals, nor the conditions based on them, have been tested or evaluated by any independent party…[and] [a] more scientific way of resolving these technical disputes could be accomplished through further joint FCC-executive branch or independent testing based on Ligado’s actual network and base station parameters.
  • The license conditions imposed on Ligado will not adequately mitigate the risk of harmful interference to federal GPS devices, will shift the burden of fixing such interference to federal users, and are otherwise impractical for addressing actual impacts to national security systems. In light of the large number of federal GPS devices that potentially would be impacted by Ligado’s network, the FCC conditions, even if modified, will be a high-cost, time consuming effort for Ligado and federal agencies. As written, the condition requiring the repair or replacement of government receivers, is impractical, infeasible, and potentially illegal.

In late April, the FCC’s “decision authorize[d] Ligado to deploy a low-power terrestrial nationwide network in the 1526-1536 MHz, 1627.5-1637.5 MHz, and 1646.5-1656.5 MHz bands that will primarily support Internet of Things (IoT) services.” The agency argued the order “provides regulatory certainty to Ligado, ensures adjacent band operations, including Global Positioning System (GPS), are sufficiently protected from harmful interference, and promotes more efficient and effective use of [the U.S.’s] spectrum resources by making available additional spectrum for advanced wireless services, including 5G.”

Defense and other civilian government stakeholders remained unconvinced. Also, in late April, the chairs and ranking members of the Armed Services Committees penned an op-ed, in which they claimed “the [FCC] has used the [COVID-19] crisis, under the cover of darkness, to approve a long-stalled application by Ligado Networks — a proposal that threatens to undermine our GPS capabilities, and with it, our national security.” Chairs James Inhofe (R-OK) and Adam Smith (D-WA) and Ranking Members Jack Reed (D-RI) and Mac Thornberry (R-TX) asserted:

  • So, we wanted to clarify things: domestic 5G development is critical to our economic competitiveness against China and for our national security. The Pentagon is committed to working with government and industry to share mid-band spectrum where and when it makes sense to ensure rapid roll-out of 5G.
  • The problem here is that Ligado’s planned usage is not in the prime mid-band spectrum being considered for 5G — and it will have a significant risk of interference with GPS reception, according to the National Telecommunications and Information Administration (NTIA). The signals interference Ligado’s plan would create could cost taxpayers and consumers billions of dollars and require the replacement of current GPS equipment just as we are trying to get our economy back on its feet quickly — and the FCC has just allowed this to happen.

The Ligado application was seen as so important that the first hearing of the Senate Armed Services Committee held after the beginning of the COVID-19 pandemic was on this issue. Not surprisingly, the DOD explained at some length the risks of Ligado’s satellite-terrestrial wireless system as it sees them. Under Secretary of Defense for Research and Engineering Michael Griffin asserted at the 6 May hearing:

  • The U.S. Department of Transportation (DOT) conducted a testing program developed over multiple years with stakeholder involvement, evaluating 80 consumer-grade navigation, survey, precision agriculture, timing, space-based, and aviation GPS receivers. This test program was conducted in coordination with DoD testing of military receivers. The results, as documented in the DoT “Adjacent Band Compatibility” study released in March 2018, demonstrated that even very low power levels from a terrestrial system in the adjacent band will overload the very sensitive equipment required to collect and process GPS signals. Also, many high precision receivers are designed to receive Global Navigation Satellite System (GNSS) signals not only in the 1559 MHz to 1610 MHz band, but also receive Mobile Satellite Service (MSS) signals in the 1525 MHz to 1559 MHz band to provide corrections to GPS/GNSS to improve accuracy. With the present and future planned ubiquity of base stations for mobile broadband use, the use of GPS in entire metropolitan areas would be effectively blocked. That is why every government agency having any stake in GPS, as well as dozens of commercial entities that will be harmed if GPS becomes unreliable, opposed the FCC’s decision.
  • There are two principal reasons for the Department’s opposition to Ligado’s proposal. The first and most obvious is that we designed and built GPS for reasons of national security, reasons which are at least as valid today as when the system was conceived. The second, less well-known, is that the DoD has a statutory responsibility to sustain and protect the system. Quoting from 10 USC 2281, the Secretary of Defense “…shall provide for the sustainment and operation of the GPS Standard Positioning Service for peaceful civil, commercial, and scientific uses…” and “…may not agree to any restriction of the GPS System proposed by the head of a department or agency of the United States outside DoD that would adversely affect the military potential of GPS.”

A few weeks ago, 32 Senators wrote the FCC expressing their concern that the “Order does not adequately protect adjacent band operations – including those related to GPS and satellite communications – from harmful interference that would impact countless commercial and military activities.” They also took issue with “the hurried nature of the circulation and consideration of the Order,” which they claimed occurred during “a national crisis” and “was not conducive to addressing the many technical concerns raised by affected stakeholders.” Given that nearly one-third of the Senate signed the letter, this may demonstrate the breadth of opposition in Congress to the Ligado order.

Earlier this week, the Senate Armed Services Committee held a conference call with “FCC officials,” and Chair James Inhofe (R-OK) issued a press release, claiming “I was concerned when I asked the FCC officials on the call if they had convinced any other agency this was good policy or if they had made an attempt to receive a classified briefing on the effects of their decision and their answer was no.”

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

“Paper” Hearing on COVID-19 and Big Data

On April 9, the Senate Commerce, Science, and Transportation Committee held a virtual hearing of sorts, as all the proceedings would occur through the written word, with the chair, ranking member, and witnesses all submitting statements. Then all the members were to submit written questions to the witnesses, who would have 96 business hours, or what appears to be 12 business days, to respond. The questions posed to each witness by each member of the committee have been posted on the hearing webpage as well.

In his written statement, Chair Roger Wicker (R-MS) stated “[a]s the public and private sectors race to develop a vaccine for [COVID-19], government officials and health-care professionals have turned to what is known as “big data” to help fight the global pandemic.” He stated “[i]n recognition of the value of big data, Congress recently authorized the CDC, through the bipartisan coronavirus relief package, to develop a modern data surveillance and analytics system,” a reference to the $500 million appropriated “for public health data surveillance and analytics infrastructure modernization.” Wicker said “[t]his system is expected to use public health data inputs – including big data – to track the coronavirus more effectively and reduce its spread.” He added “[s]tate governments are also using big data to monitor the availability of hospital resources and manage supply chains for the distribution of masks and other personal protective medical equipment.”

Wicker remarked,

  • Recent media reports revealed that big data is being used by the mobile advertising industry and technology companies in the United States to track the spread of the virus through the collection of consumer location data.  This location data is purported to be in aggregate form and anonymized so that it does not contain consumers’ personally identifiable information.  It is intended to help researchers identify where large crowds are forming and pinpoint the source of potential outbreaks.  The data may also help predict trends in the transmission of COVID-19 and serve as an early warning system for individuals to self-isolate or quarantine.
  • In addition to these uses, consumer location data is being analyzed to help track the effectiveness of social distancing and stay-at-home guidelines.  Data scientists are also seeking ways to combine artificial intelligence and machine learning technologies with big data to build upon efforts to track patterns, make diagnoses, and identify other environmental or geographic factors affecting the rate of disease transmission.
  • The European Union is turning to big data to stop the spread of the illness as well. Italy, Germany, and others have sought to obtain consumer location data from telecommunications companies to track COVID-19.  To protect consumer privacy, EU member states have committed to using only anonymized and aggregate mobile phone location data.  Although the EU’s General Data Protection Regulation does not apply to anonymized data, EU officials have committed to deleting the data once the public health crisis is over.  

Wicker asserted, “[t]he potential benefits of big data to help contain the virus and limit future outbreaks could be significant.” He stated “[r]educing privacy risks begins with understanding how consumers’ location data – and any other information – is being collected when tracking compliance with social distancing measures.” He contended that “[e]qually important is understanding how that data is anonymized to remove all personally identifiable information and prevent individuals from being re-identified…[and] I look forward to hearing from our witnesses about how consumer privacy can be protected at every stage of the data collection process.”

Wicker stated, “I also look forward to exploring how consumers are notified about the collection of their location information and their ability to control or opt out of this data collection if desired.” He explained “[g]iven the sensitivity of geolocation data, increased transparency into these practices will help protect consumers from data misuse and other unwanted or unexpected data processing.” Wicker added “I hope to learn more about how location data is being publicly disclosed, with whom it is being shared, and what will be done with any identifiable data at the end of this global pandemic.”

Wicker concluded,

Strengthening consumer data privacy through the development of a strong and bipartisan federal data privacy law has been a priority for this Committee.  The collection of consumer location data to track the coronavirus, although well intentioned and possibly necessary at this time, further underscores the need for uniform, national privacy legislation.  Such a law would provide all Americans with more transparency, choice, and control over their data, as well as ways to keep businesses more accountable to consumers when they seek to use their data for unexpected purposes.  It would also provide certainty and clear, workable rules of the road for businesses in all 50 states, and preserve Americans’ trust and confidence that their data will be protected and secure no matter where they live.

Ranking Member Maria Cantwell (D-WA) asserted, “[r]ight now, we must ensure there are enough hospital beds, enough personal protective equipment, and enough ventilators and medical supplies to withstand the full force of this virus as it peaks in communities across our country” in her opening statement. She stated, “[w]e need robust testing, and as the virus finally fades, we’ll need to deploy contact tracing systems so that we can respond quickly to outbreaks and stamp it out for good.” Cantwell claimed, “[d]ata provides incredible insights that can assist us in these efforts, and we should be doing everything possible to harness information in a manner that upholds our values.” She remarked, “[t]o gain and keep the public’s trust about the use of data, a defined framework should be maintained to protect privacy rights…[that] at a minimum, should ensure that information is used:

(1) for a specific limited purpose, with a measurable outcome and an end date,

(2) in a fully transparent manner with strong consumer rights, and

(3) under strict accountability measures.

Cantwell stated, “[w]e must always focus on exactly how we expect technology to help, and how to use data strategically to these ends…[and] [w]e must resist hasty decisions that will sweep up massive, unrelated data sets.” She further argued, “we must guard against vaguely defined and non-transparent government initiatives with our personal data…[b]ecause rights and data surrendered temporarily during an emergency can become very difficult to get back.”

Cantwell expressed her belief that “there are three advantages to data that need to be harnessed at this time: the power to predict, the power to discover, and the power to persuade.” She remarked, “[d]ata helps us build models based on what has come before…[and] [w]e can use these models to identify patterns to help us prepare for what might be next, whether those are predictions of where disease is spreading, estimations of community needs, or coordination of scarce resources.” Cantwell said, “[l]arge publicly available data sets also help us identify patterns and solutions that cannot be seen with a more fragmented, less complete picture.” She asserted, “[d]iscoveries and insights that once were hidden can now be brought to light with the help of advanced data analysis techniques.” She said, “[a]nd when there are vital messages to share, data allows us to get those messages out to everyone who needs to hear them…[and] [m]essages about social distancing, exposure risks, and treatment options are just a few of the many types of essential communications that can be informed and enhanced by data analysis.”

Cantwell summed up:

  • The world is now confronting a challenge of tremendous urgency and magnitude. At some point, we will be opening up our society and our economy again. First, we’re going to need robust testing. And when that time comes, we’re also going to need technology, powered by data, to help us safely transition back to a more normal way of life.
  • Our job in Congress is to help provide the tools needed to turn back this disease, and to understand how we marshal innovation and technology in a responsible way to respond to this challenge, both in the short term and for what we are starting to understand may be a very long fight ahead.
  • We are only at the beginning of this fight. We urgently need to plan for the days and, yes, the years ahead; we must discover, test, and distribute new cures faster than ever before; we need our greatest minds, wherever they may be, to collaborate and work together; and we must build unity because ultimately, that is our greatest strength.

University of Washington Professor of Law Ryan Calo explained

In this testimony, I will address some of the ways people and institutions propose to use data analytics and other technology to respond to coronavirus. The first set of examples involves gaining a better understanding of the virus and its effects on American life. By and large I support these efforts; the value proposition is clear and the privacy harms less pronounced. The second set of examples involves the attempt to track the spread of COVID-19 at an individual level using mobile software applications (“apps”). I am more skeptical of this approach as I fear that it threatens privacy and civil liberties while doing little to address the pandemic. Finally, I conclude with the recommendation that, however we leverage data to fight this pandemic, policymakers limit use cases to the emergency itself, and not permit mission creep or downstream secondary uses that surprise the consumer.

Calo said

I am not opposed to leveraging every tool in our technical arsenal to address the current pandemic. We are facing a near unprecedented global crisis. I note in conclusion that there will be measures that are appropriate in this context, but not beyond it. Americans and their representatives should be vigilant that whatever techniques we use today to combat coronavirus do not wind up being used tomorrow to address other behaviors or achieve other goals. To paraphrase the late Justice Robert Jackson, a problem with emergency powers is that they tend to kindle emergencies.

Calo asserted

In national security, critics speak in terms of mission creep, as when vast surveillance powers conferred to fight terrorism end up being used to enforce against narcotics trafficking or unlawful immigration. In consumer privacy, much thought is given to the prospect of secondary use, i.e., the possibility that data collected for one purpose will be used by a company to effectuate a second, more questionable purpose without asking the data subject for additional permissions. No consumer would or should expect that the absence of certain antibodies in their blood, gathered for the purpose of tracing a lethal disease, could lead to higher health insurance premiums down the line. There is also a simpler danger that Americans will become acclimated to more invasive surveillance partnerships between industry and government. My hope is that policymakers will expressly ensure that any accommodations privacy must concede to the pandemic will not outlive the crisis.

ACT | The App Association Senior Director for Public Policy Graham Dufault explained some of the big data privacy concerns in the COVID-19 crisis:

  • Creating and Using Big Data Sets Consistent with Privacy Expectations. Beyond the Taiwan example described above, other nations are engaging in their own versions of highly targeted surveillance. Israel is tracking citizens’ movements using smartphone location data and even sending text messages to people who were recently near a person known to have been infected with COVID-19, with an order to self-quarantine. While Israeli courts blocked the use of this data to enforce quarantines, even the use of it to send unsolicited text messages and swiftly apply impromptu quarantines raises some questions.
  • By contrast, in the United States, private companies are leading the charge on big data sets about location, with persistent privacy oversight by policymakers. For example, Google is producing reports on foot traffic patterns using smartphone location data. However, there are limitations to the reports because they only use high-level data indicating a percentage decrease or increase in foot traffic in six different types of locations (e.g., workplaces, retail, and recreation sites) over a given period of time. Their vagueness is in part the result of federal and state privacy laws, which generally prohibit deceptive practices, including the disclosure of private data in a manner that is inconsistent with a company’s own privacy policies or where the individual never consented to the disclosure. News articles variously describe these kinds of high-level reports as tracking compliance with stay-at-home orders, but they only do so in an indirect sense and certainly not to the degree to which Taiwan or Israel track compliance, which involves the use of individual location data.
  • With Location Data, Privacy is Possible. Ideally, federal, state, and local governments could enact targeted measures that significantly stem the spread of COVID-19 in high-risk areas and at high-risk times, while enabling certain parts of the economy to open back up where there is mitigation of risk—all with anonymous data. The Private Kit app takes privacy protective steps that may help provide both actionable data and effective anonymity. For example, when a user downloads the app, it clarifies that location data stays on the user’s phone and does not go to a centralized server. Instead, when turned on, the app tracks the user’s location and stores it in an encrypted format—which it apparently sends, again encrypted, directly to other phones when queried. Theoretically, it would be difficult for any single user of the app to discern the identity of the person signified by one of the dots on the map. The problem Private Kit encounters is whether enough people will download this app quickly enough for it to be useful for policymakers and users. Similar ideas, like NextTrace, have also cropped up, but the effectiveness of these tools may be limited if a single, popular choice does not soon emerge.
  • The COVID-19 Pandemic Underscores the Need for a National Privacy Law. National privacy legislation should ensure companies are using default privacy measures like those described above. Animating some of the privacy concerns policymakers have expressed about the use of big data to address the COVID-19 pandemic is a (not entirely unfair) lack of trust in how tech-driven companies are using sensitive personal data, especially location data. While many of us worry that governmental intrusions to address the COVID-19 pandemic would be difficult to pull back, policymakers also worry that corporate surveillance efforts could later turn into unexpected uses of sensitive data and exposure to additional risk of unauthorized access. The passage of a strong, national privacy framework could help alleviate the stated concerns with private sector use of data.
  • Healthcare Data Remains Siloed. Through the Connected Health Initiative (CHI), we advocate for patients to be able to share their healthcare data with digital health companies that can help them make use of it. But in general, electronic health records (EHR) companies decline to transfer that data except inside their own network of providers and business associates (BAs), citing Health Insurance Portability and Accountability Act (HIPAA) compliance concerns. The problem with this, of course, is that HIPAA is supposed to make data portable, as the name suggests. And EHRs have emerged as a chokepoint for healthcare data that patients should otherwise be able to use as they wish. Besides harming big data competencies, outdated healthcare policies have also directly harmed patients. It would be a great tragedy if we yanked telehealth and remote physiologic monitoring (RPM) away from patients just as the general public begins to realize their potential. Certainly, the ability to rely on telehealth (defined in Medicare as live voice or video visits between patients and caregivers) is a sudden necessity during the pandemic as caregivers must screen and monitor patients from a distance. Avoiding such basic communications technologies because of fraud or abuse concerns when public health demands patients stay at home would be nothing short of a catastrophic win for red tape. What surprises many of us, however, is just how unprepared our relative inability to make use of digital health has made us for pandemics like COVID-19.

Interactive Advertising Bureau Executive Vice President for Public Policy Dave Grimaldi stated

While self-regulation has been a useful mechanism to encourage responsible data use, federal leadership is now needed to ensure that robust consumer privacy protections apply consistently throughout the country. The time is right for the creation of a new paradigm for data privacy in the United States. To this end, IAB is a key supporter of Privacy for America, a broad industry coalition of top trade organizations and companies representing a wide cross-section of the American economy that advocates for federal omnibus privacy legislation. Privacy for America has released a detailed policy framework to provide members of Congress with a new option to consider as they develop data privacy legislation for the United States. Participants in Privacy for America have met with leaders of Congress, the FTC, the Department of Commerce, the White House, and other key stakeholders to discuss the ways the framework protects consumers while also ensuring that beneficial uses of data can continue to provide vast benefits to the economy and mankind.

Grimaldi claimed

The Privacy for America framework would prohibit, rather than allow consent for, a range of practices that make personal data vulnerable to misuse. Many of these prohibitions would apply not only to companies that engage in these harmful practices directly, but to suppliers of data who have reason to know that the personal information will be used for these purposes.

  • Eligibility Determinations. Determining whether individuals are eligible for benefits like a job or credit are among the most important decisions that companies make. Although many of these decisions are currently regulated by existing sectoral laws (e.g., the Fair Credit Reporting Act), companies can easily purchase data on the open market to evade compliance with these laws. Privacy for America’s framework would prevent this abuse by banning the use of data to make eligibility decisions—about jobs, credit, insurance, healthcare, education, financial aid, or housing—outside these sectoral laws, thereby bolstering and clarifying the protections already in place. It also would provide new tools to regulators to cut off the suppliers of data that undermine these protections. To the extent that companies are unsure about whether a practice is permitted under existing law, they would be able to seek guidance from the FTC.
  • Discrimination. The widespread availability of detailed personal information has increased concerns that this data will be used to discriminate against individuals. The new framework envisioned by Privacy for America would supplement existing anti-discrimination laws by banning outright a particularly pernicious form of discrimination—using data to charge higher prices for goods or services based on personal traits such as race, color, religion, national origin, sexual orientation, or gender identity. As discussed below, the framework also would allow individuals to opt out of data personalization, which can contribute to discrimination.
  • Fraud and Deception. For decades, the FTC and the states have pursued cases against companies that engage in fraud and deception. The new framework would focus specifically on the use and supply of data for these purposes. Thus, it would ban a range of fraudulent practices designed to induce the disclosure of personal information and, more generally, material misrepresentations about data privacy and security.
  • Stalking. In recent years, the proliferation of data has made it easier to track the location and activities of individuals for use in stalking. Of note, mobile apps designed for this very purpose have been identified in the marketplace. The framework would outlaw the use of personal information for stalking or other forms of substantial harassment, and would hold these types of apps accountable.
  • Use of Sensitive Data Without Express Consent. Consumers care most about their sensitive data, and companies should have an obligation to protect it. The new framework would prohibit companies from obtaining a range of sensitive information— including health, financial, biometric, and geolocation information, as well as call records, private emails, and device recording and photos—without obtaining consumers’ express consent.
  • Special Protections for Individuals Over 12 and Under 16 (Tweens). The Privacy for America framework includes a robust set of safeguards for data collected from tweens, an age group that needs protection but is actively engaged online and not subject to constant parental oversight. Specifically, the framework would prohibit companies from transferring tween data to third parties when they have actual knowledge of age. It also would ban payment to tweens for personal data, except under a contract to which a parent or legal guardian is a party. Finally, companies would be required to implement data eraser requirements allowing individuals to delete data posted online when they were tweens.

Center for Democracy and Technology Data and Privacy Project Director Michelle Richardson advised

When deciding what types of data practices are appropriate, Congress should remember that privacy is a balancing of equities. We no longer think of privacy as an on-off switch, or something that can be dismissed after a person agrees to a lengthy privacy policy. It instead weighs the intrusion of any product or program against the benefit of the data use, the secondary effects on individuals, and any mitigating steps that can be taken to minimize harms. As policymakers review data collection, use and sharing, they should:

  • Focus on prevention and treatment, not punishment: Past epidemics have demonstrated that fear is not as effective as clear, meaningful information from a reliable source and the ability to voluntarily comply with medical and governmental directives. Successfully fighting the coronavirus will mean ensuring that a government response does not evolve into law enforcement and broad surveillance functions.
  • Ensure accuracy and effectiveness: There does not appear to be a universally accepted definition of “accurate” or “effective” when it comes to predicting, preventing, or responding to the coronavirus. Nevertheless, if a tool or practice is unlikely to provide meaningful and measurable contributions to the coronavirus response, companies and governments should consider alternatives. This is not only because the privacy risks may not be justified but because people may rely on these measures in lieu of those that actually work.
  • Provide actionable information: In a time of crisis, more information isn’t always better. New data collection or novel data uses should inform individual, corporate, or government behavior in a constructive way. Symptom trackers, for example, may tell a person whether he or she should seek medical care. Contact tracing on the other hand, when it relies on insufficiently granular data, may result in unnecessary or unproductive quarantine, testing, and fear.
  • Require corporate and government practices that respect privacy: People are reasonably fearful for their own health and the health of their loved ones. The burden for constructing privacy-protective products and responses must not be on concerned citizens but on companies and governments. That includes:
    • A preference for aggregated data. Individually identifiable information should not be used when less intrusive measures will suffice. If aggregated data will not do, industry best practices in anonymization and de-identification must be applied.
    • Minimizing collection, use, and sharing. When identifiable information is necessary, data processing should be limited when possible.
    • Purpose limitations. Data collected or used for the coronavirus response should not be used for secondary purposes. For corporate actors, this means advertising for commercial purposes or unrelated product development. For government actors, that means any function not directly related to their public health functions.
    • Deletion. Data should be deleted when it is no longer necessary for responding to the coronavirus epidemic or conducting public health research, especially if it is personally identifiable.
  • Build services that serve all populations: Newly released data is confirming that minorities are contracting the coronavirus at a higher rate and are more likely to die from it. There are also legitimate questions about how actionable mobility tracking data is for rural, poor, and working class communities that must travel for work or to secure food and medical care. As technology seeks to find solutions to the coronavirus, it is crucial that it does so in a way that serves all demographics and does not exacerbate existing inequalities.
  • Empower individuals when possible: Epidemic response may not always allow for individualized opt-ins or opt-outs of data collection and use. To the extent possible, participation in data based programs should be voluntary and individuals should maintain traditional rights to control one’s data.
  • Be transparent to build trust: People will hesitate to participate in programs that involve their personal information but that are not transparent in how that information will be used. Companies that provide data, or inferences from data, and the governmental entities that use such information, must be transparent to users and residents about how data will be used.
  • Be especially rigorous when considering government action: A coordinated government response is necessary for successfully fighting the coronavirus epidemic, but the United States has an important tradition of recognizing that the powers of the state pose unique threats to privacy and liberty.