Further Reading, Other Developments, and Coming Events (16 February 2021)

Further Reading

  • “India cuts internet around New Delhi as protesting farmers clash with police” By Esha Mitra and Julia Hollingsworth — CNN; “Twitter Temporarily Blocked Accounts Critical Of The Indian Government” By Pranav Dixit — BuzzFeed News. Prime Minister Narendra Modi’s government again shut down the internet as a way of managing unrest or discontent with government policies. The parties out of power have registered their opposition, but the majority seems intent on using this tactic time and again. One advocacy organization named India as the nation with the most shutdowns in 2019, by far. The government in New Delhi also pressed Twitter to take down tweets and accounts critical of the proposed changes in agricultural law. Twitter complied per its own policies and Indian law and then later restored the accounts and tweets.
  • “Lacking a Lifeline: How a federal effort to help low-income Americans pay their phone bills failed amid the pandemic” By Tony Romm — The Washington Post. An excellent overview of this Federal Communications Commission (FCC) program and its shortcomings. The Trump era FCC blunted and undid Obama era FCC reforms designed to make the eligibility of potential users easier to discern, among other changes. At the end of the day, many enrollees are left with a fixed number of minutes for phone calls and 4GB of data a month, or roughly what my daughter often uses in a day.
  • “She exposed tech’s impact on people of color. Now, she’s on Biden’s team.” By Emily Birnbaum — Protocol. The new Deputy Director for Science and Society in the Office of Science and Technology Policy (OSTP) is a former academic and researcher who often focused her studies on the intersection of race and technology, usually how the latter failed minorities. This is part of the Biden Administration’s fulfillment of its campaign pledges to establish a more inclusive White House. It remains to be seen how the administration will balance the views of those critical of big technology with those hailing from big technology, as a number of former high ranking employees have already joined or are rumored to be joining the Biden team.
  • “Vaccine scheduling sites are terrible. Can a new plan help Chicago fix them?” By Issie Lapowsky — Protocol. As should not be shocking, many jurisdictions across the country have problematic interfaces for signing up for vaccination against COVID-19. It sounds reminiscent of the problems that plagued the Obamacare exchanges rollout in that potentially well thought out policy was marred by a barely thought out public face.
  • “Google launches News Showcase in Australia in sign of compromise over media code” By Josh Taylor — The Guardian; “Cracks in media code opposition as Microsoft outflanks Google and Facebook” By Lisa Visentin — The Sydney Morning Herald. Both Google and Canberra seem to be softening their positions as the company signed up a number of major media outlets for its News Showcase, a feature to be made available in Australia that will compensate the news organizations at an undisclosed level. However, a few major players, Nine, News Corp., and the Australian Broadcasting Corporation, have not joined, with Nine saying it will not. Google’s de-escalation of rhetoric and tactics will likely allow Prime Minister Scott Morrison’s government to relax the proposed legislation that would mandate Google and Facebook compensate Australian news media (i.e., the News Media and Digital Platforms Mandatory Bargaining Code). Microsoft’s theoretical entrance into the Australian market through Bing, if Google and Facebook actually leave or limit their presence, seems to argue against the latter two companies’ position that the new code is unworkable. It is not clear if Microsoft is acting earnestly or floating a possible scenario in order to cast the other companies in a bad light. In any event, critics of the platforms say the fight is not about the technical feasibility of compensating news media but rather about establishing a precedent of paying for content the platforms now get essentially for free. Other content creators and entities could start demanding payment, too. An interesting tidbit from the second article: Canada may soon join Australia and the European Union in enacting legislation requiring Big Tech to pay news media companies for using their content (i.e., “a more equitable digital regulatory framework across platforms and news media” according to a minister.)

Other Developments

  • The Maryland legislature overrode Governor Larry Hogan’s (R) veto, and the first tax on digital advertising has been enacted in the United States. The “Taxation – Tobacco Tax, Sales and Use Tax, and Digital Advertising Gross Revenues Tax” (HB0732) imposes a tax on digital advertising in the state and may be outside a federal bar on certain taxes on internet services. Now that the veto has been overridden, there will inevitably be challenges, and quite likely a push in Congress to enact a federal law preempting such digital taxes. Additionally, the primary sponsor of the legislation has introduced another bill barring companies from passing along the costs of the tax to Maryland businesses and consumers.
    • In a bill analysis, the legislature asserted about HB0732:
      • The bill imposes a tax on the annual gross revenues of a person derived from digital advertising services in the State. The bill provides for the filing of the tax returns and making tax payments. The part of the annual gross revenues of a person derived from digital advertising services in the State are to be determined using an apportionment fraction based on the annual gross revenues of a person derived from digital advertising services in the State and the annual gross revenues of a person derived from digital advertising services in the United States. The Comptroller must adopt regulations that determine the state from which revenues from digital advertising services are derived.
      • The digital advertising gross revenues tax is imposed at the following rates:
        • 2.5% of the assessable base for a person with global annual gross revenues of $100.0 million through $1.0 billion;
        • 5% of the assessable base for a person with global annual gross revenues of $1.0 billion through $5.0 billion;
        • 7.5% of the assessable base for a person with global annual gross revenues of $5.0 billion through $15.0 billion; and
        • 10% of the assessable base for a person with global annual gross revenues exceeding $15.0 billion.
    • In his analysis, Maryland’s Attorney General explained:
      • House Bill 732 would enact a new “digital advertising gross revenues tax.” The tax would be “imposed on annual gross revenues of a person derived from digital advertising services in the State.” Digital advertising services are defined in the bill to include “advertisement services on a digital interface, including advertisements in the form of banner advertising, search engine advertising, interstitial advertising, and other comparable advertising services.” The annual gross revenues derived from digital advertising services is set out in a formula in the bill.
      • Attorney General Brian Frosh conceded there will be legal challenges to the new Maryland tax: there are “three grounds on which there is some risk that a reviewing court would find that the tax is unconstitutional: (1) preemption under the federal Internet Tax Freedom Act; (2) the Commerce Clause; and, (3) the First Amendment.”
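The rate schedule and apportionment fraction above lend themselves to a quick arithmetic sketch. The following is one illustrative reading of HB0732, not a legal computation: the function names are invented, the handling of tier boundaries is an assumption (the quoted bill analysis lists $1.0 billion, $5.0 billion, and $15.0 billion as the endpoints of adjacent tiers without saying which tier claims the boundary), and the Comptroller's forthcoming regulations would govern how revenues are actually sourced to Maryland.

```python
def apportioned_base(annual_gross_rev: float, md_digital_ad_rev: float,
                     us_digital_ad_rev: float) -> float:
    """One reading of the bill's apportionment fraction: the share of a
    person's annual gross revenues attributed to Maryland is the ratio of
    Maryland-derived digital ad revenues to U.S.-derived digital ad revenues."""
    if us_digital_ad_rev == 0:
        return 0.0
    return annual_gross_rev * (md_digital_ad_rev / us_digital_ad_rev)


def md_digital_ad_tax(global_revenue: float, assessable_base: float) -> float:
    """Tax on the Maryland assessable base, with the rate tiered by GLOBAL
    annual gross revenues per the bill analysis quoted above."""
    if global_revenue < 100_000_000:        # below $100M global: no tax applies
        return 0.0
    if global_revenue <= 1_000_000_000:     # $100M through $1B
        rate = 0.025
    elif global_revenue <= 5_000_000_000:   # $1B through $5B
        rate = 0.05
    elif global_revenue <= 15_000_000_000:  # $5B through $15B
        rate = 0.075
    else:                                   # exceeding $15B
        rate = 0.10
    return assessable_base * rate
```

On this reading, a platform with $20 billion in global revenues and a $50 million Maryland assessable base would owe $5 million, while a firm under the $100 million global threshold would owe nothing regardless of its Maryland revenues.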
  • Democratic Members introduced the “Secure Data and Privacy for Contact Tracing Act” (H.R.778/S.199) in both the House and Senate, legislation that “would provide grants to states that choose to use technology as part of contact tracing efforts for COVID-19 if they agree to adopt strong privacy protections for users” per their press release. Representatives Jackie Speier (D-CA) and Debbie Dingell (D-MI) introduced the House bill and Senators Brian Schatz (D-HI) and Tammy Baldwin (D-WI) the Senate version. Speier, Dingell, Schatz, and Baldwin contended “[t]he Secure Data and Privacy for Contact Tracing Act provides grant funding for states to responsibly develop digital contact tracing technologies consistent with the following key privacy protections:
    • Digital contact tracing tech must be strictly voluntary and provide clear information on intended use.
    • Data requested must be minimized and proportionate to what is required to achieve contact tracing objectives.
    • Data must be deleted after contact tracing processing is complete, or at the end of the declaration of emergency.
    • States must develop a plan for how their digital contact tracing technology complements more traditional contact tracing efforts and describe efforts to ensure their technology will be interoperable with other states.
    • States must establish procedures for independent security assessments of digital contact tracing infrastructure and remediate vulnerabilities. 
    • Information gathered must be used strictly for public health functions authorized by the state and cannot be used for punitive measures, such as criminal prosecution or immigration enforcement.
    • Digital contact tracing tech must have robust detection capabilities consistent with CDC guidance on exposure. 
    • Digital contact tracing technology must ensure anonymity, allowing only authorized public health authorities or other authorized parties to have access to personally identifiable information.
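Several of the protections above (data minimization, deletion at the end of the emergency, pseudonymity) describe constraints a state's tracing system would have to enforce in code. The sketch below is a purely hypothetical illustration of what that might look like; the field names, schema, and retention logic are my assumptions, not anything specified in the bill:

```python
from dataclasses import dataclass
from datetime import datetime

# Only the fields the tracing objective requires (data minimization).
# Note there is no name, phone number, or location, consistent with the
# bill's anonymity principle; "rotating_id" stands in for a pseudonymous
# identifier of the kind proximity-based tracing apps use.
ALLOWED_FIELDS = {"rotating_id", "exposure_timestamp", "signal_strength"}

@dataclass
class ExposureRecord:
    rotating_id: str              # pseudonymous identifier, not PII
    exposure_timestamp: datetime  # when the contact occurred
    signal_strength: int          # proximity proxy for exposure criteria

def minimize(raw: dict) -> dict:
    """Drop any collected field beyond what contact tracing requires."""
    return {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}

def purge_expired(records: list, emergency_end: datetime,
                  now: datetime) -> list:
    """Delete all records once the emergency declaration has ended."""
    if now >= emergency_end:
        return []
    return records
```

A real system would of course pair rules like these with the bill's other requirements (voluntary opt-in, independent security assessments, a bar on non-public-health uses), which are institutional rather than technical controls.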
  • The chair and ranking member of the Senate Intelligence Committee wrote the heads of the agencies leading the response to the Russian hack of the United States (U.S.) government and private sector entities through SolarWinds, taking them to task for their thus far cloistered, siloed approach. In an unusually blunt letter, Chair Mark Warner (D-VA) and Ranking Member Marco Rubio (R-FL) asked the agencies to name a leader for the response initiated when former President Donald Trump invoked the process established in Presidential Policy Directive-41 because “[t]he federal government’s response so far has lacked the leadership and coordination warranted by a significant cyber event, and we have little confidence that we are on the shortest path to recovery.” Warner and Rubio directed this request to Director of National Intelligence Avril Haines, National Security Agency and Cyber Command head General Paul Nakasone, Federal Bureau of Investigation (FBI) Director Christopher Wray, and Cybersecurity and Infrastructure Security Agency (CISA) Acting Director Brandon Wales. Warner and Rubio further asserted:
    • The briefings we have received convey a disjointed and disorganized response to confronting the breach. Taking a federated rather than a unified approach means that critical tasks that are outside the central roles of your respective agencies are likely to fall through the cracks. The threat our country still faces from this incident needs clear leadership to develop and guide a unified strategy for recovery, in particular a leader who has the authority to coordinate the response, set priorities, and direct resources to where they are needed. The handling of this incident is too critical for us to continue operating the way we have been.
  • Huawei filed suit against the Federal Communications Commission’s (FCC) decision to “designate Huawei, as well as its parents, affiliates, and subsidiaries, as companies posing a national security threat to the integrity of our nation’s communications networks and the communications supply chain” through “In the Matter of Protecting Against National Security Threats to the Communications Supply Chain Through FCC Programs – Huawei Designation.” In the petition filed with the United States Court of Appeals for the Fifth Circuit, Huawei said it is “seek[ing] review of the Final Designation Order on the grounds that it exceeds the FCC’s statutory authority; violates federal law and the Constitution; is arbitrary, capricious, and an abuse of discretion, and not supported by substantial evidence, within the meaning of the Administrative Procedure Act, 5 U.S.C. § 701 et seq.; was adopted through a process that failed to provide Petitioners with the procedural protections afforded by the Constitution and the Administrative Procedure Act; and is otherwise contrary to law.”
  • According to unnamed sources, the Biden Administration has decided to postpone indefinitely the Trump Administration’s effort to force ByteDance to sell TikTok as required by a Trump Administration executive order. Last September, it appeared that Oracle and Walmart had reached a deal in principle with ByteDance that quickly raised more questions than it settled (see here for more details and analysis.) There are reports of ByteDance working with the Committee on Foreign Investment in the United States (CFIUS), the inter-agency review group that ordered ByteDance to spin off TikTok. TikTok and CFIUS are reportedly talking about what an acceptable divestment would look like, but of course, under recently implemented measures, the People’s Republic of China (PRC) would also have to sign off. Nonetheless, White House Press Secretary Jen Psaki remarked at a press conference “[t]here is a rigorous CFIUS process that is ongoing.”
  • The Biden Administration has asked two federal appeals courts to pause lawsuits brought to stop the United States (U.S.) government from enforcing the Trump Administration executive order banning TikTok from the United States (see here for more analysis.)
    • In the status report filed with the United States Court of Appeals for the District of Columbia, TikTok and the Department of Justice (DOJ) explained:
      • Defendants’ counsel informed Plaintiffs’ counsel regarding the following developments: As the Biden Administration has taken office, the Department of Commerce has begun a review of certain recently issued agency actions, including the Secretary’s prohibitions regarding the TikTok mobile application at issue in this case. In relation to those prohibitions, the Department plans to conduct an evaluation of the underlying record justifying those prohibitions. The government will then be better positioned to determine whether the national security threat described in the President’s August 6, 2020 Executive Order, and the regulatory purpose of protecting the security of Americans and their data, continue to warrant the identified prohibitions. The Department of Commerce remains committed to a robust defense of national security as well as ensuring the viability of our economy and preserving individual rights and data privacy.
    • In its unopposed motion, the DOJ asked the United States Court of Appeals for the Third Circuit “hold this case in abeyance, with status reports due at 60-day intervals.” The DOJ used exactly the same language as in the filing in the D.C. Circuit.
  • The Trump Administration’s President’s Council of Advisors on Science and Technology (PCAST) issued a report at the tail end of the administration, “Industries of the Future Institutes: A New Model for American Science and Technology Leadership,” that “follows up on a recommendation from PCAST’s report, released June 30, 2020, involving the formation of a new type of multi-sector research and development organization: Industries of the Future Institutes (IotFIs)…[and] provides a framework to inform the design of IotFIs and thus should be used as preliminary guidance by funders and as a starting point for discussion among those considering participation.”
    • PCAST “propose[d] a revolutionary new paradigm for multi-sector collaboration—Industries of the Future Institutes (IotFIs)—to address some of the greatest societal challenges of our time and to ensure American science and technology (S&T) leadership for decades to come.” PCAST stated “[b]y driving research and development (R&D) at the intersection of two or more IotF areas, these Institutes not only will advance knowledge in the individual IotF topics, but they also will spur new research questions and domains of inquiry at their confluence.” PCAST added:
      • By engaging multiple disciplines and each sector of the U.S. R&D ecosystem—all within the same agile organizational framework—IotFIs will span the spectrum from discovery research to the development of new products and services at scale. Flexible intellectual property terms will incentivize participation of all sectors, and reduced administrative and regulatory burdens will optimize researcher time for creativity and productivity while maintaining appropriate safety, transparency, integrity, and accountability. IotFIs also will serve as a proving ground for new, creative approaches to organizational structure and function; broadening participation; workforce development; science, technology, engineering, and math education; and methods for engaging all sectors of the American research ecosystem. Ultimately, the fruits of IotFIs will sustain American global leadership in S&T, improve quality of life, and help ensure national and economic security for the future.
  • Per the European Commission’s (EC) request, the European Data Protection Board (EDPB) issued clarifications on the consistent application of the General Data Protection Regulation (GDPR) with a focus on health research. The EDPB explained:
    • The following response of the EDPB to the questions of the European Commission should be considered as a first attempt to take away some of the misunderstandings and misinterpretations as to the application of the GDPR to the domain of scientific health research. Generally speaking, most of these questions call for more time for in-depth analysis and/or a search for examples and best practices and can as yet not be completely answered.
    • In its guidelines (currently in preparation and due in 2021) on the processing of personal data for scientific research purposes, the EDPB will elaborate further on these issues while also aiming to provide a more comprehensive interpretation of the various provisions in the GDPR that are relevant for the processing of personal data for scientific research purposes.
    • This will also entail a clarification of the extent and scope of the ‘special derogatory regime’ for the processing of personal data for scientific research purposes in the GDPR. It is important that this regime is not perceived as to imply a general exemption to all requirements in the GDPR in case of processing data for scientific research purposes. It should be taken into account that this regime only aims to provide for exceptions to specific requirements in specific situations and that the use of such exceptions is made dependent on ‘additional safeguards’ (Article 89(1) GDPR) to be in place.
  • The Government Accountability Office (GAO) has assessed how well the Federal Communications Commission (FCC) has rolled out and implemented its Lifeline National Verifier (referred to as Verifier by the GAO) to aid low income people in accessing telecommunications benefits. The Verifier was established in 2016 to address claims that allowing telecommunications carriers to make eligibility determinations for participation in the program to help people obtain lower cost communications had led to waste, fraud, and abuse. House Energy and Commerce Committee Chair Frank Pallone Jr. (D-NJ), Communications and Technology Subcommittee Chair Mike Doyle (D-PA), and six Democratic colleagues on the committee asked the GAO “to review FCC’s implementation of the Verifier.” The GAO explained “[t]his report examines (1) the status of the Verifier; (2) the extent to which FCC coordinated with state and federal stakeholders, educated consumers, and facilitated involvement of tribal stakeholders; and (3) the extent to which the Verifier is meeting its goals.” The GAO concluded:
    • The Lifeline program is an important tool that helps low-income Americans afford vital voice and broadband services. In creating the Lifeline National Verifier, FCC sought to facilitate eligible Americans’ access to Lifeline support while protecting the program from waste, fraud, and abuse. Although USAC, under FCC’s oversight, has made progress to implement the Verifier, many eligible consumers are unaware of it and may be unable to use it. Additionally, tribal governments and organizations do not have the information they need from FCC to effectively assist residents of tribal lands in using the Verifier to enroll in Lifeline, even though Lifeline support is critical to increasing access to affordable telecommunications services on tribal lands. Without FCC developing a plan to educate consumers about the Verifier and empowering tribal governments to assist residents of tribal lands with the Verifier, eligible consumers, especially those on tribal lands, will continue to lack awareness of the Verifier and the ability to use it.
    • Further, without measures and information to assess progress toward some of its goals, FCC lacks information it needs to refine and improve the Verifier. While it is too soon to determine if the Verifier is protecting against fraud, FCC has measures in place to monitor fraud moving forward. However, FCC lacks measures to track the Verifier’s progress toward the intent of its second goal of delivering value to Lifeline consumers. FCC also lacks information to help it assess and improve its efforts to meet the third goal of improving the consumer experience. Additionally, consumers may experience challenges with the Verifier’s online application, such as difficulty identifying the Verifier as a government service, and may be uncomfortable providing sensitive information to a website that does not use a “.gov” domain. Unless FCC identifies and addresses challenges with the Verifier’s manual review process and its online application, it will be limited in its ability to improve the consumer experience. As a result, some eligible consumers may abandon their applications and go without the support they need to access crucial telecommunications services. Given that a majority of Lifeline subscribers live in states without state database connections and therefore must undergo manual review more frequently, ensuring that challenges with the manual review process are resolved is particularly important.
    • The GAO recommended:
      • The Chairman of FCC should develop and implement a plan to educate eligible consumers about the Lifeline program and Verifier requirements that aligns with key practices for consumer education planning. (Recommendation 1)
      • The Chairman of FCC should provide tribal organizations with targeted information and tools, such as access to the Verifier, that equip them to assist residents of tribal lands with their Verifier applications. (Recommendation 2)
      • The Chairman of FCC should identify and use performance measures to track the Verifier’s progress in delivering value to consumers. (Recommendation 3)
      • The Chairman of FCC should ensure that it has quality information on consumers’ experience with the Verifier’s manual review process, and should use that information to improve the consumer experience to meet the Verifier’s goals. (Recommendation 4)
      • The Chairman of FCC should ensure that the Verifier’s online application and support website align with characteristics for leading federal website design, including that they are accurate, clear, understandable, easy to use, and contain a mechanism for users to provide feedback. (Recommendation 5)
      • The Chairman of FCC should convert the Verifier’s online application, checklifeline.org, to a “.gov” domain. (Recommendation 6)

Coming Events

  • The House Appropriations Committee’s Financial Services and General Government Subcommittee will hold an oversight hearing on the Election Assistance Commission (EAC) on 16 February with EAC Chair Benjamin Hovland.
  • On 17 February, the House Energy and Commerce Committee’s Communications and Technology Subcommittee will hold a hearing titled “Connecting America: Broadband Solutions to Pandemic Problems” with these witnesses:
    • Free Press Action Vice President of Policy and General Counsel Matthew F. Wood
    • Topeka Public Schools Superintendent Dr. Tiffany Anderson
    • Communications Workers of America President Christopher M. Shelton
    • Wireless Infrastructure Association President and CEO Jonathan Adelstein
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 18 February, the House Financial Services Committee will hold a hearing titled “Game Stopped? Who Wins and Loses When Short Sellers, Social Media, and Retail Investors Collide” with Reddit Co-Founder and Chief Executive Officer Steve Huffman testifying along with other witnesses.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Zachary Peterson on Unsplash

Further Reading, Other Developments, and Coming Events (1 February 2021)

Further Reading

  • “Facebook and Apple Are Beefing Over the Future of the Internet” By Gilad Edelman — WIRED. The battle over coming changes to Apple’s iOS continues to escalate. Apple CEO Tim Cook said the changes, which will move iPhone users to an opt-in system for tracking people across the internet, would help protect both privacy and democracy. This latter claim is a shot at Facebook and its role in the rise of extremist groups in the United States and elsewhere. Facebook CEO Mark Zuckerberg claimed this change was of a piece with Apple’s long term interests in driving the app market from a free to paid model that would benefit the Cupertino giant through its 30% fees on all in-app purchases. Zuckerberg also reiterated Facebook’s arguments that such a change by Apple will harm small businesses that will have a harder time advertising. Facebook is also making noise about suing Apple in the same way Epic Games has for its allegedly anti-competitive app store practices. Experts expect Apple’s change will take as much as 10% off of Facebook’s bottom line until it and other advertising players adjust their tactics. These will not be the last shots fired between the two tech giants.
  • “Democratic Congress Prepares to Take On Big Tech” By Cecilia Kang — The New York Times. Senator Amy Klobuchar (D-MN) is vowing to introduce antitrust legislation this spring that could rein in big technology companies in the future. Klobuchar’s proposal will receive serious consideration because she now chairs the Senate Judiciary Committee’s subcommittee with jurisdiction over antitrust and competition policy. Klobuchar also plans to release a book this spring with her views on antitrust. Any proposal to reform antitrust law faces a steep uphill battle to 60 votes in the Senate.
  • “Pressure builds on Biden, Democrats to revive net neutrality rules” By Tony Romm — The Washington Post. Until the Federal Communications Commission (FCC) has a third Democratic vote, pressure from the left will focus on whom the Biden Administration chooses to nominate. Once a Democratic majority is in place, the pressure will be substantial to re-promulgate the Obama Administration net neutrality order.
  • “Why Google’s Internet-Beaming Balloons Ran Out of Air” By Aaron Mak — Slate. Among the reasons Alphabet pulled the plug on Loon, its attempt to provide internet service in areas without it, include: the costs, lack of revenue since the areas without service tend to be poorer, the price barriers to people getting 4G devices, and resistance or indifference from governments and regulators.
  • “A big hurdle for older Americans trying to get vaccinated: Using the internet” By Rebecca Heilweil — recode. Not surprisingly, the digital divide and basic digital literacy are barriers to the elderly, especially the poorer and minority segments of that demographic, in securing online appointments for COVID-19 vaccination.

Other Developments

  • A group of House and Senate Democrats have reintroduced the “Public Health Emergency Privacy Act,” a bill that follows legislation of the same title introduced last spring to address gaps in United States (U.S.) privacy law turned up by the promise of widespread use of COVID-19 tracking apps. And while adoption and usage of these apps have largely underperformed expectations, the gaps and issues have not. And so Representatives Suzan DelBene (D-WA), Anna Eshoo (D-CA), and Jan Schakowsky (D-IL) and Senators Richard Blumenthal (D-CT) and Mark Warner (D-VA) have introduced the “Public Health Emergency Privacy Act” (S.81) but did not make available bill text, so it is not possible at this point to determine how closely it matches last year’s bill, the “Public Health Emergency Privacy Act” (S.3749/H.R.6866) (see here for my analysis of last year’s bill.) However, in a sign that the bills may be identical or very close in their wording, the summary provided in May 2020 and the one provided last week are exactly the same:
    • Ensure that data collected for public health is strictly limited for use in public health;
    • Explicitly prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to gate access to employment, finance, insurance, housing, or education opportunities;
    • Prevent the potential misuse of health data by government agencies with no role in public health;
    • Require meaningful data security and data integrity protections – including data minimization and accuracy – and mandate deletion by tech firms after the public health emergency;
    • Protect voting rights by prohibiting conditioning the right to vote based on a medical condition or use of contact tracing apps;
    • Require regular reports on the impact of digital collection tools on civil rights;
    • Give the public control over their participation in these efforts by mandating meaningful transparency and requiring opt-in consent; and
    • Provide for robust private and public enforcement, with rulemaking from an expert agency while recognizing the continuing role of states in legislation and enforcement.
  • The United States Department of Justice (DOJ) filed charges against a United States (U.S.) national for “conspiring with others in advance of the 2016 U.S. Presidential Election to use various social media platforms to disseminate misinformation designed to deprive individuals of their constitutional right to vote.” While the DOJ goes out of its way in its complaint not to name which candidate in the presidential election the accused was working to elect, contemporaneous reporting on the individual made clear he supported Donald Trump and sought to depress the vote for former Secretary of State Hillary Clinton. In its press release, the DOJ asserted:
    • The complaint alleges that in 2016, Mackey established an audience on Twitter with approximately 58,000 followers. A February 2016 analysis by the MIT Media Lab ranked Mackey as the 107th most important influencer of the then-upcoming Election, ranking his account above outlets and individuals such as NBC News (#114), Stephen Colbert (#119) and Newt Gingrich (#141).
    • As alleged in the complaint, between September 2016 and November 2016, in the lead up to the Nov. 8, 2016, U.S. Presidential Election, Mackey conspired with others to use social media platforms, including Twitter, to disseminate fraudulent messages designed to encourage supporters of one of the presidential candidates (the “Candidate”) to “vote” via text message or social media, a legally invalid method of voting.
    • For example, on Nov. 1, 2016, Mackey allegedly tweeted an image that featured an African American woman standing in front of an “African Americans for [the Candidate]” sign.  The image included the following text: “Avoid the Line. Vote from Home. Text ‘[Candidate’s first name]’ to 59925[.] Vote for [the Candidate] and be a part of history.”  The fine print at the bottom of the image stated: “Must be 18 or older to vote. One vote per person. Must be a legal citizen of the United States. Voting by text not available in Guam, Puerto Rico, Alaska or Hawaii. Paid for by [Candidate] for President 2016.”
    • The tweet included the typed hashtags “#Go [Candidate]” and another slogan frequently used by the Candidate. On or about and before Election Day 2016, at least 4,900 unique telephone numbers texted “[Candidate’s first name]” or some derivative to the 59925 text number, which was used in multiple deceptive campaign images tweeted by the defendant and his co-conspirators.
  • Six European and two North American nations worked in coordinated fashion to take down a botnet. Europol announced that “[l]aw enforcement and judicial authorities worldwide have this week disrupted one of [the] most significant botnets of the past decade: EMOTET…[and] [i]nvestigators have now taken control of its infrastructure in an international coordinated action” per their press release. Europol added:
    • EMOTET has been one of the most professional and long lasting cybercrime services out there. First discovered as a banking Trojan in 2014, the malware evolved into the go-to solution for cybercriminals over the years. The EMOTET infrastructure essentially acted as a primary door opener for computer systems on a global scale. Once this unauthorised access was established, these were sold to other top-level criminal groups to deploy further illicit activities such [as] data theft and extortion through ransomware.
  • On 26 January, Senator Ed Markey (D-MA) “asked Facebook why it continues to recommend political groups to users despite committing to stopping the practice” at an October 2020 hearing. Markey pressed CEO Mark Zuckerberg to “explain the apparent discrepancy between its promises to stop recommending political groups and what it has delivered.” Markey added:
    • Unfortunately, it appears that Facebook has failed to keep commitments on this topic that you made to me, other members of Congress, and your users. You and other senior Facebook officials have committed, and reiterated your commitment, to stop your platform’s practice of recommending political groups. First, on October 28, 2020, you appeared before the U.S. Senate Committee on Commerce, Science, and Transportation and stated that Facebook had stopped recommending groups with political content and social issues. When I raised concerns about Facebook’s system of recommending groups, you stated, “Senator, we have taken the step of stopping recommendations in groups for all political content or social issue groups as a precaution for this.”
    • It does not appear, however, that Facebook has kept these commitments. According to The Markup, Facebook “continued to recommend political groups to its users throughout December[of 2020]” — well after you responded to my question at the Commerce Committee hearing.
    • On 27 January, Zuckerberg announced on an earnings call that the platform would stop recommending political and civic groups to users.
  • The United States (U.S.) Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) “announced the expansion of the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) Initiative from a pilot to a full program” according to a press release. NHTSA announced the “new web pilot of the Department initiative to improve the safety and testing transparency of automated driving systems” in June 2020 that “aligns with the Department’s leadership on automated driving system vehicles, including AV 4.0: Ensuring American Leadership in Automated Vehicle Technologies.”
  • The United Kingdom’s (UK) House of Lords amended the government’s Trade Bill, which would allow for an agreement with the United States (U.S.), in a way that would block the U.S.’s negotiating position of essentially exporting 47 USC 230 (Section 230) to the UK. The Lords agreed to this language:
    • (1) The United Kingdom may only become a signatory to an international trade agreement if the conditions in subsection (2) are satisfied.
    • (2) International trade agreements must be consistent with—
      • (a) other international treaties to which the United Kingdom is a party, and the domestic law of England and Wales (including any changes to the law after the trade agreement is signed), regarding the protection of children and other vulnerable user groups using the internet;
      • (b) the provisions on data protection for children, as set out in the age appropriate design code under section 123 of the Data Protection Act 2018 (age-appropriate design code) and other provisions of that Act which impact children; and
      • (c) online protections provided for children in the United Kingdom that the Secretary of State considers necessary.
    • However, the House of Commons disagreed with this change, arguing “it is not an effective means of ensuring the protection of children online.”
    • In a House of Lords briefing document, it is explained:
      • The bill introduces measures to support the UK in implementing an independent trade policy, having left the European Union. It would:
        • enable the UK to implement obligations arising from acceding to the international Agreement on Government Procurement in its own right;
        • enable the UK to implement in domestic law obligations arising under international trade agreements the UK signs with countries that had an existing international trade agreement with the EU;
        • formally establish a new Trade Remedies Authority;
        • enable HM Revenue and Customs (HMRC) to collect information on the number of exporters in the UK; and
        • enable data sharing between HMRC and other private and public sector bodies to fulfil public functions relating to trade.
  • According to their press release, “a coalition of education advocates petitioned the Federal Communications Commission (FCC) to close the remote learning gap for the estimated 15 to 16 million students who lack home internet access” through the E-rate program. This petition follows an Executive Order (EO) signed by President Joe Biden on the first day of his Administration, calling on the FCC to expand broadband connectivity for children across the United States to help them with schooling and studies.
    • In their petition, the groups argued:
      • In one of his first Executive Orders, President Biden stated: “The Federal Communications Commission is encouraged, consistent with applicable law, to increase connectivity options for students lacking reliable home broadband, so that they can continue to learn if their schools are operating remotely.”
      • Consistent with [Biden’s EO], the Commission can dramatically improve circumstances for these underserved students, and for schools all over the country that are struggling to educate all of their students, by taking the temporary, limited measures requested in this Petition.
      • As shown below, these actions are well within the Commission’s authority, and in fact all of the actions requested in this Petition could be taken by the Wireline Competition Bureau on delegated authority.
      • As noted above, the Petitioners ask that the Commission issue a declaratory ruling to clarify that, for the duration of the pandemic, the off-campus use of E-rate-supported services to enable remote learning constitutes an “educational purpose” and is therefore allowed under program rules.
      • The declaratory ruling will allow schools and libraries to extend E-rate-funded broadband networks and services outside of a school or library location during Funding Years 2020 and 2021, without losing E-rate funds they are otherwise eligible to receive. Importantly, this requested action would not require the collection of any additional Universal Service funds.
      • Given the severity of our current national emergency, the Petitioners ask that the Bureau release hundreds of millions of dollars—currently not designated for use but held in the E-rate program—to support remote learning. There is little justification for keeping E-rate funds in reserve when the country is facing such an enormous educational crisis.
      • The Commission should use the program’s existing discount methodologies, which take into account socioeconomic status and rural location, in calculating the amount of funding that applicants may receive.  Applicants will have the incentive to make cost-effective purchases because they will have to pay a share of the total cost of services.  
      • To facilitate the distribution of additional funding, Petitioners ask that the Commission direct the Universal Service Administrative Company (USAC) to establish a “remote learning application window” as soon as practicable for the specific purpose of allowing applicants to submit initial or revised requests for E-rate funding for off-campus services used for educational purposes during Funding Years 2020 and 2021.  
      • The Petitioners ask the Commission to waive all rules necessary to effectuate these actions for remote learning funding applications, including the competitive bidding, eligible services, and application rules, pursuant to section 1.3 of the Commission’s rules.
      • The Petitioners respectfully request expedited review of this petition, so that schools and libraries may take action to deploy solutions as soon as possible.
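The “existing discount methodologies” the Petitioners invoke combine an applicant’s poverty level (share of students eligible for the National School Lunch Program) with its urban or rural status to set a discount percentage. A minimal sketch of how such a matrix-based discount could be computed follows; the band boundaries and percentages are illustrative placeholders, not the FCC’s actual matrix:

```python
# Illustrative E-rate-style discount lookup. The bands and percentages
# below are placeholders for illustration; the real matrix is set by FCC rule.
DISCOUNT_BANDS = [
    # (max NSLP-eligible share, urban discount, rural discount)
    (0.01, 0.20, 0.25),
    (0.20, 0.40, 0.50),
    (0.35, 0.50, 0.60),
    (0.50, 0.60, 0.70),
    (0.75, 0.80, 0.80),
    (1.00, 0.90, 0.90),
]

def discount_rate(nslp_share: float, rural: bool) -> float:
    """Return the share of service costs covered for an applicant."""
    for ceiling, urban_pct, rural_pct in DISCOUNT_BANDS:
        if nslp_share <= ceiling:
            return rural_pct if rural else urban_pct
    raise ValueError("NSLP share must be between 0 and 1")

# Under these illustrative bands, a rural school where 60% of students
# qualify for NSLP receives an 80% discount and pays 20% of costs itself.
rate = discount_rate(0.60, rural=True)
applicant_share = round(1 - rate, 2)
```

Because applicants always retain a share of the cost, the methodology preserves the incentive to make cost-effective purchases that the petition highlights.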
  • “A group of more than 70 organizations have sent a letter to Congress and the Biden/Harris administration warning against responding to the violence in the U.S. Capitol by renewing injudicious attacks on Section 230 of the Communications Decency Act” per their press release. They further urged “lawmakers to consider impacts on marginalized communities before making changes to Section 230, and call on lawmakers to take meaningful action to hold Big Tech companies accountable, including enforcement of existing anti-trust and civil rights law, and passing Federal data privacy legislation.” The signatories characterized themselves as “racial justice, LGBTQ+, Muslim, prison justice, sex worker, free expression, immigration, HIV advocacy, child protection, gender justice, digital rights, consumer, and global human rights organizations.” In terms of the substance of their argument, they asserted:
    • Gutting Section 230 would make it more difficult for web platforms to combat the type of dangerous rhetoric that led to the attack on the Capitol. And certain carve outs to the law could threaten human rights and silence movements for social and racial justice that are needed now more than ever. 
    • Section 230 is a foundational law for free expression and human rights when it comes to digital speech. It makes it possible for websites and online forums to host the opinions, photos, videos, memes, and creativity of ordinary people, rather than just content that is backed by corporations. 
    • The danger posed by uncareful changes to Section 230 is not theoretical. The last major change to the law, the passage of SESTA/FOSTA in 2018, put lives in danger. The impacts of this law were immediate and destructive, limiting the accounts of sex workers and making it more difficult to find and help those who were being trafficked online. This was widely seen as a disaster that made vulnerable communities less safe and led to widespread removal of speech online.
    • We share lawmakers’ concerns with the growing power of Big Tech companies and their unwillingness to address the harm their products are causing. Google and Facebook are just some of the many companies that compromise the privacy and safety of the public by harvesting our data for their own corporate gain, and allowing advertisers, racists and conspiracy theorists to use that data to target us. These surveillance-based business models are pervasive and an attack on human rights. But claims that Section 230 immunizes tech companies that break the law, or disincentivizes them from removing illegal or policy-violating content, are false. In fact, Amazon has invoked Section 230 to defend itself against a lawsuit over its decision to drop Parler from Amazon Web Services due to unchecked threats of violence on Parler’s platform. Additionally, because Section 230 protects platforms’ decisions to remove objectionable content, the law played a role in enabling the removal of Donald Trump from platforms, who could act without fear of excessive litigation.

Coming Events

  • On 3 February, the Senate Commerce, Science, and Transportation Committee will consider the nomination of Rhode Island Governor Gina Raimondo to be the Secretary of Commerce.
  • On 17 February, the Federal Communications Commission (FCC) will hold an open meeting, its first under acting Chair Jessica Rosenworcel, with this tentative agenda:
    • Presentation on the Emergency Broadband Benefit Program. The Commission will hear a presentation on the creation of an Emergency Broadband Benefit Program. Congress charged the FCC with developing a new $3.2 billion program to help Americans who are struggling to pay for internet service during the pandemic.
    • Presentation on COVID-19 Telehealth Program. The Commission will hear a presentation about the next steps for the agency’s COVID-19 Telehealth program. Congress recently provided an additional $249.95 million to support the FCC’s efforts to expand connected care throughout the country and help more patients receive health care safely.
    • Presentation on Improving Broadband Mapping Data. The Commission will hear a presentation on the work the agency is doing to improve its broadband maps. Congress directly appropriated $65 million to help the agency develop better data for improved maps.
    • Addressing 911 Fee Diversion. The Commission will consider a Notice of Proposed Rulemaking that would implement section 902 of the Don’t Break Up the T-Band Act of 2020, which requires the Commission to take action to help address the diversion of 911 fees by states and other jurisdictions for purposes unrelated to 911. (PS Docket Nos. 20-291, 09-14)
    • Implementing the Secure and Trusted Communications Networks Act. The Commission will consider a Third Further Notice of Proposed Rulemaking that proposes to modify FCC rules consistent with changes that were made to the Secure and Trusted Communications Networks Act in the Consolidated Appropriations Act, 2021. (WC Docket No. 18-89)
  • On 27 July 2021, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Nikolai Chernichenko on Unsplash

Trump Issues Cloud EO; Its Fate Is Unclear

On the last full day of his administration, then-President Donald Trump issued an executive order (EO) “to address the use of United States Infrastructure as a Service (IaaS) products by foreign malicious cyber actors.” This EO follows an Obama Administration EO that set up a formal structure for sanctioning foreign entities for “significant malicious cyber activities,” and it seeks to further use the emergency powers granted to the President in the 1970s to address the threats allegedly posed by malicious actors using IaaS products (such as cloud computing) to inflict significant harm on the United States (U.S.).

Given the review the Biden Administration is undertaking, particularly of so-called “midnight” regulations and directives, this EO will undoubtedly be scrutinized by the new White House and possibly modified or even withdrawn. Its implementation is therefore not certain, especially because the operative parts of the EO require notice-and-comment rulemaking, the pace of which will depend on how the Biden Administration wants to proceed. And one can be sure that Amazon, Google, Microsoft, and other cloud and IaaS providers have lobbied and will continue to lobby the White House, the agencies charged with implementing the EO, and Congress, hoping to exert pressure from Capitol Hill.

This EO is almost certainly aimed at nation-state hackers, groups affiliated with nation-states, non-criminal hackers, and criminal hackers. The U.S. government is implicitly asserting that these malicious actors exploit the relative anonymity and infrastructure currently available through IaaS. The EO requires IaaS providers to keep more complete records on all foreign users, a requirement industry stakeholders will undoubtedly portray as onerous, and to condition or even deny service to certain countries or people if the U.S. determines the risk of malicious activity is too high.

The EO, if implemented as planned, would require cloud and other IaaS providers to collect more information on foreign users and possibly to limit or shut down service for potentially malicious cyber-enabled actors.

Executive Order 13984 “Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities” lays out the policy rationale for its issuance:

IaaS products provide persons the ability to run software and store data on servers offered for rent or lease without responsibility for the maintenance and operating costs of those servers. Foreign malicious cyber actors aim to harm the United States economy through the theft of intellectual property and sensitive data and to threaten national security by targeting United States critical infrastructure for malicious cyber-enabled activities. Foreign actors use United States IaaS products for a variety of tasks in carrying out malicious cyber-enabled activities, which makes it extremely difficult for United States officials to track and obtain information through legal process before these foreign actors transition to replacement infrastructure and destroy evidence of their prior activities; foreign resellers of United States IaaS products make it easier for foreign actors to access these products and evade detection.

The EO defines an “Infrastructure as a Service Product” as:

any product or service offered to a consumer, including complimentary or “trial” offerings, that provides processing, storage, networks, or other fundamental computing resources, and with which the consumer is able to deploy and run software that is not predefined, including operating systems and applications. The consumer typically does not manage or control most of the underlying hardware but has control over the operating systems, storage, and any deployed applications. The term is inclusive of “managed” products or services, in which the provider is responsible for some aspects of system configuration or maintenance, and “unmanaged” products or services, in which the provider is only responsible for ensuring that the product is available to the consumer. The term is also inclusive of “virtualized” products and services, in which the computing resources of a physical machine are split between virtualized computers accessible over the internet (e.g., “virtual private servers”), and “dedicated” products or services in which the total computing resources of a physical machine are provided to a single person (e.g., “bare-metal” servers);
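The definition turns on two distinctions: managed versus unmanaged products, and virtualized versus dedicated ones. Those distinctions can be sketched as a simple taxonomy; the class and field names below are my own shorthand for illustration, not terms drawn from the EO:

```python
from dataclasses import dataclass
from enum import Enum

class Management(Enum):
    # Provider handles some system configuration or maintenance
    MANAGED = "managed"
    # Provider only ensures the product is available to the consumer
    UNMANAGED = "unmanaged"

class Tenancy(Enum):
    # One physical machine split among virtual servers (e.g., a VPS)
    VIRTUALIZED = "virtualized"
    # The machine's total computing resources go to a single person
    DEDICATED = "dedicated"

@dataclass
class IaaSProduct:
    """One offering covered by the EO's definition (illustrative model)."""
    name: str
    management: Management
    tenancy: Tenancy

# Every combination falls within the EO's definition, so both of these
# hypothetical offerings would be covered.
vps = IaaSProduct("virtual private server", Management.UNMANAGED, Tenancy.VIRTUALIZED)
bare_metal = IaaSProduct("bare-metal server", Management.MANAGED, Tenancy.DEDICATED)
```

The breadth of the definition is the point: whether a provider manages the system or merely keeps it online, and whether the hardware is shared or dedicated, the product is covered.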

And so, the EO requires the Department of Commerce (Commerce) to “propose for notice and comment regulations that require United States IaaS providers to verify the identity of a foreign person that obtains an Account.” The EO suggests the criteria and metrics Commerce should consider using but largely leaves this determination to the agency. In that list of considerations lies the first leverage point industry will likely use: the discretionary authority Commerce will have to exempt certain U.S. IaaS providers, which may rest on a finding that a provider “complies with security best practices to otherwise deter abuse of IaaS products.” Commerce will also need to propose regulations through a notice-and-comment rulemaking requiring U.S. IaaS providers to take “special measures” if Commerce determines that reasonable grounds exist to conclude that a foreign jurisdiction has a significant number of resellers or people offering U.S. IaaS products for “malicious cyber-enabled activities,” or that a significant number of people there are, in fact, using IaaS products for these activities. Commerce may do the same if a foreign person, group of people, or entity is offering or using IaaS for malicious cyber-enabled activities. Another leverage point for U.S. IaaS providers and other stakeholders appears in this section because Commerce must consider:

(i) whether the imposition of any special measure would create a significant competitive disadvantage, including any undue cost or burden associated with compliance, for United States IaaS providers;

(ii) the extent to which the imposition of any special measure or the timing of the special measure would have a significant adverse effect on legitimate business activities involving the particular foreign jurisdiction or foreign person; and

(iii) the effect of any special measure on United States national security, law enforcement investigations, or foreign policy.

Commerce’s “special measures” consist of prohibiting or conditioning the use of U.S. IaaS products. What these measures may look like should become clearer in the draft regulations Commerce is required to propose.
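The decision structure the EO sets up, reasonable grounds first, then a weighing of the three enumerated considerations before any prohibition or condition is imposed, can be sketched as follows. This is a simplification, and the function and field names are my own shorthand, not the EO’s:

```python
from dataclasses import dataclass

@dataclass
class Findings:
    # Reasonable grounds under the EO (simplified):
    # a foreign jurisdiction hosts significant resellers or users of
    # U.S. IaaS for malicious cyber-enabled activities...
    jurisdiction_hosts_significant_abuse: bool
    # ...or a specific foreign person, group, or entity offers or uses
    # IaaS for such activities.
    person_engaged_in_abuse: bool

@dataclass
class Considerations:
    # The three factors Commerce must weigh before acting:
    competitive_disadvantage: bool   # (i) undue cost/burden on U.S. IaaS providers
    harms_legitimate_business: bool  # (ii) adverse effect on legitimate activity
    harms_us_interests: bool         # (iii) national security, law enforcement, foreign policy

def may_impose_special_measures(f: Findings, c: Considerations) -> bool:
    """Sketch: grounds must exist, and the required weighing must not
    counsel against acting (in practice Commerce balances these factors
    rather than treating any one as an absolute bar)."""
    grounds = f.jurisdiction_hosts_significant_abuse or f.person_engaged_in_abuse
    weighing_counsels_restraint = (
        c.competitive_disadvantage
        or c.harms_legitimate_business
        or c.harms_us_interests
    )
    return grounds and not weighing_counsels_restraint
```

The mandatory considerations are where providers will press their case: each is an argument that a proposed prohibition or condition costs more than it is worth.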

The Departments of Justice (DOJ) and Homeland Security (DHS) must study how to increase information sharing among IaaS providers, other stakeholders, and U.S. agencies with the aim of decreasing malicious cyber-enabled activities. Thereafter, DOJ and DHS must submit a report to the President identifying gaps in authority, including protection from legal liability; statutes and regulations that could foster greater sharing of information; and the current landscape of threats posed by the use of IaaS for malicious cyber-enabled activities.

Commerce also needs to work with some U.S. agencies to identify “funding requirements to support the efforts described in this order and incorporate such requirements into its annual budget submissions to the Office of Management and Budget” (OMB). In other words, agencies will need to fashion their budget requests to OMB to prioritize resources for the execution of this EO, which is not to say this will become their top policy priority. But, depending on the buy-in from OMB, this White House office could exert pressure on agencies to follow through in setting aside funds and executing this EO.


Photo by Magda Ehlers from Pexels

Further Reading, Other Developments, and Coming Events (12 January 2021)

Further Reading

  • Biden’s NSC to focus on global health, climate, cyber and human rights, as well as China and Russia” By Karen DeYoung — The Washington Post. Like almost every incoming White House, the Biden team has announced a restructuring of the National Security Council (NSC) to better effectuate the President-elect’s policy priorities. To no one’s surprise, the volume on cybersecurity policy will be turned up. Another notable change is the plan to take “cross-cutting” approaches to issues that meld the foreign with the domestic and national security with civil matters, meaning there could be a new look at offensive cyber operations, for example. It is possible President Biden decides to put the genie back in the bottle, so to speak, by re-imposing an interagency decision-making process as opposed to the Trump Administration’s approach of delegating discretion to the head of the National Security Agency/Cyber Command. The NSC will also focus on emerging technology, a likely response to the technology arms race the United States finds itself in with the People’s Republic of China.
  • Exclusive: Pandemic relief aid went to media that promoted COVID misinformation” By Caitlin Dickson — yahoo! news. The consulting firm Alethea Group and the nonprofit Global Disinformation Index are claiming the COVID stimulus Paycheck Protection Program (PPP) provided loans and assistance to five firms that “were publishing false or misleading information about the pandemic, thus profiting off the infodemic” according to an Alethea Group vice president. This report follows an NBC News article claiming that 14 white supremacist and racist organizations have also received PPP loans. The Alethea Group and Global Disinformation Index named five entities who took PPP funds and kept spreading pandemic misinformation: Epoch Media Group, Newsmax Media, The Federalist, Liftable Media, and Prager University.
  • Facebook shuts Uganda accounts ahead of vote” — France24. The social media company shuttered a number of Facebook and Instagram accounts related to government officials in Uganda ahead of an election on account of “Coordinated Inauthentic Behaviour” (CIB). This follows the platform shutting down accounts related to the French Army and Russia that sought to influence events in Africa. These and other actions may indicate the platform is starting to pay proper attention to the non-western world; at least one former employee has argued the platform was negligent at best and reckless at worst in not properly resourcing efforts to police CIB throughout the Third World.
  • China tried to punish European states for Huawei bans by adding eleventh-hour rule to EU investment deal” By Finbarr Bermingham — South China Morning Post. At nearly the end of talks on a People’s Republic of China (PRC)-European Union (EU) trade deal, PRC negotiators tried slipping in language that would have barred entry to the PRC’s cloud computing market to any country or company from a country that restricts Huawei’s services and products. This is alternately being seen as either standard Chinese negotiating tactics or an attempt to avenge the thwarting of the crown jewel in its telecommunications ambitions.
  • Chinese regulators to push tech giants to share consumer credit data – sources” By Julie Zhu — Reuters. Ostensibly in a move to better manage the risks of too much unsafe lending, tech giants in the People’s Republic of China (PRC) will soon need to share data on consumer loans. It seems inevitable that such data will be used by Beijing to further crack down on undesirable people and elements within the PRC.
  • The mafia turns social media influencer to reinforce its brand” By Miles Johnson — The Financial Times. Even Italy’s feared ’Ndrangheta is creating and curating a social media presence.

Other Developments

  • President Donald Trump signed an executive order (EO) that bans eight applications from the People’s Republic of China on much the same grounds as the EOs prohibiting TikTok and WeChat. If this EO is not rescinded by the Biden Administration, federal courts may block its implementation as has happened with the TikTok and WeChat EOs to date. Notably, courts have found that the Trump Administration exceeded its authority under the International Emergency Economic Powers Act (IEEPA), which may also be an issue in the proposed prohibition on Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office. Trump found:
    • that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the pace and pervasiveness of the spread in the United States of certain connected mobile and desktop applications and other software developed or controlled by persons in the People’s Republic of China, to include Hong Kong and Macau (China), continue to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by these Chinese connected software applications.
    • Trump directed that within 45 days of issuance of the EO, there shall be a prohibition on “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States, with persons that develop or control the following Chinese connected software applications, or with their subsidiaries, as those transactions and persons are identified by the Secretary of Commerce (Secretary) under subsection (e) of this section: Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office.”
  • The Government Accountability Office (GAO) issued its first statutorily required annual assessment of how well the United States Department of Defense (DOD) is managing its major information technology (IT) procurements. The DOD spent more than $36 billion of the $90 billion the federal government was provided for IT in FY 2020. The GAO was tasked with assessing how well the DOD used iterative development, managed costs and schedules, and implemented cybersecurity measures. The GAO found progress in the first two realms but a continued lag in deploying long-recommended best practices to secure the IT the DOD buys or builds. The GAO focused on 15 major IT acquisitions that qualify as either administrative (i.e. “business”) or communications and information security (i.e. “non-business”) systems. While the GAO made no explicit recommendations, it found:
    • Ten of the 15 selected major IT programs exceeded their planned schedules, with delays ranging from 1 month for the Marine Corps’ CAC2S Inc 1 to 5 years for the Air Force’s Defense Enterprise Accounting and Management System-Increment 1.
    • …eight of the 10 selected major IT programs that had tested their then-current technical performance targets reported having met all of their targets…. As of December 2019, four programs had not yet conducted testing activities—Army’s ACWS, Air Force’s AFIPPS Inc 1, Air Force’s MROi, and Navy ePS. Testing data for one program, Air Force’s ISPAN Inc 4, were classified.
    • …officials from the 15 selected major IT programs we reviewed reported using software development approaches that may help to limit risks to cost and schedule outcomes. For example, major business IT programs reported using COTS software. In addition, most programs reported using an iterative software development approach and using a minimum deployable product. With respect to cybersecurity practices, all the programs reported developing cybersecurity strategies, but programs reported mixed experiences with respect to conducting cybersecurity testing. Most programs reported using operational cybersecurity testing, but less than half reported conducting developmental cybersecurity testing. In addition, programs that reported conducting cybersecurity vulnerability assessments experienced fewer increases in planned program costs and fewer schedule delays. Programs also reported a variety of challenges associated with their software development and cybersecurity staff.
    • 14 of the 15 programs reported using an iterative software development approach which, according to leading practices, may help reduce cost growth and deliver better results to the customer. However, programs also reported using an older approach to software development, known as waterfall, which could introduce risk for program cost growth because of its linear and sequential phases of development that may be implemented over a longer period of time. Specifically, two programs reported using a waterfall approach in conjunction with an iterative approach, while one was solely using a waterfall approach.
    • With respect to cybersecurity, programs reported mixed implementation of specific practices, contributing to program risks that might impact cost and schedule outcomes. For example, all 15 programs reported developing cybersecurity strategies, which are intended to help ensure that programs are planning for and documenting cybersecurity risk management efforts.
    • In contrast, only eight of the 15 programs reported conducting cybersecurity vulnerability assessments—systematic examinations of an information system or product intended to, among other things, determine the adequacy of security measures and identify security deficiencies. These eight programs experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report using cybersecurity vulnerability assessments.
  • The United States (U.S.) Department of Energy gave notice of a “Prohibition Order prohibiting the acquisition, importation, transfer, or installation of specified bulk-power system (BPS) electric equipment that directly serves Critical Defense Facilities (CDFs), pursuant to Executive Order 13920.” (See here for analysis of the executive order.) The Department explained:
    • Executive Order No. 13920 of May 1, 2020, Securing the United States Bulk-Power System (85 FR 26595 (May 4, 2020)) (E.O. 13920) declares that threats by foreign adversaries to the security of the BPS constitute a national emergency. A current list of such adversaries is provided in a Request for Information (RFI), issued by the Department of Energy (Department or DOE) on July 8, 2020 seeking public input to aid in its implementation of E.O. 13920. The Department has reason to believe, as detailed below, that the government of the People’s Republic of China (PRC or China), one of the listed adversaries, is equipped and actively planning to undermine the BPS. The Department has thus determined that certain BPS electric equipment or programmable components subject to China’s ownership, control, or influence, constitute undue risk to the security of the BPS and to U.S. national security. The purpose of this Order is to prohibit the acquisition, importation, transfer, or subsequent installation of such BPS electric equipment or programmable components in certain sections of the BPS.
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) added the People’s Republic of China’s (PRC) Semiconductor Manufacturing International Corporation (SMIC) to its Entity List in a move intended to starve the company of key U.S. technology needed to manufacture high end semiconductors. Consequently, any U.S. entity wishing to do business with SMIC will need a license, which the Trump Administration seemed unlikely to grant. The Department of Commerce explained in its press release:
    • The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring U.S. exporters to apply for a license to sell to the company.  Items uniquely required to produce semiconductors at advanced technology nodes—10 nanometers or below—will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military-civil fusion efforts.
    • BIS also added more than sixty other entities to the Entity List for actions deemed contrary to the national security or foreign policy interest of the United States.  These include entities in China that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets.
    • As explained in the Federal Register notice:
      • SMIC is added to the Entity List as a result of China’s military-civil fusion (MCF) doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex. The Entity List designation limits SMIC’s ability to acquire certain U.S. technology by requiring exporters, reexporters, and in-country transferors of such technology to apply for a license to sell to the company. Items uniquely required to produce semiconductors at advanced technology nodes 10 nanometers or below will be subject to a presumption of denial to prevent such key enabling technology from supporting China’s military modernization efforts. This rule adds SMIC and the following ten entities related to SMIC: Semiconductor Manufacturing International (Beijing) Corporation; Semiconductor Manufacturing International (Tianjin) Corporation; Semiconductor Manufacturing International (Shenzhen) Corporation; SMIC Semiconductor Manufacturing (Shanghai) Co., Ltd.; SMIC Holdings Limited; Semiconductor Manufacturing South China Corporation; SMIC Northern Integrated Circuit Manufacturing (Beijing) Co., Ltd.; SMIC Hong Kong International Company Limited; SJ Semiconductor; and Ningbo Semiconductor International Corporation (NSI).
  • The United States’ (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) amended its Export Administration Regulations “by adding a new ‘Military End User’ (MEU) List, as well as the first tranche of 103 entities, which includes 58 Chinese and 45 Russian companies” per its press release. The Department asserted:
    • The U.S. Government has determined that these companies are ‘military end users’ for purposes of the ‘military end user’ control in the EAR that applies to specified items for exports, reexports, or transfers (in-country) to China, Russia, and Venezuela when such items are destined for a prohibited ‘military end user.’
  • The Australian Competition and Consumer Commission (ACCC) rolled out another piece of the Consumer Data Right (CDR) scheme under the Competition and Consumer Act 2010, specifically accreditation guidelines “to provide information and guidance to assist applicants with lodging a valid application to become an accredited person” to whom Australians may direct data holders to share their data. The ACCC explained:
    • The CDR aims to give consumers more access to and control over their personal data.
    • Being able to easily and efficiently share data will improve consumers’ ability to compare and switch between products and services and encourage competition between service providers, leading to more innovative products and services for consumers and the potential for lower prices.
    • Banking is the first sector to be brought into the CDR.
    • Accredited persons may receive a CDR consumer’s data from a data holder at the request and consent of the consumer. Any person, in Australia or overseas, who wishes to receive CDR data to provide products or services to consumers under the CDR regime, must be accredited
  • Australia’s government has released its “Data Availability and Transparency Bill 2020” that “establishes a new data sharing scheme for federal government data, underpinned by strong safeguards to mitigate risks and simplified processes to make it easier to manage data sharing requests” according to the summary provided in Parliament by the government’s point person. In the accompanying “Explanatory Memorandum,” the following summary was provided:
    • The Bill establishes a new data sharing scheme which will serve as a pathway and regulatory framework for sharing public sector data. ‘Sharing’ involves providing controlled access to data, as distinct from open release to the public.
    • To oversee the scheme and support best practice, the Bill creates a new independent regulator, the National Data Commissioner (the Commissioner). The Commissioner’s role is modelled on other regulators such as the Australian Information Commissioner, with whom the Commissioner will cooperate.
    • The data sharing scheme comprises the Bill and disallowable legislative instruments (regulations, Minister-made rules, and any data codes issued by the Commissioner). The Commissioner may also issue non-legislative guidelines that participating entities must have regard to, and may release other guidance as necessary.
    • Participants in the scheme are known as data scheme entities:
      • Data custodians are Commonwealth bodies that control public sector data, and have the right to deal with that data.
      • Accredited users are entities accredited by the Commissioner to access public sector data. To become accredited, entities must satisfy the security, privacy, infrastructure and governance requirements set out in the accreditation framework.
      • Accredited data service providers (ADSPs) are entities accredited by the Commissioner to perform data services such as data integration. Government agencies and users will be able to draw upon ADSPs’ expertise to help them to share and use data safely.
    • The Bill does not compel sharing. Data custodians are responsible for assessing each sharing request, and deciding whether to share their data if satisfied the risks can be managed.
    • The data sharing scheme contains robust safeguards to ensure sharing occurs in a consistent and transparent manner, in accordance with community expectations. The Bill authorises data custodians to share public sector data with accredited users, directly or through an ADSP, where:
      • Sharing is for a permitted purpose – government service delivery, informing government policy and programs, or research and development;
      • The data sharing principles have been applied to manage the risks of sharing; and
      • The terms of the arrangement are recorded in a data sharing agreement.
    • Where the above requirements are met, the Bill provides limited statutory authority to share public sector data, despite other Commonwealth, State and Territory laws that prevent sharing. This override of non-disclosure laws is ‘limited’ because it occurs only when the Bill’s requirements are met, and only to the extent necessary to facilitate sharing.
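The Bill’s sharing test lends itself to a short sketch. The following is a hypothetical illustration, not code from any government system: the class and function names are invented, and the Bill prescribes no schema. It captures the gate described above — an accredited requester, a permitted purpose, the data sharing principles applied, and a recorded data sharing agreement — while noting that custodians retain discretion even when every condition is met.

```python
# Hypothetical sketch of the Bill's sharing preconditions.
# All names are illustrative; the Bill does not define a data model.
from dataclasses import dataclass

# The three permitted purposes named in the Bill.
PERMITTED_PURPOSES = {
    "government service delivery",
    "informing government policy and programs",
    "research and development",
}

@dataclass
class SharingRequest:
    requester_accredited: bool  # accredited by the National Data Commissioner
    purpose: str                # stated purpose for the requested data
    principles_applied: bool    # data sharing principles used to manage risk
    agreement_recorded: bool    # terms captured in a data sharing agreement

def custodian_may_share(req: SharingRequest) -> bool:
    """Return True only when every statutory precondition is met.

    Even then, sharing remains discretionary: the Bill does not compel
    custodians to share, so this gate is necessary but not sufficient.
    """
    return (
        req.requester_accredited
        and req.purpose in PERMITTED_PURPOSES
        and req.principles_applied
        and req.agreement_recorded
    )

# A request for an unlisted purpose fails the gate outright.
bad = SharingRequest(True, "commercial marketing", True, True)
good = SharingRequest(True, "research and development", True, True)
print(custodian_may_share(bad), custodian_may_share(good))  # False True
```

The fixed set of permitted purposes mirrors how the Bill’s limited override of non-disclosure laws works: authority to share exists only to the extent the enumerated conditions are satisfied.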
  • The United Kingdom’s Competition and Markets Authority (CMA) is asking interested parties to provide input on the proposed acquisition of a British semiconductor company by a United States (U.S.) company before it launches a formal investigation later this year. However, the CMA is limited to competition considerations, and any national security aspects of the proposed deal would need to be investigated by Prime Minister Boris Johnson’s government. The CMA stated:
    • US-based chip designer and producer NVIDIA Corporation (NVIDIA) plans to purchase the Intellectual Property Group business of UK-based Arm Limited (Arm) in a deal worth $40 billion. Arm develops and licenses intellectual property (IP) and software tools for chip designs. The products and services supplied by the companies support a wide range of applications used by businesses and consumers across the UK, including desktop computers and mobile devices, game consoles and vehicle computer systems.
    • CMA added:
      • The CMA will look at the deal’s possible effect on competition in the UK. The CMA is likely to consider whether, following the takeover, Arm has an incentive to withdraw, raise prices or reduce the quality of its IP licensing services to NVIDIA’s rivals.
  • The Israeli firm NSO Group has been accused by an entity associated with a British university of using real-time cell phone data to sell its COVID-19 contact tracing app, Fleming, in ways that may have broken the laws of a handful of nations. Forensic Architecture, a research agency based at Goldsmiths, University of London, argued:
    • In March 2020, with the rise of COVID-19, Israeli cyber-weapons manufacturer NSO Group launched a contact-tracing technology named ‘Fleming’. Two months later, a database belonging to NSO’s Fleming program was found unprotected online. It contained more than five hundred thousand datapoints for more than thirty thousand distinct mobile phones. NSO Group denied there was a security breach. Forensic Architecture received and analysed a sample of the exposed database, which suggested that the data was based on ‘real’ personal data belonging to unsuspecting civilians, putting their private information in risk
    • Forensic Architecture added:
      • Leaving a database with genuine location data unprotected is a serious violation of the applicable data protection laws. That a surveillance company with access to personal data could have overseen this breach is all the more concerning.
      • This could constitute a violation of the General Data Protection Regulation (GDPR) based on where the database was discovered as well as the laws of the nations where NSO Group allegedly collected personal data
    • The NSO Group denied the claims and was quoted by Tech Crunch:
      • “We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
      • “As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

Coming Events

  • On 13 January, the Federal Communications Commission (FCC) will hold its monthly open meeting, and the agency has placed the following items on its tentative agenda: “Bureau, Office, and Task Force leaders will summarize the work their teams have done over the last four years in a series of presentations:”
    • Panel One. The Commission will hear presentations from the Wireless Telecommunications Bureau, International Bureau, Office of Engineering and Technology, and Office of Economics and Analytics.
    • Panel Two. The Commission will hear presentations from the Wireline Competition Bureau and the Rural Broadband Auctions Task Force.
    • Panel Three. The Commission will hear presentations from the Media Bureau and the Incentive Auction Task Force.
    • Panel Four. The Commission will hear presentations from the Consumer and Governmental Affairs Bureau, Enforcement Bureau, and Public Safety and Homeland Security Bureau.
    • Panel Five. The Commission will hear presentations from the Office of Communications Business Opportunities, Office of Managing Director, and Office of General Counsel.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Judith Scharnowski from Pixabay

American and Canadian Agencies Take Differing Approaches On Regulating AI

The outgoing Trump Administration tells agencies to lightly regulate AI; Canada’s privacy regulator calls for strong safeguards and limits on use of AI, including legislative changes.

The Office of Management and Budget (OMB) has issued guidance for federal agencies on how they are to regulate artificial intelligence (AI) developed and deployed outside the government. This guidance seeks to align policy across agencies in how they use their existing power to regulate AI according to the Trump Administration’s policy goals. Notably, this memorandum is binding on all federal agencies (including national defense) and even independent agencies such as the Federal Trade Commission (FTC) and Federal Communications Commission (FCC). OMB worked with other stakeholder agencies on this guidance as directed by Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence,” and issued a draft of the memorandum 11 months ago for comment.

In “Guidance for Regulation of Artificial Intelligence Applications,” OMB “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the Federal government.” OMB is directing agencies to take a light touch to regulating AI under its current statutory authorities, being careful to consider costs and benefits and keeping in mind the larger policy backdrop of taking steps to ensure United States (U.S.) dominance in AI in light of competition from the People’s Republic of China (PRC), the European Union, Japan, the United Kingdom, and others. OMB is requiring reports from agencies on how they will use and not use their authority to meet the articulated goals and requirements of this memorandum. However, given the due date for these reports will be well into the next Administration, it is very likely the Biden OMB at least pauses this initiative and probably alters it to meet new policy. It is possible that policy goals to protect privacy, combat algorithmic bias, and protect data are made more prominent in U.S. AI regulation.

As a threshold matter, it bears note that this memorandum uses a statutory definition of AI that is narrower than the way AI is popularly discussed. OMB explained that “[w]hile this Memorandum uses the definition of AI recently codified in statute, it focuses on “narrow” (also known as “weak”) AI, which goes beyond advanced conventional computing to learn and perform domain-specific or specialized tasks by extracting information from data sets, or other structured or unstructured sources of information.” Consequently, “[m]ore theoretical applications of “strong” or “general” AI—AI that may exhibit sentience or consciousness, can be applied to a wide variety of cross-domain activities and perform at the level of, or better than a human agent, or has the capacity to self-improve its general cognitive abilities similar to or beyond human capabilities—are beyond the scope of this Memorandum.”

The Trump OMB tells agencies to minimize regulation of AI and to take into account how any regulatory action may affect growth and innovation in the field before implementing it. OMB directs agencies to favor “narrowly tailored and evidence-based regulations that address specific and identifiable risks” that foster an environment where U.S. AI can flourish. Consequently, OMB bars “a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.” Of course, what constitutes “evidence-based regulation” and an “impossibly high standard” are in the eye of the beholder, so this memorandum could be read by the next OMB in ways the outgoing OMB does not agree with. Finally, OMB is pushing agencies to factor potential benefits into any risk calculation, presumably allowing for greater risk of bad outcomes if the potential reward seems high. This would seem to suggest a more hands-off approach on regulating AI.

OMB listed the 10 AI principles agencies must apply in regulating AI in the private sector:

  • Public trust in AI
  • Public participation
  • Scientific integrity and information quality
  • Risk assessment and management
  • Benefits and costs
  • Flexibility
  • Fairness and non-discrimination
  • Disclosure and transparency
  • Safety and security
  • Interagency coordination

OMB also tells agencies to look at existing federal or state regulation that may prove inconsistent, duplicative, or burdensome with respect to this federal policy and “may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”

OMB encouraged agencies to use “non-regulatory approaches” in the event existing regulations are sufficient or the benefits of regulation do not justify the costs. OMB counseled “[i]n these cases, the agency may consider either not taking any action or, instead, identifying non-regulatory approaches that may be appropriate to address the risk posed by certain AI applications” and provided examples of “non-regulatory approaches:”

  • Sector-Specific Policy Guidance or Frameworks
  • Pilot Programs and Experiments
  • Voluntary Consensus Standards
  • Voluntary Frameworks

As noted, the EO under which OMB is acting requires “that implementing agencies with regulatory authorities review their authorities relevant to AI applications and submit plans to OMB on achieving consistency with this Memorandum.” OMB directs:

The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities. For these collections, agencies should describe any statutory restrictions on the collection or sharing of information (e.g., confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information). The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency’s regulatory authorities. OMB also requests agencies to list and describe any planned or considered regulatory actions on AI. Appendix B provides a template for agency plans.

In early 2020, the White House’s Office of Science and Technology Policy (OSTP) released a draft “Guidance for Regulation of Artificial Intelligence Applications,” a draft of this OMB memorandum that would be issued to federal agencies as directed by Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence.” However, this memorandum is not aimed at how federal agencies use and deploy artificial intelligence (AI) but rather it “sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory oversight of AI applications developed and deployed outside of the Federal government.” In short, if this draft is issued by OMB as written, federal agencies would need to adhere to the ten principles laid out in the document in regulating AI as part of their existing and future jurisdiction over the private sector. Not surprisingly, the Administration favors a light touch approach that should foster the growth of AI.

EO 13859 sets the AI policy of the government “to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.” The EO directed OMB and OSTP, along with other Administration offices, to craft this draft memorandum for comment. OMB was to “issue a memorandum to the heads of all agencies that shall:

(i) inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy, and American values; and
(ii) consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.

A key regulator in a neighbor of the U.S. also weighed in on the proper regulation of AI from the vantage of privacy. The Office of the Privacy Commissioner of Canada (OPC) “released key recommendations…[that] are the result of a public consultation launched earlier this year.” OPC explained that it “launched a public consultation on our proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA).” OPC’s “working assumption was that legislative changes to PIPEDA are required to help reap the benefits of AI while upholding individuals’ fundamental right to privacy.” It is to be expected that a privacy regulator will see matters differently than a Republican White House, and so it is here.

In an introductory paragraph, the OPC spelled out the problems and dangers created by AI:

uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals, including whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are suspected of suspicious or unlawful behaviour. Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination. AI systems can also be used to influence, micro-target, and “nudge” individuals’ behaviour without their knowledge. Such practices can lead to troubling effects for society as a whole, particularly when used to influence democratic processes.

The OPC is focused on the potential for AI to be used in a more effective fashion than current data processing to predict, uncover, subvert, and influence the behavior of people in ways not readily apparent. There is also concern for another aspect of AI and other data processing that has long troubled privacy and human rights advocates: the potential for discriminatory treatment.

OPC asserted “an appropriate law for AI would:

  • Allow personal information to be used for new purposes towards responsible AI innovation and for societal benefits;
  • Authorize these uses within a rights based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • Create provisions specific to automated decision-making to ensure transparency, accuracy and fairness; and
  • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures through which the regulator would ensure compliance with the law.

However, the OPC does not entirely oppose the use of AI and is proposing exceptions to the general requirement under Canadian federal law that meaningful consent is required before data processing. The OPC is “recommending a series of new exceptions to consent that would allow the benefits of AI to be better achieved, but within a rights based framework.” OPC stated “[t]he intent is to allow for responsible, socially beneficial innovation, while ensuring individual rights are respected…[and] [w]e recommend exceptions to consent for the use of personal information for research and statistical purposes, compatible purposes, and legitimate commercial interests purposes.” However, the OPC is proposing a number of safeguards:

The proposed exceptions to consent must be accompanied by a number of safeguards to ensure their appropriate use. This includes a requirement to complete a privacy impact assessment (PIA), and a balancing test to ensure the protection of fundamental rights. The use of de-identified information would be required in all cases for the research and statistical purposes exception, and to the extent possible for the legitimate commercial interests exception.

Further, the OPC made the case that enshrining strong privacy rights in Canadian law would not obstruct the development of AI but would, in fact, speed its development:

  • A rights-based regime would not stand in the way of responsible innovation. In fact, it would help support responsible innovation and foster trust in the marketplace, giving individuals the confidence to fully participate in the digital age. In our 2018-2019 Annual Report to Parliament, our Office outlined a blueprint for what a rights-based approach to protecting privacy should entail. This rights-based approach runs through all of the recommendations in this paper.
  • While we propose that the law should allow for uses of AI for a number of new purposes as outlined, we have seen examples of unfair, discriminatory, and biased practices being facilitated by AI which are far removed from what is socially beneficial. Given the risks associated with AI, a rights based framework would help to ensure that it is used in a manner that upholds rights. Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.
  • Another important measure related to this human rights-based approach would be for the definition of personal information in PIPEDA to be amended to clarify that it includes inferences drawn about an individual. This is important, particularly in the age of AI, where individuals’ personal information can be used by organizations to create profiles and make predictions intended to influence their behaviour. Capturing inferred information clearly within the law is key for protecting human rights because inferences can often be drawn about an individual without their knowledge, and can be used to make decisions about them.

The OPC also called for a framework under which people could review and contest automated decisions:

we recommend that individuals be provided with two explicit rights in relation to automated decision-making. Specifically, they should have a right to a meaningful explanation of, and a right to contest, automated decision-making under PIPEDA. These rights would be exercised by individuals upon request to an organization. Organizations should be required to inform individuals of these rights through enhanced transparency practices to ensure individual awareness of the specific use of automated decision-making, as well as of their associated rights. This could include requiring notice to be provided separate from other legal terms.

The OPC also counseled that PIPEDA’s enforcement mechanism and incentives be changed:

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. In addition to the measures detailed below, this should be underpinned by a record keeping requirement similar to that in Article 30 of the GDPR. This record keeping requirement would be necessary to facilitate the OPC’s ability to conduct proactive inspections under PIPEDA, and for individuals to exercise their rights under the Act.

The OPC called for the following to ensure “demonstrable accountability”:

  • Integrating privacy and human rights into the design of AI algorithms and models is a powerful way to prevent negative downstream impacts on individuals. It is also consistent with modern legislation, such as the GDPR and Bill 64. PIPEDA should require organizations to design for privacy and human rights by requiring organizations to implement “appropriate technical and organizational measures” that implement PIPEDA requirements prior to and during all phases of collection and processing.
  • In light of the new proposed rights to explanation and contestation, organizations should be required to log and trace the collection and use of personal information in order to adequately fulfill these rights for the complex processing involved in AI. Tracing supports demonstrable accountability as it provides documentation that the regulator could consult through the course of an inspection or investigation, to determine the personal information fed into the AI system, as well as broader compliance.
  • Demonstrable accountability must include a model of assured accountability pursuant to which the regulator has the ability to proactively inspect an organization’s privacy compliance. In today’s world where business models are often opaque and information flows are increasingly complex, individuals are unlikely to file a complaint when they are unaware of a practice that might cause them harm. This challenge will only become more pronounced as information flows gain complexity with the continued development of AI.
  • The significant risks posed to privacy and human rights by AI systems require a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant. To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Tetyana Kovyrina from Pexels

Further Action On TikTok Divestment and Ban But No Changes

TikTok sues to block the CFIUS order that it divest and the Trump Administration files an appeal of an injunction.

Even though the Trump Administration’s efforts to implement its ban of TikTok have gone nowhere as numerous courts have enjoined the enforcement of the orders, TikTok filed suit against the related order that the company divest Musical.ly, primarily on the grounds that the technology that supposedly threatens United States (U.S.) national security is unrelated to the acquisition. Moreover, the day after this suit was filed, a key U.S. agency announced a delay of the divestment order. In a related action, the Trump Administration appealed one of the injunctions blocking it from moving forward on banning the People’s Republic of China (PRC) app. Depending on how long it takes for the federal court to resolve this suit, a Biden Administration Department of Justice (DOJ) may take a different tack than the Trump DOJ.

The day before the divestment order was set to take effect, TikTok asked the United States Court of Appeals for the District of Columbia Circuit to review “the Presidential Order Regarding the Acquisition of Musical.ly by ByteDance Ltd., 85 Fed. Reg. 51,297 (Aug. 14, 2020) (the “Divestment Order”), and the related action of the Committee on Foreign Investment in the United States (CFIUS), including its determination to reject mitigation, truncate its review and investigation, and refer the matter to the President.” TikTok asserted:

The Divestment Order and the CFIUS Action seek to compel the wholesale divestment of TikTok, a multi-billion-dollar business built on technology developed by Petitioner ByteDance Ltd. (“ByteDance”), based on the government’s purported national security review of a three-year-old transaction that involved a different business. This attempted taking exceeds the authority granted to Respondents under Section 721, which authorizes CFIUS to review and the President to, at most, prohibit a specified “covered transaction” to address risks to national security created by that transaction. Here, that covered transaction was ByteDance’s acquisition of the U.S. business of another Chinese-headquartered company, Musical.ly—a transaction that did not include the core technology or other aspects of the TikTok business that have made it successful and yet which the Divestment Order now seeks to compel ByteDance to divest.

TikTok also claimed that CFIUS violated the Due Process Clause of the Fifth Amendment, violated the Administrative Procedure Act, and is proposing an illegal “taking” under the Fifth Amendment.

And yet, the Department of the Treasury, the lead agency in the CFIUS process, issued a statement, explaining that the deadline for divestiture had been pushed back by 15 days:

The President’s August 14 Order requires ByteDance and TikTok Inc. to undertake specific divestments and other measures to address the national security risk arising from ByteDance’s acquisition of Musical.ly. Consistent with the Order, the Committee on Foreign Investment in the United States (CFIUS) has granted ByteDance a 15-day extension of the original November 12, 2020 deadline. This extension will provide the parties and the Committee additional time to resolve this case in a manner that complies with the Order.

The Trump Administration may successfully argue that a delay of the order means the court cannot rule on TikTok’s suit. Consequently, this suit may well get pushed into a Biden Administration.

TikTok issued this statement along with the filing of its suit:

For a year, TikTok has actively engaged with CFIUS in good faith to address its national security concerns, even as we disagree with its assessment. In the nearly two months since the president gave his preliminary approval to our proposal to satisfy those concerns, we have offered detailed solutions to finalize that agreement—but have received no substantive feedback on our extensive data privacy and security framework.

Of course, because of the CFIUS divestment order, ByteDance seems to have reached an agreement with Oracle and Walmart, but what exactly they agreed to remains an open question.

In mid-September, the Trump Administration paused its notice for implementing the Executive Order (EO) against TikTok because of an agreement in principle on a deal that would permit Oracle and Walmart to control a certain percentage of TikTok in the U.S. However, the details of which entity would control what remain murky, with ByteDance arguing that U.S. entities will not control TikTok but its U.S. partners asserting the opposite. In the weekend before the EO was set to take effect, it appeared Oracle and Walmart would be able to take a collective 20% stake in a new entity, TikTok Global, that would operate in the U.S. Walmart had been partnering with Microsoft, but when the tech giant failed in its bid, Walmart began talks with Oracle. ByteDance would have a stake in the company but not majority control, according to some sources. However, ByteDance began pushing back on that narrative as President Donald Trump declared after word of a deal leaked “if we find that [Oracle and Walmart] don’t have total control, then we’re not going to approve the deal.” Moreover, $5 billion would be used for some sort of educational fund. However, it is hard to tell what exactly would occur and whether this is supposed to be the “finder’s fee” of sorts Trump had said the U.S. would deserve from the deal.

On 19 September, the U.S. Department of Commerce issued a statement pushing back the effective date of the order against TikTok from 20 September to 27 September because of “recent positive developments.” The same day, the U.S. Department of the Treasury released a statement, explaining:

The President has reviewed a deal among Oracle, Walmart, and TikTok Global to address the national security threat posed by TikTok’s operations. Oracle will be responsible for key technology and security responsibilities to protect all U.S. user data. Approval of the transaction is subject to a closing with Oracle and Walmart and necessary documentation and conditions to be approved by Committee on Foreign Investment in the United States (CFIUS). 

TikTok also released a statement, asserting:

We’re pleased that today we’ve confirmed a proposal that resolves the Administration’s security concerns and settles questions around TikTok’s future in the US. Our plan is extensive and consistent with previous CFIUS resolutions, including working with Oracle, who will be our trusted cloud and technology provider responsible for fully securing our users’ data. We are committed to protecting our users globally and providing the highest levels of security. Both Oracle and Walmart will take part in a TikTok Global pre-IPO financing round in which they can take up to a 20% cumulative stake in the company. We will also maintain and expand the US as TikTok Global’s headquarters while bringing 25,000 jobs across the country.

Walmart issued its own statement on 19 September:

While there is still work to do on final agreements, we have tentatively agreed to purchase 7.5% of TikTok Global as well as enter into commercial agreements to provide our ecommerce, fulfillment, payments and other omnichannel services to TikTok Global. Our CEO, Doug McMillon, would also serve as one of five board members of the newly created company. In addition, we would work toward an initial public offering of the company in the United States within the next year to bring even more ownership to American citizens. The final transaction will need to be approved by the relevant U.S. government agencies.

The same day, Oracle and Walmart released a joint statement:

  • The President has announced that ByteDance has received tentative approval for an agreement with the U.S. Government to resolve the outstanding issues, which will now include Oracle and Walmart together investing to acquire 20% of the newly formed TikTok Global business.
  • As a part of the deal, TikTok is creating a new company called TikTok Global that will be responsible for providing all TikTok services to users in United States and most of the users in the rest of the world. Today, the administration has conditionally approved a landmark deal where Oracle becomes TikTok’s secure cloud provider.
  • TikTok Global will be majority owned by American investors, including Oracle and Walmart. TikTok Global will be an independent American company, headquartered in the U.S., with four Americans out of the five member Board of Directors.
  • All the TikTok technology will be in possession of TikTok Global, and comply with U.S. laws and privacy regulations. Data privacy for 100 million American TikTok users will be quickly established by moving all American data to Oracle’s Generation 2 Cloud data centers, the most secure cloud data centers in the world.
  • In addition to its equity position, Walmart will bring its omnichannel retail capabilities including its Walmart.com assortment, eCommerce marketplace, fulfillment, payment and measurement-as-a-service advertising service.
  • TikTok Global will create more than 25,000 new jobs in the United States and TikTok Global will pay more than $5 billion in new tax dollars to the U.S. Treasury.
  • TikTok Global, together with Oracle, SIG, General Atlantic, Sequoia, Walmart and Coatue will create an educational initiative to develop and deliver an AI-driven online video curriculum to teach children from inner cities to the suburbs, a variety of courses from basic reading and math to science, history and computer engineering.
  • TikTok Global will have an Initial Public Offering (IPO) in less than 12 months and be listed on a U.S. Exchange. After the IPO, U.S. ownership of TikTok Global will increase and continue to grow over time.

A day later, Oracle went further in a statement to the media claiming, “ByteDance will have no ownership in TikTok Global,” which is a different message than the one ByteDance itself was sending. For example, in a blog post, ByteDance stated “[t]he current plan does not involve the transfer of any algorithms or technology…[but] Oracle has the authority to check the source code of TikTok USA.”

On a related note, the DOJ filed a notice of appeal of the injunction, issued in late October, barring implementation of the TikTok ban. Three TikTok influencers had filed suit and lost their motion for a preliminary injunction. However, after the District Court for the District of Columbia granted TikTok’s request to stop the Department of Commerce from enforcing the first part of the order implementing the ban, the three influencers revised their motion and refiled.

Judge Wendy Beetlestone found that the Trump Administration exceeded its powers under the International Emergency Economic Powers Act (IEEPA) in issuing part of its TikTok order effectuating the ban set to take effect on 12 November:

  • Any provision of internet hosting services, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, enabling the functioning or optimization of the TikTok mobile application[;]
  • Any provision of content delivery network services, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, enabling the functioning or optimization of the TikTok mobile application[;]
  • Any provision of directly contracted or arranged internet transit or peering services, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, enabling the functioning or optimization of the TikTok mobile application[;and]
  • Any utilization, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, of the TikTok mobile application’s constituent code, functions, or services in the functioning of software or services developed and/or accessible within the land and maritime borders of the United States and its territories.

Beetlestone found that the limit on the use of IEEPA powers to regulate information is clearly implicated by Commerce’s order, which proposes to do just that; consequently, this was not a legal use of IEEPA powers. The judge also found the plaintiffs would be irreparably harmed through the loss of their audiences and brand sponsorships. The court summarized the plaintiffs’ claims:

Plaintiffs challenge the Commerce Identification on both statutory and constitutional grounds. First, they contend that the Commerce Identification violates both the First and Fifth Amendments to the U.S. Constitution. They then contend that the Commerce Identification violates the Administrative Procedure Act, 5 U.S.C. § 701 et seq., as it is both arbitrary and capricious, see id. § 706(2)(A), and ultra vires, see id. § 706(2)(C). Plaintiffs’ ultra vires claim consists of three separate arguments: (1) the Commerce Identification contravenes IEEPA’s “informational materials” exception, 50 U.S.C. § 1702(b)(3); (2) the Commerce Identification contravenes IEEPA’s prohibition on the regulation of “personal communication[s] . . . not involv[ing] a transfer of anything of value,” id. § 1702(b)(1), and (3) the Commerce Identification is not responsive to the national emergency declared in the ICTS Executive Order, and therefore requires the declaration of a new national emergency to take effect, see id. § 1701(b).

In the first injunction granted against the TikTok ban, the court found persuasive TikTok’s claims that the administration misused IEEPA, 50 U.S.C. §§ 1701–08, the primary authority President Donald Trump relied on in his executive order banning the app. The court conceded “IEEPA contains a broad grant of authority to declare national emergencies and to prohibit certain transactions with foreign countries or foreign nationals that pose risks to the national security of the United States.” But the court noted that IEEPA also contains two express limitations relevant here: the “authority granted to the President . . . does not include the authority to regulate or prohibit, directly or indirectly” either (a) the importation or exportation of “information or informational materials”; or (b) “personal communication[s], which do[] not involve a transfer of anything of value.” The court concluded:

In sum, the TikTok Order and the Secretary’s prohibitions will have the intended effect of stopping U.S. users from communicating (and thus sharing data) on TikTok. To be sure, the ultimate purpose of those prohibitions is to protect the national security by preventing China from accessing that data and skewing content on TikTok. And the government’s actions may not constitute direct regulations or prohibitions of activities carved out by 50 U.S.C. 1702(b). But Plaintiffs have demonstrated that they are likely to succeed on their claim that the prohibitions constitute indirect regulations of “personal communication[s]” or the exchange of “information or informational materials.”

After considering the risks of irreparable harm to TikTok and the equities and public interest, the court decided:

Weighing these interests together with Plaintiffs’ likelihood of succeeding on their IEEPA claim and the irreparable harm that Plaintiffs (and their U.S. users) will suffer absent an injunction, the Court concludes that a preliminary injunction is appropriate.


Photo by Olivier Bergeron from Pexels

Courts Further Block TikTok and WeChat Bans

Two Trump Administration measures to strike at the PRC remain unimplemented.

The Trump Administration has suffered more setbacks in its efforts to move forward with its bans on applications from the People’s Republic of China (PRC). United States’ (U.S.) courts continue to block enforcement of orders prohibiting TikTok and WeChat on national security grounds. Courts have been skeptical of the rationale and reasons offered by the Trump Administration and have not allowed a single portion of either order to take effect.

In a decision late last month, a magistrate judge in San Francisco rejected the Trump Administration’s request to essentially reverse the injunction on the ban of WeChat. Magistrate Judge Laurel Beeler explained:

The government moved to stay the preliminary injunction, and it submitted additional information (that it could not have reasonably submitted earlier) that the Secretary of Commerce considered in identifying the prohibited transactions. The plaintiffs submitted additional information too. On this record, the court denies the motion to stay. The government’s additional evidence does not alter the court’s previous holding that the plaintiffs are entitled to a preliminary injunction.

Beeler said the Department of Commerce presented additional evidence on the threats to national security posed by WeChat and Tencent, chiefly because of the PRC’s access to the data amassed and used by PRC companies, and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency presented evidence on potential threats the app poses, including as a vehicle for PRC misinformation, a means of introducing malicious code, and the exposure of data of Americans to the PRC.

One of the plaintiffs’ experts identified “four targeted measures to address the government’s concerns about WeChat”:

  • First, WeChat could partner with a U.S. cloud provider to store data, which would allow a relatively secure place for user data and easy audits to detect unauthorized access to data.
  • Second, regular compliance audits would mitigate data-security risks.
  • Third, it is industry best practice to have stringent corporate or external oversight over management and personnel with access to user data.
  • Fourth, WeChat could use end-to-end encryption. These measures do not eliminate all risks of data leaks to the Chinese government, but they meet the industry’s current standards.

This expert further noted

that the government’s concern about WeChat’s surveillance capabilities could be addressed by an independent third party’s review and audit of WeChat’s source codes. Banning WeChat downloads is dangerous because it increases security risks to users: software needs updates to fix bugs, and if bugs are not fixed, WeChat users’ devices and data are subject to attack. Security concerns about government employees are addressed through narrower bans of those employees’ use of the WeChat app. Otherwise, data protection generally requires best practices such as end-to-end encryption, protecting consumer data and metadata (in the manner of Europe’s General Data Protection Regulation or California’s Consumer Privacy Act), and supporting research into making traffic analysis more difficult. Tech companies such as Facebook and Google, which collect data and sell it to data brokers, also pose surveillance concerns. If China wants U.S. users’ private information, it can buy it from those data brokers. Effectively banning WeChat does not protect U.S. user data from criminals or China.

Beeler found:

The government’s new evidence does not meaningfully alter its earlier submissions. The court’s assessment of the First Amendment analysis and the risks to national security — on this record — are unchanged.

[T]he record does not support the conclusion that the government has “narrowly tailored” the prohibited transactions to protect its national-security interests. Instead, the record, on balance, supports the conclusion that the restrictions “burden substantially more speech than is necessary to further the government’s legitimate interests.”

Consequently, Beeler denied the motion to lift the injunction against the WeChat order.

Moreover, the United States Court of Appeals for the Ninth Circuit declined to stay Beeler’s injunction barring implementation of the WeChat ban, as requested by the Trump Administration.

On 19 September, Beeler granted a preliminary injunction against the Trump Administration’s implementation of the WeChat order. As explained in a footnote, “[t]he plaintiffs are U.S. WeChat Users Alliance, a nonprofit formed to challenge the WeChat Executive Order, and individual and business users.” In short, they contended that the WeChat ban

(1) violates the First Amendment to the U.S. Constitution,

(2) violates the Fifth Amendment,

(3) violates the Religious Freedom Restoration Act, 42 U.S.C. § 2000bb-1(a),

(4) was not a lawful exercise of the President’s and the Secretary’s authority under IEEPA — which allows the President to prohibit “transactions” in the interest of national security — because IEEPA, 50 U.S.C. § 1702(b)(1), does not allow them to regulate personal communications, and

(5) violates the Administrative Procedure Act (“APA”) because the Secretary exceeded his authority under IEEPA and should have promulgated the rule through the notice-and-comment rulemaking procedures in 5 U.S.C. § 553(b).

The judge granted the motion for a preliminary injunction “on the ground that the plaintiffs have shown serious questions going to the merits of the First Amendment claim, the balance of hardships tips in the plaintiffs’ favor, and the plaintiffs establish sufficiently the other elements for preliminary-injunctive relief.” The judge seemed most persuaded by this claim and summarized the plaintiffs’ argument:

  • First, they contend, effectively banning WeChat — which serves as a virtual public square for the Chinese-speaking and Chinese-American community in the United States and is (as a practical matter) their only means of communication — forecloses meaningful access to communication in their community and thereby operates as a prior restraint on their right to free speech that does not survive strict scrutiny.
  • Second, even if the prohibited transactions are content-neutral time-place-or-manner restrictions, they do not survive intermediate scrutiny because the complete ban is not narrowly tailored to address the government’s significant interest in national security.

In a decision from last week, another federal court found reason to block the TikTok ban on new grounds. Three TikTok influencers had filed suit and lost their motion for a preliminary injunction. However, after the District Court for the District of Columbia granted TikTok’s request to stop the Department of Commerce from enforcing the first part of the order implementing the ban, the three influencers revised their motion and refiled.

Judge Wendy Beetlestone found that the Trump Administration exceeded its powers under the International Emergency Economic Powers Act (IEEPA) in issuing part of its TikTok order effectuating the ban set to take effect on 12 November:

  • Any provision of internet hosting services, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, enabling the functioning or optimization of the TikTok mobile application[;]
  • Any provision of content delivery network services, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, enabling the functioning or optimization of the TikTok mobile application[;]
  • Any provision of directly contracted or arranged internet transit or peering services, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, enabling the functioning or optimization of the TikTok mobile application[; and]
  • Any utilization, occurring on or after 11:59 p.m. eastern standard time on November 12, 2020, of the TikTok mobile application’s constituent code, functions, or services in the functioning of software or services developed and/or accessible within the land and maritime borders of the United States and its territories.

Beetlestone found that the limit on the use of IEEPA powers to regulate information is clearly implicated by Commerce’s order, which proposes to do just that; consequently, this was not a legal use of IEEPA powers. The judge also found the plaintiffs would be irreparably harmed through the loss of their audiences and brand sponsorships. The court summarized the plaintiffs’ claims:

Plaintiffs challenge the Commerce Identification on both statutory and constitutional grounds. First, they contend that the Commerce Identification violates both the First and Fifth Amendments to the U.S. Constitution. They then contend that the Commerce Identification violates the Administrative Procedure Act, 5 U.S.C. § 701 et seq., as it is both arbitrary and capricious, see id. § 706(2)(A), and ultra vires, see id. § 706(2)(C). Plaintiffs’ ultra vires claim consists of three separate arguments: (1) the Commerce Identification contravenes IEEPA’s “informational materials” exception, 50 U.S.C. § 1702(b)(3); (2) the Commerce Identification contravenes IEEPA’s prohibition on the regulation of “personal communication[s] . . . not involv[ing] a transfer of anything of value,” id. § 1702(b)(1), and (3) the Commerce Identification is not responsive to the national emergency declared in the ICTS Executive Order, and therefore requires the declaration of a new national emergency to take effect, see id. § 1701(b).

In the first injunction granted against the TikTok ban, the court found persuasive TikTok’s claims that the administration misused IEEPA, 50 U.S.C. §§ 1701–08, the primary authority President Donald Trump relied on in his executive order banning the app. The court conceded “IEEPA contains a broad grant of authority to declare national emergencies and to prohibit certain transactions with foreign countries or foreign nationals that pose risks to the national security of the United States.” But the court noted that IEEPA also contains two express limitations relevant here: the “authority granted to the President . . . does not include the authority to regulate or prohibit, directly or indirectly” either (a) the importation or exportation of “information or informational materials”; or (b) “personal communication[s], which do[] not involve a transfer of anything of value.” The court concluded:

In sum, the TikTok Order and the Secretary’s prohibitions will have the intended effect of stopping U.S. users from communicating (and thus sharing data) on TikTok. To be sure, the ultimate purpose of those prohibitions is to protect the national security by preventing China from accessing that data and skewing content on TikTok. And the government’s actions may not constitute direct regulations or prohibitions of activities carved out by 50 U.S.C. 1702(b). But Plaintiffs have demonstrated that they are likely to succeed on their claim that the prohibitions constitute indirect regulations of “personal communication[s]” or the exchange of “information or informational materials.”

After considering the risks of irreparable harm to TikTok and the equities and public interest, the court decided:

Weighing these interests together with Plaintiffs’ likelihood of succeeding on their IEEPA claim and the irreparable harm that Plaintiffs (and their U.S. users) will suffer absent an injunction, the Court concludes that a preliminary injunction is appropriate.

On 18 September, the Trump Administration issued orders barring TikTok and WeChat pursuant to the “Executive Order on Addressing the Threat Posed by TikTok” and the “Executive Order on Addressing the Threat Posed by WeChat,” which bar any transactions with the companies that make, distribute, and operate TikTok and WeChat respectively. The U.S. Department of Commerce (Commerce) issued the orders effectuating the executive orders.

In a press release, Commerce explained:

As of September 20, 2020, the following transactions are prohibited:

  1. Any provision of service to distribute or maintain the WeChat or TikTok mobile applications, constituent code, or application updates through an online mobile application store in the U.S.;
  2. Any provision of services through the WeChat mobile application for the purpose of transferring funds or processing payments within the U.S.

As of September 20, 2020, for WeChat and as of November 12, 2020, for TikTok, the following transactions are prohibited:

  1. Any provision of internet hosting services enabling the functioning or optimization of the mobile application in the U.S.;
  2. Any provision of content delivery network services enabling the functioning or optimization of the mobile application in the U.S.;
  3. Any provision of directly contracted or arranged internet transit or peering services enabling the function or optimization of the mobile application within the U.S.;
  4. Any utilization of the mobile application’s constituent code, functions, or services in the functioning of software or services developed and/or accessible within the U.S.

Commerce added:

Any other prohibitive transaction relating to WeChat or TikTok may be identified at a future date. Should the U.S. Government determine that WeChat’s or TikTok’s illicit behavior is being replicated by another app somehow outside the scope of these executive orders, the President has the authority to consider whether additional orders may be appropriate to address such activities. The President has provided until November 12 for the national security concerns posed by TikTok to be resolved. If they are, the prohibitions in this order may be lifted.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by iXimus from Pixabay

Further Reading, Other Developments, and Coming Events (27 October)

Further Reading

  • “The Police Can Probably Break Into Your Phone” By Jack Nicas — The New York Times. So much for “going dark.” It turns out nations and law enforcement officials have either oversold the barrier that default device encryption on phones creates or did not understand the access police were already getting to many encrypted phones. This piece draws largely on the Upturn report showing that United States (U.S.) law enforcement agencies have multiple means of hacking into encrypted or protected smartphones. The real issue is that encryption makes getting into phones harder and more expensive, not impossible. If an iPhone or Android user stores data in the cloud, access is not a problem; data encrypted on the phone itself requires serious technological means to reach. The article also points to another facet of the Upturn report: police have very little in the way of policy or guidance on how to handle data in ways that respect privacy and possibly even the laws of their jurisdictions.
  • “Pornhub Doesn’t Care” By Samantha Cole and Emanuel Maiberg — Vice. One of the world’s biggest pornography sites has a poor track record of taking down non-consensual pornography. A number of women were duped into filming pornography they were told would never be distributed online, or would be distributed only in certain jurisdictions. The proprietor lied, and many of them now find these clips resurfacing again and again on Pornhub and other sites even when digital fingerprinting is used to flag the videos, as these technological screening methods can be easily defeated. Worse still, Pornhub and its parent company, MindGeek, did not start responding to the women’s requests to have their videos taken down until they began litigating against the man who had masterminded the filming of the non-consensual videos.
  • “‘Machines set loose to slaughter’: the dangerous rise of military AI” By Frank Pasquale — The Guardian. This long read lays out some of the possibilities that may come to pass if artificial intelligence is used to create autonomous weapons or robots. Most of the outcomes sound like science fiction, but then who could have foreseen the fleets of drones the United States now operates over the Middle East?
  • “How The Epoch Times Created a Giant Influence Machine” By Kevin Roose — The New York Times. An interesting tale of how a fringe publication may be on its way to becoming one of the biggest purveyors of right-wing material online.
  • “Schools Clamored for Seesaw’s App. That Was Good News, and Bad News.” By Stephanie Clifford — The New York Times. The pandemic has led to the rise of another educational app.
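As an aside on the Pornhub piece above: the “digital fingerprinting” it mentions generally means perceptual hashing, which tolerates minor edits but not aggressive ones. A minimal, purely illustrative sketch (an average hash over synthetic 8x8 grayscale frames, not any real platform’s actual algorithm) shows both why such matching works and how it can be defeated:

```python
# Illustrative average-hash "fingerprint" on synthetic 8x8 grayscale frames.
# This is a toy sketch, not the screening system any platform actually uses.

def average_hash(pixels):
    """One bit per pixel: set if that pixel is brighter than the frame's mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance suggests the same content."""
    return bin(h1 ^ h2).count("1")

original = list(range(64))                          # stand-in 8x8 frame
brightened = [min(p + 10, 255) for p in original]   # mild re-encode or filter
cropped = original[8:] + [0] * 8                    # more aggressive edit

h0 = average_hash(original)
print(hamming_distance(h0, average_hash(brightened)))  # 0: still matches
print(hamming_distance(h0, average_hash(cropped)))     # 16: match likely fails
```

Production fingerprinting systems are far more robust than this toy, but the same principle holds: sufficiently aggressive cropping, mirroring, or re-editing can push a clip past the match threshold, which is how re-uploads evade takedowns.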

Other Developments

  • The Business, Energy and Industrial Strategy (BEIS) Committee of the United Kingdom’s (UK) Parliament wrote a number of companies, including technology firms, “to seek answers in relation to the Committee’s inquiry exploring the extent to which businesses in the UK are exploiting the forced labour of Uyghurs in the Xinjiang region of China” according to the committee’s press release. The committee wrote to Amazon and TikTok because, as the chair of the committee, Member of Parliament (MP) Nusrat Ghani, asserted:
    • The Australian Strategic Policy Institute’s (ASPI) ‘Uyghurs for Sale’ report names 82 foreign and Chinese companies directly or indirectly benefiting from the exploitation of Uyghur workers in Xinjiang. The companies listed in the Australian Strategic Policy Institute’s report span industries including the fashion, retail and information technology sectors. On the BEIS Committee, we are determined to ask prominent businesses operating in Britain in these sectors what they are doing to ensure their profits are not on the back of forced labour in China. These businesses are trusted by many British consumers and I hope they will repay this faith by coming forward to answer these questions and also take up the opportunity to give evidence to the Business Committee in public.
    • In its March report, the ASPI argued:
      • The Chinese government has facilitated the mass transfer of Uyghur and other ethnic minority citizens from the far west region of Xinjiang to factories across the country. Under conditions that strongly suggest forced labour, Uyghurs are working in factories that are in the supply chains of at least 82 well-known global brands in the technology, clothing and automotive sectors, including Apple, BMW, Gap, Huawei, Nike, Samsung, Sony and Volkswagen.
      • This report estimates that more than 80,000 Uyghurs were transferred out of Xinjiang to work in factories across China between 2017 and 2019, and some of them were sent directly from detention camps. The estimated figure is conservative and the actual figure is likely to be far higher. In factories far away from home, they typically live in segregated dormitories, undergo organised Mandarin and ideological training outside working hours, are subject to constant surveillance, and are forbidden from participating in religious observances. Numerous sources, including government documents, show that transferred workers are assigned minders and have limited freedom of movement.
      • China has attracted international condemnation for its network of extrajudicial ‘re-education camps’ in Xinjiang. This report exposes a new phase in China’s social re-engineering campaign targeting minority citizens, revealing new evidence that some factories across China are using forced Uyghur labour under a state-sponsored labour transfer scheme that is tainting the global supply chain.
  • A group of nations worked together to find and apprehend individuals accused of laundering ill-gotten funds for cyber criminals. The United States (U.S.) indicted the accused. Europol explained:
    • An unprecedented international law enforcement operation involving 16 countries has resulted in the arrest of 20 individuals suspected of belonging to the QQAAZZ criminal network which attempted to launder tens of millions of euros on behalf of the world’s foremost cybercriminals. 
    • Some 40 house searches were carried out in Latvia, Bulgaria, the United Kingdom, Spain and Italy, with criminal proceedings initiated against those arrested by the United States, Portugal, the United Kingdom and Spain. The largest number of searches in the case were carried out in Latvia in operations led by the Latvian State Police (Latvijas Valsts Policija). Bitcoin mining equipment was also seized in Bulgaria.
    • This international sweep follows a complex investigation led by the Portuguese Judicial Police (Polícia Judiciária) together with the United States Attorney Office for the Western District of Pennsylvania and the FBI’s Pittsburgh Field Office, alongside the Spanish National Police (Policia Nacional) and the regional Catalan police (Mossos D’esquadra) and law enforcement authorities from the United Kingdom, Latvia, Bulgaria, Georgia, Italy, Germany, Switzerland, Poland, Czech Republic, Australia, Sweden, Austria and Belgium with coordination efforts led by Europol. 
    • The U.S. Department of Justice (DOJ) claimed:
      • Comprised of several layers of members from Latvia, Georgia, Bulgaria, Romania, and Belgium, among other countries, the QQAAZZ network opened and maintained hundreds of corporate and personal bank accounts at financial institutions throughout the world to receive money from cybercriminals who stole it from bank accounts of victims.  The funds were then transferred to other QQAAZZ-controlled bank accounts and sometimes converted to cryptocurrency using “tumbling” services designed to hide the original source of the funds.  After taking a fee of up to 40 to 50 percent, QQAAZZ returned the balance of the stolen funds to their cybercriminal clientele.  
      • The QQAAZZ members secured these bank accounts by using both legitimate and fraudulent Polish and Bulgarian identification documents to create and register dozens of shell companies which conducted no legitimate business activity. Using these registration documents, the QQAAZZ members then opened corporate bank accounts in the names of the shell companies at numerous financial institutions around the world, thereby generating hundreds of QQAAZZ-controlled bank accounts available to receive stolen funds from cyber thieves.
      • QQAAZZ advertised its services as a “global, complicit bank drops service” on Russian-speaking online cybercriminal forums where cybercriminals gather to offer or seek specialized skills or services needed to engage in a variety of cybercriminal activities. The criminal gangs behind some of the world’s most harmful malware families (e.g.: Dridex, Trickbot, GozNym, etc.) are among those cybercriminal groups that benefited from the services provided by QQAAZZ. 
  • Representatives Anna Eshoo (D-CA) and Bobby L. Rush (D-IL), and Senator Ron Wyden (D-OR) wrote the Privacy and Civil Liberties Oversight Board (PCLOB) asking that the privacy watchdog “investigate the federal government’s surveillance of recent protests, the legal authorities for that surveillance, the government’s adherence to required procedures in using surveillance equipment, and the chilling effect that federal government surveillance has had on protesters.”
    • They argued:
      • Many agencies have or may have surveilled protesters, according to press reports and agency documents.
        • The Customs and Border Protection (CBP) deployed various aircraft – including AS350 helicopters, a Cessna single-engine airplane, and Predator drones – that logged 270 hours of aerial surveillance footage over 15 cities, including Minneapolis, New York City, Buffalo, Philadelphia, Detroit, and Washington, D.C.
        • The FBI flew Cessna 560 aircraft over protests in Washington, D.C., in June, and reporting shows that the FBI has previously equipped such aircraft with ‘dirt boxes,’ equipment that can collect cell phone location data, along with sophisticated cameras for long-range, persistent video surveillance.
        • In addition to specific allegations of protester surveillance, the Drug Enforcement Administration (DEA) was granted broad authority to “conduct covert surveillance” over protesters responding to the murder of Mr. Floyd.
    • Eshoo, Rush, and Wyden claimed:
      • Recent surveillance of protests involves serious threats to liberty and requires a thorough investigation. We ask that PCLOB thoroughly investigate, including by holding public hearings, the following issues and issue a public report about its findings:
        • (1) Whether and to what extent federal government agencies surveilled protests by collecting or processing personal information of protesters.
        • (2) What legal authorities agencies are using as the basis for surveillance, an unclassified enumeration of claimed statutory or other authorities, and whether agencies followed required procedures for using surveillance equipment, acquiring and processing personal data, receiving appropriate approvals, and providing needed transparency.
        • (3) To what extent the threat of surveillance has a chilling effect on protests.
  • Ireland’s Data Protection Commission (DPC) has opened two inquiries into Facebook and Instagram for potential violations of the General Data Protection Regulation (GDPR) and Ireland’s Data Protection Act 2018. This is not the only regulatory action the DPC has pending against Facebook, which is headquartered in Dublin. The DPC is reportedly trying to stop Facebook from transferring personal data out of the European Union (EU) and into the United States (U.S.) using standard contractual clauses (SCC) in light of the EU-U.S. Privacy Shield being struck down. The DPC stated “Instagram is a social media platform which is used widely by children in Ireland and across Europe…[and] [t]he DPC has been actively monitoring complaints received from individuals in this area and has identified potential concerns in relation to the processing of children’s personal data on Instagram which require further examination.”
    • The DPC explained the two inquiries:
      • This Inquiry will assess Facebook’s reliance on certain legal bases for its processing of children’s personal data on the Instagram platform. The DPC will set out to establish whether Facebook has a legal basis for the ongoing processing of children’s personal data and if it employs adequate protections and or restrictions on the Instagram platform for such children. This Inquiry will also consider whether Facebook meets its obligations as a data controller with regard to transparency requirements in its provision of Instagram to children.
      • This Inquiry will focus on Instagram profile and account settings and the appropriateness of these settings for children. Amongst other matters, this Inquiry will explore Facebook’s adherence with the requirements in the GDPR in respect to Data Protection by Design and Default and specifically in relation to Facebook’s responsibility to protect the data protection rights of children as vulnerable persons.
  • The U.S. National Institute of Standards and Technology (NIST) issued a draft version of the Cybersecurity Profile for the Responsible Use of Positioning, Navigation and Timing (PNT) Services (NISTIR 8323). Comments are due by 23 November.
    • NIST explained:
      • NIST has developed this PNT cybersecurity profile to help organizations identify systems, networks, and assets dependent on PNT services; identify appropriate PNT services; detect the disruption and manipulation of PNT services; and manage the associated risks to the systems, networks, and assets dependent on PNT services. This profile will help organizations make deliberate, risk-informed decisions on their use of PNT services.
    • In its June request for information (RFI), NIST explained “Executive Order 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services, was issued on February 12, 2020 and seeks to protect the national and economic security of the United States from disruptions to PNT services that are vital to the functioning of technology and infrastructure, including the electrical power grid, communications infrastructure and mobile devices, all modes of transportation, precision agriculture, weather forecasting, and emergency response.” The EO directed NIST “to develop and make available, to at least the appropriate agencies and private sector users, PNT profiles.”

Coming Events

  • The Senate Commerce, Science, and Transportation Committee will hold a hearing on 28 October regarding 47 U.S.C. 230 titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” with testimony from:
    • Jack Dorsey, Chief Executive Officer of Twitter;
    • Sundar Pichai, Chief Executive Officer of Alphabet Inc. and its subsidiary, Google; and 
    • Mark Zuckerberg, Chief Executive Officer of Facebook.
  • On 29 October, the Federal Trade Commission (FTC) will hold a workshop titled “Green Lights & Red Flags: FTC Rules of the Road for Business” that “will bring together Ohio business owners and marketing executives with national and state legal experts to provide practical insights to business and legal professionals about how established consumer protection principles apply in today’s fast-paced marketplace.”
  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including that of Nathan Simington to be a Member of the Federal Communications Commission (FCC).


“How Encryption Works” by Afsal CMK is licensed under CC BY 4.0

TikTok Sues Trump Administration

TikTok files a longshot lawsuit that may soon be moot if the company’s operations in the U.S. are sold.     

No one in the White House or the Administration should be terribly surprised that TikTok decided to sue over the 6 August “Executive Order on Addressing the Threat Posed by TikTok.” The company alleges the President and his Administration exceeded the bounds of the authority granted by Congress and violated the company’s rights under the United States (U.S.) Constitution. TikTok wants a court to stop the Trump Administration from implementing the executive order (EO) and to declare the EO unconstitutional and illegal. The court may rule quickly on whether to enjoin the Trump Administration, but deciding the merits of the case will likely take much longer. In any event, this suit could soon be moot if ByteDance sells off its U.S. operations of TikTok to a U.S. company, for the EO would likely be rescinded in that case.

The EO bars, starting 45 days after its issuance, all transactions between U.S. entities or persons and ByteDance and its subsidiaries. Specifically, “to the extent permitted under applicable law: any transaction [is prohibited] by any person, or with respect to any property, subject to the jurisdiction of the United States, with [ByteDance], or its subsidiaries, in which any such company has any interest…” The Trump Administration claimed:

TikTok, a video-sharing mobile application owned by the Chinese company ByteDance Ltd., has reportedly been downloaded over 175 million times in the United States and over one billion times globally.  TikTok automatically captures vast swaths of information from its users, including Internet and other network activity information such as location data and browsing and search histories.  This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

In the suit filed in United States federal court in Northern California, TikTok is asking for an injunction to stop enforcement of the EO and a declaration that it is illegal. The company specifically asserts:

The executive order and, necessarily, any implementing regulations are unlawful and unconstitutional for a number of independent reasons:

  • By banning TikTok with no notice or opportunity to be heard (whether before or after the fact), the executive order violates the due process protections of the Fifth Amendment.
  • The order is ultra vires because it is not based on a bona fide national emergency and authorizes the prohibition of activities that have not been found to pose “an unusual and extraordinary threat.”
  • The order is ultra vires because its prohibitions sweep broadly to prohibit any transactions with ByteDance, although the purported threat justifying the order is limited to TikTok, just one of ByteDance’s businesses.
  • The order is ultra vires because it restricts personal communications and the transmission of informational materials, in direct violation of the International Emergency Economic Powers Act (IEEPA).
  • IEEPA lacks any intelligible principle to guide or constrain the President’s action and thereby violates the non-delegation doctrine, as the President’s overbroad and unjustified claim of authority in this matter confirms.
  • By demanding that Plaintiffs make a payment to the U.S. Treasury as a condition for the sale of TikTok, the President has taken Plaintiffs’ property without compensation in violation of the Fifth Amendment.
  • By preventing TikTok Inc. from operating in the United States the executive order violates TikTok Inc.’s First Amendment rights in its code, an expressive means of communication.

In a press release, TikTok contended:

To be clear, we far prefer constructive dialogue over litigation. But with the [EO] threatening to bring a ban on our US operations – eliminating the creation of 10,000 American jobs and irreparably harming the millions of Americans who turn to this app for entertainment, connection, and legitimate livelihoods that are vital especially during the pandemic – we simply have no choice.

It bears noting that suits challenging a President’s use of IEEPA have rarely succeeded, including on many of the same grounds TikTok is raising. Courts have rejected claims that a President’s use of these powers violates the Fifth and First Amendments and the non-delegation doctrine.

Additionally, a TikTok employee has filed suit against the Trump Administration, making some of the same arguments against the EO but contending further:

Given the severe civil and criminal penalties in place for violating the Executive Order, and the overbroad nature of its language, it is obvious that TikTok and its employees, as well as other companies involved in the process of distributing wages and salaries to U.S. employees, such as ADP, banks, and credit companies, would not dare to engage in any activity that might be construed as a violation. The broad language of the order necessarily will create a chilling effect for any person or entity that has contracted with or that does business with TikTok.

Of course, there is also litigation pending against TikTok over alleged privacy violations, including one case before the same court in Northern California. A college student filed suit, arguing:

Unknown to its users, however, is that TikTok also includes Chinese surveillance software. TikTok clandestinely has vacuumed up and transferred to servers in China vast quantities of private and personally-identifiable user data that can be employed to identify, profile and track the location and activities of users in the United States now and in the future. TikTok also has surreptitiously taken user content, such as draft videos never intended for publication, without user knowledge or consent. In short, TikTok’s lighthearted fun comes at a heavy cost. Meanwhile, TikTok unjustly profits from its secret harvesting of private and personally-identifiable user data by, among other things, using such data to derive vast targeted-advertising revenues and profits. Its conduct violates statutory, Constitutional, and common law privacy, data, and consumer protections.

The plaintiff asserted TikTok violated the following U.S. and California laws and common law legal doctrines:

  • Computer Fraud and Abuse Act, 18 U.S.C. § 1030
  • California Comprehensive Data Access and Fraud Act, Cal. Pen. C. § 502
  • Right to Privacy – California Constitution
  • Intrusion upon Seclusion
  • California Unfair Competition Law, Bus. & Prof. C. §§ 17200 et seq.
  • California False Advertising Law, Bus. & Prof. C. §§ 17500 et seq.
  • Negligence
  • Restitution / Unjust Enrichment


U.S. Orders ByteDance To Unwind Deal That Created TikTok

The Trump Administration ups the ante with TikTok and orders its parent to divest the app that formed the core of the popular short video sharing platform.

In an order issued late last week, the Trump Administration completed its retrospective review of ByteDance’s acquisition of the app Musical.ly that became TikTok. The decision on whether ByteDance’s acquisition threatened the national security of the United States (U.S.) is separate from the executive order released earlier in the week banning the app. The Trump Administration is giving ByteDance 90 days to sell Musical.ly, a move that may well impair TikTok in nations other than the U.S. It is not immediately clear how this order affects the executive order issued a week earlier barring all transactions with TikTok.

The Committee on Foreign Investment in the United States (CFIUS) has been reviewing ByteDance’s acquisition on national security grounds, but the fact that the CFIUS process wrapped up the same week the Trump Administration issued an order banning TikTok in the U.S. is curious, to say the least. There have been media accounts for some time that the CFIUS agencies were looking at the ByteDance deal because of increasing tensions with the People’s Republic of China (PRC). While it is not a frequent occurrence, there is precedent for retrospective use of the CFIUS process. For example, in March 2019, the Trump Administration ordered Kunlun, a PRC gaming firm, to spin off Grindr, an LGBTQ dating app, for similar national security reasons.

In the order, the Trump Administration makes the case that “[t]here is credible evidence that leads me to believe that ByteDance Ltd., an exempted company with limited liability incorporated under the laws of the Cayman Islands (“ByteDance”), through acquiring all interests in musical.​ly, an exempted company with limited liability incorporated under the laws of the Cayman Islands (“Musical.​ly”), might take action that threatens to impair the national security of the United States.” The Trump Administration has been expressing concern that PRC companies have been sharing the personal data of users, many of whom are Americans, with the PRC government because of recent changes in law that require information sharing with authorities in Beijing.

In the “Order Regarding the Acquisition of Musical.ly by ByteDance Ltd,” President Donald Trump stated:

The transaction resulting in the acquisition by ByteDance of Musical.​ly, to the extent that Musical.​ly or any of its assets is used in furtherance or support of, or relating to, Musical.​ly’s activities in interstate commerce in the United States (“Musical.​ly in the United States”), is hereby prohibited, and ownership by ByteDance of any interest in Musical.​ly in the United States, whether effected directly or indirectly through ByteDance, or through ByteDance’s subsidiaries, affiliates, or Chinese shareholders, is also prohibited.

Moreover, ByteDance is obligated to destroy the user data at issue upon divestment. Specifically, the order directs:

Immediately upon divestment, ByteDance shall certify in writing to CFIUS that it has destroyed all data that it is required to divest…as well as all copies of such data wherever located, and CFIUS is authorized to require auditing of ByteDance on terms it deems appropriate in order to ensure that such destruction of data is complete.

Additionally, during the 90-day period preceding divestment, CFIUS is authorized to take the steps necessary to ensure ByteDance’s compliance.

The week before, the White House acted against two popular applications from the PRC on account of purported national security issues created by Americans downloading and using them. The White House issued an “Executive Order on Addressing the Threat Posed by TikTok” and an “Executive Order on Addressing the Threat Posed by WeChat” that bar any transactions with the companies that make, distribute, and operate TikTok and WeChat respectively, the former being much more popular in the United States (U.S.) than the latter. These bans are also of a piece with the Trump Administration’s narrative that the PRC is responsible for COVID-19 and poses an existential threat to Western democracy. In response, the PRC is likely to increase pressure on U.S. and foreign firms operating in that nation or with supply chains rooted in the PRC. In any event, it is not clear how effective these directives will be, and the companies being targeted are almost certain to sue to stop enforcement.
