Further Reading, Other Development, and Coming Events (8 December)

Further Reading

  • “Facebook failed to put fact-check labels on 60% of the most viral posts containing Georgia election misinformation that its own fact-checkers had debunked, a new report says” By Tyler Sonnemaker — Business Insider. Despite its vows to improve its handling of untrue and misleading content, the platform is not consistently taking down such material related to the runoffs for the Georgia Senate seats. The group behind this finding argues this is because Facebook does not want to. What is left unsaid is that engagement drives revenue, and so Facebook’s incentive is not to police all violations but rather to take down just enough to be able to say it is doing something.
  • “Federal Labor Agency Says Google Wrongly Fired 2 Employees” By Kate Conger and Noam Scheiber — The New York Times. The National Labor Relations Board (NLRB) has reportedly sided with two employees Google fired for activities that are traditionally considered labor organizing. The two engineers had been dismissed for allegedly violating the company’s data security practices when they researched the company’s retention of a union-busting firm and sought to alert others about organizing. Even though Google is vowing to fight the action, which has not been finalized, it may well settle given the view of Big Tech in Washington these days. This action could also foretell how a Biden Administration NLRB may look at the labor practices of these companies.
  • “U.S. states plan to sue Facebook next week: sources” By Diane Bartz — Reuters. We could see state and federal antitrust suits against Facebook this week. One investigation, led by New York Attorney General Tish James, could include 40 states, although the grounds for the alleged violations have not been leaked at this point. The grounds may be Facebook’s acquisitions of potential rivals Instagram and WhatsApp, which have allowed it to dominate the social messaging market. The Federal Trade Commission (FTC) may also file suit, and, again, the grounds are unknown. The European Commission (EC) is also investigating Facebook for possible violations of European Union (EU) antitrust law over the company’s use of the personal data it holds and uses and over its operation of its online marketplace.
  • “The Children of Pornhub” By Nicholas Kristof — The New York Times. This column comprehensively traces the reprehensible recent history of Mindgeek, the Canadian conglomerate that owns Pornhub, where one can find reams of child and non-consensual pornography. Why Ottawa has not cracked down on this firm is a mystery. The passage and implementation of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164), which narrowed the liability shield under 47 USC 230, has forced the company to remove content, a significant change from its indifference before the statutory change. Kristof suggests some easy, common-sense changes Mindgeek could implement to combat the presence of this illegal material, but it seems likely the company will do just enough to say it is acting without seriously reforming its platform. Why would it? There is too much money to be made. Additionally, those fighting against this sort of material have been pressuring payment platforms to stop doing business with Mindgeek. PayPal has forsworn any interaction, and, under pressure, Visa and Mastercard are “reviewing” their relationships with Mindgeek and Pornhub. In a statement to a different news outlet, Pornhub claimed it is “unequivocally committed to combating child sexual abuse material (CSAM), and has instituted a comprehensive, industry-leading trust and safety policy to identify and eradicate illegal material from our community.” The company further claimed “[a]ny assertion that we allow CSAM is irresponsible and flagrantly untrue….[w]e have zero tolerance for CSAM.”
  • “Amazon and Apple Are Powering a Shift Away From Intel’s Chips” By Don Clark — The New York Times. Two tech giants have chosen new, faster, cheaper chips, signaling a possible industry shift away from Intel, the firm that has been a significant player for decades. Intel will not go quietly, of course, and a key variable is whether must-have software and applications are rewritten to accommodate the new chips based on designs from the British firm Arm.

Other Developments

  • The Government Accountability Office (GAO) and the National Academy of Medicine (NAM) have released a joint report on artificial intelligence in healthcare, consisting of GAO’s Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care and NAM’s Special Publication: Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. GAO’s report “discusses three topics: (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges to the development and adoption of these tools, and (3) policy options to maximize benefits and mitigate challenges to the use of AI tools to augment patient care.” NAM’s “paper aims to provide an analysis of: 1) current technologies and future applications of AI in HSOHC, 2) the logistical steps and challenges involved in integrating AI-HSOHC applications into existing provider workflows, and 3) the ethical and legal considerations of such AI tools, followed by a brief proposal of potential key initiatives to guide the development and adoption of AI in health settings outside the hospital and clinic (HSOHC).”
    • The GAO “identified five categories of clinical applications where AI tools have shown promise to augment patient care: predicting health trajectories, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management.” The GAO “also identified three categories of administrative applications where AI tools have shown promise to reduce provider burden and increase the efficiency of patient care: recording digital clinical notes, optimizing operational processes, and automating laborious tasks.” The GAO stated:
      • This technology assessment also identifies challenges that hinder the adoption and impact of AI tools to augment patient care, according to stakeholders, experts, and the literature. Difficulties accessing sufficient high-quality data may hamper innovation in this space. Further, some available data may be biased, which can reduce the effectiveness and accuracy of the tools for some people. Addressing bias can be difficult because the electronic health data do not currently represent the general population. It can also be challenging to scale tools up to multiple locations and integrate them into new settings because of differences in institutions and the patient populations they serve. The limited transparency of AI tools used in health care can make it difficult for providers, regulators, and others to determine whether an AI tool is safe and effective. A greater dispersion of data across providers and institutions can make securing patient data difficult. Finally, one expert described how existing case law does not specifically address AI tools, which can make providers and patients reticent to adopt them. Some of these challenges are similar to those identified previously by GAO in its first publication in this series, such as the lack of high-quality, structured data, and others are more specific to patient care, such as liability concerns.
    • The GAO “described six policy options:”
      • Collaboration. Policymakers could encourage interdisciplinary collaboration between developers and health care providers. This could result in AI tools that are easier to implement and use within an existing workflow.
      • Data Access. Policymakers could develop or expand high-quality data access mechanisms. This could help developers address bias concerns by ensuring data are representative, transparent, and equitable.
      • Best Practices. Policymakers could encourage relevant stakeholders and experts to establish best practices (such as standards) for development, implementation, and use of AI technologies. This could help with deployment and scalability of AI tools by providing guidance on data, interoperability, bias, and formatting issues.
      • Interdisciplinary Education. Policymakers could create opportunities for more workers to develop interdisciplinary skills. This could allow providers to use AI tools more effectively, and could be accomplished through a variety of methods, including changing medical curricula or grants.
      • Oversight Clarity. Policymakers could collaborate with relevant stakeholders to clarify appropriate oversight mechanisms. Predictable oversight could help ensure that AI tools remain safe and effective after deployment and throughout their lifecycle.
      • Status Quo. Policymakers could allow current efforts to proceed without intervention.
    • NAM claimed
      • Numerous AI-powered health applications designed for personal use have been shown to improve patient outcomes, building predictions based on large volumes of granular, real-time, and individualized behavioral and medical data. For instance, some forms of telehealth, a technology that has been critical during the COVID-19 pandemic, benefit considerably from AI software focused on natural language processing, which enables efficient triaging of patients based on urgency and type of illness. Beyond patient-provider communication, AI algorithms relevant to diabetic and cardiac care have demonstrated remarkable efficacy in helping patients manage their blood glucose levels in their day-to-day lives and in detecting cases of atrial fibrillation. AI tools that monitor and synthesize longitudinal patient behaviors are also particularly useful in psychiatric care, where the exact timing of interventions is often critical. For example, smartphone-embedded sensors that track location and proximity of individuals can alert clinicians of possible substance use, prompting immediate intervention. On the population health level, these individual indicators of activity and health can be combined with environmental- and system-level data to generate predictive insight into local and global health trends. The most salient example of this may be the earliest warnings of the COVID-19 outbreak, issued in December 2019 by two private AI technology firms.
      • Successful implementation and widespread adoption of AI applications in HSOHC requires careful consideration of several key issues related to personal data, algorithm development, and health care insurance and payment. Chief among them are data interoperability, standardization, privacy, ameliorating systemic biases in algorithms, reimbursement of AI-assisted services, quality improvement, and integration of AI tools into provider workflows. Overcoming these challenges and optimizing the impact of AI tools on clinical outcomes will involve engaging diverse stakeholders, deliberately designing AI tools and interfaces, rigorously evaluating clinical and economic utility, and diffusing and scaling algorithms across different health settings. In addition to these potential logistical and technical hurdles, it is imperative to consider the legal and ethical issues surrounding AI, particularly as it relates to the fair and humanistic deployment of AI applications in HSOHC. Important legal considerations include the appropriate designation of accountability and liability of medical errors resulting from AI-assisted decisions for ensuring the safety of patients and consumers. Key ethical challenges include upholding the privacy of patients and their data—particularly with regard to non-HIPAA covered entities involved in the development of algorithms—building unbiased AI algorithms based on high-quality data from representative populations, and ensuring equitable access to AI technologies across diverse communities.
  • The National Institute of Standards and Technology (NIST) published a “new study of face recognition technology created after the onset of the COVID-19 pandemic [that] shows that some software developers have made demonstrable progress at recognizing masked faces.” In Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face Recognition Accuracy with Face Masks Using Post-COVID-19 Algorithms (NISTIR 8331), NIST stated the “report augments its predecessor with results for more recent algorithms provided to NIST after mid-March 2020.” NIST said that “[w]hile we do not have information on whether or not a particular algorithm was designed with face coverings in mind, the results show evidence that a number of developers have adapted their algorithms to support face recognition on subjects potentially wearing face masks.” NIST stated that
    • The following results represent observations on algorithms provided to NIST both before and after the COVID-19 pandemic to date. We do not have information on whether or not a particular algorithm was designed with face coverings in mind. The results documented capture a snapshot of algorithms submitted to the FRVT 1:1 in face recognition on subjects potentially wearing face masks.
      • False rejection performance: All algorithms submitted after the pandemic continue to give increased false non-match rates (FNMR) when the probes are masked. While a few pre-pandemic algorithms still remain within the most accurate on masked photos, some developers have submitted algorithms after the pandemic showing significantly improved accuracy and are now among the most accurate in our test.
      • Evolution of algorithms on face masks: We observe that a number of algorithms submitted since mid-March 2020 show notable reductions in error rates with face masks over their pre-pandemic predecessors. When comparing error rates for unmasked versus masked faces, the median FNMR across algorithms submitted since mid-March 2020 has been reduced by around 25% from the median pre-pandemic results. The figure below presents examples of developer evolution on both masked and unmasked datasets. For some developers, false rejection rates in their algorithms submitted since mid-March 2020 decreased by as much as a factor of 10 over their pre-pandemic algorithms, which is evidence that some providers are adapting their algorithms to handle face masks. However, in the best cases, when comparing results for unmasked images to masked images, false rejection rates have increased from 0.3%-0.5% (unmasked) to 2.4%-5% (masked).
      • False acceptance performance: As most systems are configured with a fixed threshold, it is necessary to report both false negative and false positive rates for each group at that threshold. When comparing a masked probe to an unmasked enrollment photo, in most cases, false match rates (FMR) are reduced by masks. The effect is generally modest with reductions in FMR usually being smaller than a factor of two. This property is valuable in that masked probes do not impart adverse false match security consequences for verification.
      • Mask-agnostic face recognition: All 1:1 verification algorithms submitted to the FRVT test since the start of the pandemic are evaluated on both masked and unmasked datasets. The test is designed this way to mimic operational reality: some images will have masks, some will not (especially enrollment samples from a database or ID card). And to the extent that the use of protective masks will exist for some time, our test will continue to evaluate algorithmic capability on verifying all combinations of masked and unmasked faces.
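As a hypothetical aside, the fixed-threshold trade-off NIST describes (reporting both false non-match and false match rates at a single operating point) can be sketched in a few lines of Python. The similarity scores and threshold below are invented for illustration, not drawn from the NIST data:

```python
# Toy illustration of the error rates NIST reports: at a fixed match
# threshold, a 1:1 verification system's false non-match rate (FNMR) is
# the share of genuine (same-person) comparisons scored below the
# threshold, and its false match rate (FMR) is the share of impostor
# (different-person) comparisons scored at or above it.

def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (FNMR, FMR) for a fixed decision threshold."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

# Hypothetical similarity scores in [0, 1].
genuine = [0.92, 0.88, 0.75, 0.61, 0.95, 0.83, 0.90, 0.58, 0.87, 0.94]
impostor = [0.10, 0.22, 0.35, 0.48, 0.05, 0.31, 0.18, 0.41, 0.27, 0.62]

fnmr, fmr = error_rates(genuine, impostor, threshold=0.60)
print(f"FNMR={fnmr:.0%}  FMR={fmr:.0%}")  # → FNMR=10%  FMR=10%
```

Raising the threshold in this sketch pushes FNMR up and FMR down, which is why NIST notes that both rates must be reported at the threshold a system actually deploys.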
  • The government in London has issued a progress report on its current cybersecurity strategy that has another year to run. The Paymaster General assessed how well the United Kingdom (UK) has implemented the National Cyber Security Strategy 2016 to 2021 and pointed to goals yet to be achieved. This assessment comes in the shadow of the pending exit of the UK from the European Union (EU) and Prime Minister Boris Johnson’s plans to increase the UK’s role in select defense issues, including cyber operations. The Paymaster General stated:
    • The global landscape has changed significantly since the publication of the National Cyber Security Strategy Progress Report in May 2019. We have seen unprecedented levels of disruption to our way of life that few would have predicted. The COVID-19 pandemic has increased our reliance on digital technologies – for our personal communications with friends and family and our ability to work remotely, as well as for businesses and government to continue to operate effectively, including in support of the national response.
    • These new ways of living and working highlight the importance of cyber security, which is also underlined by wider trends. An ever greater reliance on digital networks and systems, more rapid advances in new technologies, a wider range of threats, and increasing international competition on underlying technologies and standards in cyberspace, emphasise the need for good cyber security practices for individuals, businesses and government.
    • Although the scale and international nature of these changes present challenges, there are also opportunities. With the UK’s departure from the European Union in January 2020, we can define and strengthen Britain’s place in the world as a global leader in cyber security, as an independent, sovereign nation.
    • The sustained, strategic investment and whole of society approach delivered so far through the National Cyber Security Strategy has ensured we are well placed to respond to this changing environment and seize new opportunities.
    • The Paymaster General asserted:
      • [The] report has highlighted growing risks, some accelerated by the COVID-19 pandemic, and longer-term trends that will shape the environment over the next decade:
      • Ever greater reliance on digital networks and systems as daily life moves online, bringing huge benefits but also creating new systemic and individual risks.
      • Rapid technological change and greater global competition, challenging our ability to shape the technologies that will underpin our future security and prosperity.
      • A wider range of adversaries as criminals gain easier access to commoditised attack capabilities and cyber techniques form a growing part of states’ toolsets.
      • Competing visions for the future of the internet and the risk of fragmentation, making consensus on norms and ethics in cyberspace harder to achieve.
      • In February 2020 the Prime Minister announced the Integrated Review of Security, Defence, Development and Foreign Policy. This will define the government’s ambition for the UK’s role in the world and the long-term strategic aims of our national security and foreign policy. It will set out the way in which the UK will be a problem-solving and burden-sharing nation, and a strong direction for recovery from COVID-19, at home and overseas.
      • This will help to shape our national approach and priorities on cyber security beyond 2021. Cyber security is a key element of our international, defence and security posture, as well as a driving force for our economic prosperity.
  • The University of Toronto’s Citizen Lab published a report on an Israeli surveillance firm that uses “[o]ne of the widest-used—but least appreciated” means of surveilling people (i.e., “leveraging of weaknesses in the global mobile telecommunications infrastructure to monitor and intercept phone calls and traffic.”) Citizen Lab explained that an affiliate of the NSO Group, “Circles is known for selling systems to exploit Signaling System 7 (SS7) vulnerabilities, and claims to sell this technology exclusively to nation-states.” Citizen Lab noted that “[u]nlike NSO Group’s Pegasus spyware, the SS7 mechanism by which Circles’ product reportedly operates does not have an obvious signature on a target’s phone, such as the telltale targeting SMS bearing a malicious link that is sometimes present on a phone targeted with Pegasus.” Citizen Lab found that
    • Circles is a surveillance firm that reportedly exploits weaknesses in the global mobile phone system to snoop on calls, texts, and the location of phones around the globe. Circles is affiliated with NSO Group, which develops the oft-abused Pegasus spyware.
    • Circles, whose products work without hacking the phone itself, says they sell only to nation-states. According to leaked documents, Circles customers can purchase a system that they connect to their local telecommunications companies’ infrastructure, or can use a separate system called the “Circles Cloud,” which interconnects with telecommunications companies around the world.
    • According to the U.S. Department of Homeland Security, all U.S. wireless networks are vulnerable to the types of weaknesses reportedly exploited by Circles. A majority of networks around the globe are similarly vulnerable.
    • Using Internet scanning, we found a unique signature associated with the hostnames of Check Point firewalls used in Circles deployments. This scanning enabled us to identify Circles deployments in at least 25 countries.
    • We determine that the governments of the following countries are likely Circles customers: Australia, Belgium, Botswana, Chile, Denmark, Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala, Honduras, Indonesia, Israel, Kenya, Malaysia, Mexico, Morocco, Nigeria, Peru, Serbia, Thailand, the United Arab Emirates (UAE), Vietnam, Zambia, and Zimbabwe.
    • Some of the specific government branches we identify with varying degrees of confidence as being Circles customers have a history of leveraging digital technology for human rights abuses. In a few specific cases, we were able to attribute the deployment to a particular customer, such as the Security Operations Command (ISOC) of the Royal Thai Army, which has allegedly tortured detainees.
  • Senators Ron Wyden (D-OR), Elizabeth Warren (D-MA), Edward J. Markey (D-MA), and Brian Schatz (D-HI) “announced that the Department of Homeland Security (DHS) will launch an inspector general investigation into Customs and Border Protection’s (CBP) warrantless tracking of phones in the United States following an inquiry from the senators earlier this year” per their press release.
    • The Senators added:
      • As revealed by public contracts, CBP has paid a government contractor named Venntel nearly half a million dollars for access to a commercial database containing location data mined from applications on millions of Americans’ mobile phones. CBP officials also confirmed the agency’s warrantless tracking of phones in the United States using Venntel’s product in a September 16, 2020 call with Senate staff.
      • In 2018, the Supreme Court held in Carpenter v. United States that the collection of significant quantities of historical location data from Americans’ cell phones is a search under the Fourth Amendment and therefore requires a warrant.
      • In September 2020, Wyden and Warren successfully pressed for an inspector general investigation into the Internal Revenue Service’s use of Venntel’s commercial location tracking service without a court order.
    • In a letter, the DHS Office of the Inspector General (OIG) explained:
      • We have reviewed your request and plan to initiate an audit that we believe will address your concerns. The objective of our audit is to determine if the Department of Homeland Security (DHS) and it [sic] components have developed, updated, and adhered to policies related to cell-phone surveillance devices. In addition, you may be interested in our audit to review DHS’ use and protection of open source intelligence. Open source intelligence, while different from cell phone surveillance, includes the Department’s use of information provided by the public via cellular devices, such as social media status updates, geo-tagged photos, and specific location check-ins.
    • In an October letter, these Senators plus Senator Sherrod Brown (D-OH) argued:
      • CBP is not above the law and it should not be able to buy its way around the Fourth Amendment. Accordingly, we urge you to investigate CBP’s warrantless use of commercial databases containing Americans’ information, including but not limited to Venntel’s location database. We urge you to examine what legal analysis, if any, CBP’s lawyers performed before the agency started to use this surveillance tool. We also request that you determine how CBP was able to begin operational use of Venntel’s location database without the Department of Homeland Security Privacy Office first publishing a Privacy Impact Assessment.
  • The American Civil Liberties Union (ACLU) has filed a lawsuit in a federal court in New York City, seeking an order to compel the United States (U.S.) Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE) “to release records about their purchases of cell phone location data for immigration enforcement and other purposes.” The ACLU made these information requests after numerous media accounts showing that these and other U.S. agencies were buying location data and other sensitive information in ways intended to evade the bar in the Fourth Amendment against unreasonable searches.
    • In its press release, the ACLU asserted:
      • In February, The Wall Street Journal reported that this sensitive location data isn’t just for sale to commercial entities, but is also being purchased by U.S. government agencies, including by U.S. Immigrations and Customs Enforcement to locate and arrest immigrants. The Journal identified one company, Venntel, that was selling access to a massive database to the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and ICE. Subsequent reporting has identified other companies selling access to similar databases to DHS and other agencies, including the U.S. military.
      • These practices raise serious concerns that federal immigration authorities are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. There’s even more reason for alarm when those agencies evade requests for information — including from U.S. senators — about such practices. That’s why today we asked a federal court to intervene and order DHS, CBP, and ICE to release information about their purchase and use of precise cell phone location information. Transparency is the first step to accountability.
    • The ACLU explained in the suit:
      • Multiple news sources have confirmed these agencies’ purchase of access to databases containing precise location information for millions of people—information gathered by applications (apps) running on their smartphones. The agencies’ purchases raise serious concerns that they are evading Fourth Amendment protections for cell phone location information by paying for access instead of obtaining a warrant. Yet, more than nine months after the ACLU submitted its FOIA request (“the Request”), these agencies have produced no responsive records. The information sought is of immense public significance, not only to shine a light on the government’s use of powerful location-tracking data in the immigration context, but also to assess whether the government’s purchase of this sensitive data complies with constitutional and legal limitations and is subject to appropriate oversight and control.
  • Facebook’s new Oversight Board announced “the first cases it will be deliberating and the opening of the public comment process” and “the appointment of five new trustees.” The cases were almost all referred by Facebook users, and the new board is asking for comments on the right way to manage what may be objectionable content. The Oversight Board explained it is “prioritizing cases that have the potential to affect lots of users around the world, are of critical importance to public discourse or raise important questions about Facebook’s policies.”
    • The new trustees are:
      • Kristina Arriaga is a globally recognized advocate for freedom of expression, with a focus on freedom of religion and belief. Kristina is president of the advisory firm Intrinsic.
      • Cherine Chalaby is an expert on internet governance, international finance and technology, with extensive board experience. As Chairman of ICANN, he led development of the organization’s five-year strategic plan for 2021 to 2025.
      • Wanda Felton has over 30 years of experience in the financial services industry, including serving as Vice Chair of the Board and First Vice President of the Export-Import Bank of the United States.
      • Kate O’Regan is a former judge of the Constitutional Court of South Africa and commissioner of the Khayelitsha Commission. She is the inaugural director of the Bonavero Institute of Human Rights at the University of Oxford.
      • Robert Post is an American legal scholar and Professor of Law at Yale Law School, where he formerly served as Dean. He is a leading scholar of the First Amendment and freedom of speech.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • The Senate Judiciary Committee will hold an executive session at which the “Online Content Policy Modernization Act” (S.4632), a bill to narrow the liability shield in 47 USC 230, may be marked up.
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Further Reading, Other Developments, and Coming Events (11 November)

Further Reading

  • “ICE, IRS Explored Using Hacking Tools, New Documents Show” By Joseph Cox — Vice. Federal agencies other than the Federal Bureau of Investigation (FBI) and the Intelligence Community (IC) appear to be interested in using some of the capabilities offered by the private sector to access devices or networks in the name of investigating cases.
  • “China’s tech industry relieved by Biden win – but not relaxed” By Josh Horwitz and Yingzhi Yang — Reuters. While a Biden Administration will almost certainly lower the temperature between Beijing and Washington, the People’s Republic of China is intent on addressing the pressure points used by the Trump Administration to inflict pain on its technology industry.
  • “Trump Broke the Internet. Can Joe Biden Fix It?” By Gilad Edelman — WIRED. This piece provides a view of the waterfront in technology policy under a Biden Administration.
  • “YouTube is awash with election misinformation — and it isn’t taking it down” By Rebecca Heilweil — Recode. For unexplained reasons, YouTube has avoided the scrutiny facing Facebook and Twitter over their content moderation policies. Whether that lack of scrutiny is a contributing factor is not clear, but the Google-owned platform hosted much more election-related misinformation than the other social media platforms.
  • “Frustrated by internet service providers, cities and schools push for more data” By Cyrus Farivar — NBC News. Internet service providers are not helping cities and states identify families eligible for low-cost internet to help children attend school virtually. They have claimed these data are proprietary, so jurisdictions have gotten creative about identifying such families.

Other Developments

  • The Consumer Product Safety Commission’s (CPSC) Office of the Inspector General (OIG) released its annual Federal Information Security Modernization Act (FISMA) audit and found “that although management continues to make progress in implementing the FISMA requirements much work remains to be done.” More particularly, it was “determined that the CPSC has not implemented an effective information security program and practices in accordance with FISMA requirements.” The OIG asserted:
    • The CPSC information security program was not effective because the CPSC has not developed a holistic formal approach to manage information security risks or to effectively utilize information security resources to address previously identified information security deficiencies. Although the CPSC has begun to develop an Enterprise Risk Management (ERM) program to guide risk management practices at the CPSC, explicit guidance and processes to address information security risks and integrate those risks into the broader agency-wide ERM program has not been developed.
    • In addition, the CPSC has not leveraged the relevant information security risk management guidance prescribed by NIST to develop an approach to manage information security risk.
    • Further, as asserted by CPSC personnel, the CPSC has limited resources to operate the information security program and to address the extensive FISMA requirements and related complex cybersecurity challenges.
    • Therefore, the CPSC has not dedicated the resources necessary to fully address these challenges and requirements. The CPSC began addressing previously identified information security deficiencies but was not able to address all deficiencies in FY 2020.
  • The United States (U.S.) Department of Justice (DOJ) announced the seizure of 27 websites allegedly used by Iran’s Islamic Revolutionary Guard Corps (IRGC) “to further a global covert influence campaign…in violation of U.S. sanctions targeting both the Government of Iran and the IRGC.” The DOJ contended:
    • Four of the domains purported to be genuine news outlets but were actually controlled by the IRGC and targeted audiences in the United States, to covertly influence United States policy and public opinion, in violation of the Foreign Agents Registration Act (FARA). The remainder targeted audiences in other parts of the world.  This seizure warrant follows an earlier seizure of 92 domains used by the IRGC for similar purposes.
  • The United Nations (UN) Special Rapporteur on the right to privacy Joseph Cannataci issued his annual report that “constitutes a preliminary assessment as the evidence base required to reach definitive conclusions on whether privacy-intrusive, anti-COVID-19 measures are necessary and proportionate in a democratic society is not yet available.” Cannataci added “[a] more definitive report is planned for mid-2021, when 16 months of evidence will be available to allow a more accurate assessment.” He “addresse[d] two particular aspects of the impact of COVID-19 on the right to privacy: data protection and surveillance.” The Special Rapporteur noted:
    • While the COVID-19 pandemic has generated much debate about the value of contact tracing and reliance upon technology that track citizens and those they encounter, the use of information and technology is not new in managing public health emergencies. What is concerning in some States are reports of how technology is being used and the degree of intrusion and control being exerted over citizens –possibly to little public health effect.
    • The Special Rapporteur concluded:
      • It is far too early to assess definitively whether some COVID-19-related measures might be unnecessary or disproportionate. The Special Rapporteur will continue to monitor the impact of surveillance in epidemiology on the right to privacy and report to the General Assembly in 2021. The main privacy risk lies in the use of non-consensual methods, such as those outlined in the section on hybrid systems of surveillance, which could result in function creep and be used for other purposes that may be privacy intrusive.
      • Intensive and omnipresent technological surveillance is not the panacea for pandemic situations such as COVID-19. This has been especially driven home by those countries in which the use of conventional contact-tracing methods, without recourse to smartphone applications, geolocation or other technologies, has proven to be most effective in countering the spread of COVID-19.
      • If a State decides that technological surveillance is necessary as a response to the global COVID-19 pandemic, it must make sure that, after proving both the necessity and proportionality of the specific measure, it has a law that explicitly provides for such surveillance measures (as in the example of Israel).
      • A State wishing to introduce a surveillance measure for COVID-19 purposes, should not be able to rely on a generic provision in law, such as one stating that the head of the public health authority may “order such other action be taken as he [or she] may consider appropriate”. That does not provide explicit and specific safeguards which are made mandatory both under the provisions of Convention 108 and Convention 108+, and based on the jurisprudence of the European Court of Human Rights. Indeed, if the safeguard is not spelled out in sufficient detail, it cannot be considered an adequate safeguard.
  • The University of Toronto’s Citizen Lab issued its submission to the Government of Canada’s “public consultation on the renewal of its Responsible Business Conduct (RBC) strategy, which is intended to provide guidance to the Government of Canada and Canadian companies active abroad with respect to their business activities.” Citizen Lab addressed “Canadian technology companies and the threat they pose to human rights abroad” and noted two of its reports on Canadian companies whose technologies were used to violate human rights:
    • In 2018, the Citizen Lab released a report documenting Netsweeper installations on public IP networks in ten countries that each presented widespread human rights concerns. This research revealed that Netsweeper technology was used to block: (1) political content sites, including websites linked to political groups, opposition groups, local and foreign news, and regional human rights issues in Bahrain, Kuwait, Yemen, and UAE; (2) LGBTQ content as a result of Netsweeper’s pre-defined ‘Alternative Lifestyles’ content category, as well as Google searches for keywords relating to LGBTQ content (e.g., the words “gay” or “lesbian”) in the UAE, Bahrain, and Yemen; (3) non-pornographic websites under the mis-categorization of sites like the World Health Organization and the Center for Health and Gender Equity as “pornography”; (4) access to news reporting on the Rohingya refugee crisis and violence against Muslims from multiple news outlets for users in India; (5) Blogspot-hosted websites in Kuwait by categorizing them as “viruses” as well as a range of political content from local and foreign news and a website that monitors human rights issues in the region; and (6) websites like Date.com, Gay.com (the Los Angeles LGBT Center), Feminist.org, and others through categorizing them as “web proxies.” 
    • In 2018, the Citizen Lab released a report documenting the use of Sandvine/Procera devices to redirect users in Turkey and Syria to spyware, as well as the use of such devices to hijack the Internet users’ connections in Egypt, redirecting them to revenue-generating content. These examples highlight some of the ways in which this technology can be used for malicious purposes. The report revealed how Citizen Lab researchers identified a series of devices on the networks of Türk Telekom—a large and previously state-owned ISP in Turkey—being used to redirect requests from users in Turkey and Syria who attempted to download certain common Windows applications like antivirus software and web browsers. Through the use of Sandvine/Procera technology, these users were instead redirected to versions of those applications that contained hidden malware. 
    • Citizen Lab made a number of recommendations:
      • Reform Canadian export law:  
        • Clarify that all Canadian exports are subject to the mandatory analysis set out in section 7.3(1) and section 7.4 of the Export and Import Permits Act (EIPA). 
        • Amend section 3(1) of the EIPA such that the human rights risks of an exported good or technology provide an explicit basis for export control.
        • Amend the EIPA to include a ‘catch-all’ provision that subjects cyber-surveillance technology to export control, even if not listed on the Export Control List, when there is evidence that the end-use may be connected with internal repression and/or the commission of serious violations of international human rights or international humanitarian law. 
      • Implement mandatory human rights due diligence legislation:
        • Similar to the French duty of vigilance law, impose a human rights due diligence requirement on businesses such that they are required to perform human rights risk assessments, develop mitigation strategies, implement an alert system, and develop a monitoring and public reporting scheme. 
        • Ensure that the mandatory human rights due diligence legislation provides a statutory mechanism for liability where a company fails to conform with the requirements under the law. 
      • Expand and strengthen the Canadian Ombudsperson for Responsible Enterprise (CORE): 
        • Expand the CORE’s mandate to cover technology sector businesses operating abroad.
        • Expand the CORE’s investigatory mandate to include the power to compel companies and executives to produce testimony, documents, and other information for the purposes of joint and independent fact-finding.
        • Strengthen the CORE’s powers to hold companies to account for human rights violations abroad, including the power to impose fines and penalties and to impose mandatory orders.
        • Expand the CORE’s mandate to assist victims to obtain legal redress for human rights abuses. This could include the CORE helping enforce mandatory human rights due diligence requirements, imposing penalties and/or additional statutory mechanisms for redress when requirements are violated.
        • Increase the CORE’s budgetary allocations to ensure that it can carry out its mandate.
  • A week before the United States’ (U.S.) election, the White House’s Office of Science and Technology Policy (OSTP) issued a report titled “Advancing America’s Global Leadership in Science and Technology: Highlights from the Trump Administration’s First Term: 2017-2020,” highlighting the Administration’s purported achievements. OSTP claimed:
    • Over the past four years, President Trump and the entire Administration have taken decisive action to help the Federal Government do its part in advancing America’s global science and technology (S&T) preeminence. The policies enacted and investments made by the Administration have equipped researchers, health professionals, and many others with the tools to tackle today’s challenges, such as the COVID-19 pandemic, and have prepared the Nation for whatever the future holds.

Coming Events

  • On 17 November, the Senate Judiciary Committee will reportedly hold a hearing with Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey on Section 230 and how their platforms chose to restrict The New York Post article on Hunter Biden.
  • On 18 November, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Modernizing the 5.9 GHz Band. The Commission will consider a First Report and Order, Further Notice of Proposed Rulemaking, and Order of Proposed Modification that would adopt rules to repurpose 45 megahertz of spectrum in the 5.850-5.895 GHz band for unlicensed operations, retain 30 megahertz of spectrum in the 5.895-5.925 GHz band for the Intelligent Transportation Systems (ITS) service, and require the transition of the ITS radio service standard from Dedicated Short-Range Communications technology to Cellular Vehicle-to-Everything technology. (ET Docket No. 19-138)
    • Further Streamlining of Satellite Regulations. The Commission will consider a Report and Order that would streamline its satellite licensing rules by creating an optional framework for authorizing space stations and blanket-licensed earth stations through a unified license. (IB Docket No. 18-314)
    • Facilitating Next Generation Fixed-Satellite Services in the 17 GHz Band. The Commission will consider a Notice of Proposed Rulemaking that would propose to add a new allocation in the 17.3-17.8 GHz band for Fixed-Satellite Service space-to-Earth downlinks and to adopt associated technical rules. (IB Docket No. 20-330)
    • Expanding the Contribution Base for Accessible Communications Services. The Commission will consider a Notice of Proposed Rulemaking that would propose expansion of the Telecommunications Relay Services (TRS) Fund contribution base for supporting Video Relay Service (VRS) and Internet Protocol Relay Service (IP Relay) to include intrastate telecommunications revenue, as a way of strengthening the funding base for these forms of TRS and making it more equitable without increasing the size of the Fund itself. (CG Docket Nos. 03-123, 10-51, 12-38)
    • Revising Rules for Resolution of Program Carriage Complaints. The Commission will consider a Report and Order that would modify the Commission’s rules governing the resolution of program carriage disputes between video programming vendors and multichannel video programming distributors. (MB Docket Nos. 20-70, 17-105, 11-131)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.


Photo by Brett Sayles from Pexels

Further Reading and Other Developments (13 June)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Other Developments

  • The University of Toronto’s Citizen Lab alleged that an Indian information technology (IT) firm has been running a hack-for-hire operation possibly utilized by multinationals to target non-profits, journalists, and advocacy groups:
    • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.
    • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.
    • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.
  • The Massachusetts Institute of Technology (MIT) and the University of Michigan (UM) “released a report on the security of OmniBallot, an Internet voting and ballot delivery system produced by Democracy Live…[that] has been deployed in Delaware, West Virginia, and other jurisdictions.” MIT and UM stated: “The full technical report contains detailed recommendations for jurisdictions, but here’s what individual voters can do to help reduce risks to their security and privacy:
    • Your safest option is to avoid using OmniBallot. Either vote in person or request a mail-in absentee ballot, if you can. Mail-in ballots are a reasonably safe option, provided you check them for accuracy and adhere to all relevant deadlines.
    • If you can’t do that, your next-safest option is to use OmniBallot to download a blank ballot and print it, mark it by hand, and mail it back or drop it off. Always double-check that you’ve marked your ballot correctly, and confirm the mailing address with your local jurisdiction. 
    • If you are unable to mark your ballot by hand, OmniBallot can let you mark it on-screen. However, this option (as used in Delaware and West Virginia) will send your identity and secret ballot selections over the Internet to Democracy Live’s servers even if you return your ballot through the mail. This increases the risk that your choices may be exposed or manipulated, so we recommend that voters only use online marking as a last resort. If you do mark your ballot online, be sure to print it, carefully check that the printout is marked the way you intended, and physically return it.
    • If at all possible, do not return your ballot through OmniBallot’s website or by email or fax. These return modes cause your vote to be transmitted over the Internet, or via networks attached to the Internet, exposing the election to a critical risk that votes will be changed, at wide scale, without detection. Recent recommendations from DHS, the bipartisan findings of the Senate Intelligence Committee, and the consensus of the National Academies of Sciences, Engineering, and Medicine accord with our assessment that returning ballots online constitutes a severe security risk.
  • The “Justice in Policing Act of 2020” (H.R.7120/S.3912) was introduced this week in response to the protests over disparate policing practices, primarily towards African Americans, and would bar the use of facial recognition technology for body cameras, patrol car cameras, or other cameras authorized and regulated under the bill. The House Oversight and Reform Committee has held a series of hearings this Congress on facial recognition technology, with Members on both sides of the aisle saying they want legislation regulating the government’s use of it. As yet, no such legislation has been introduced. Facial recognition technology language was also a major factor in privacy legislation dying last year in Washington state and was outright removed to avoid the same fate this year.
  • The Government Accountability Office (GAO) released “ELECTRONIC HEALTH RECORDS: Ongoing Stakeholder Involvement Needed in the Department of Veterans Affairs’ Modernization Effort” a week after Secretary of Veterans Affairs Robert Wilkie informed the House Appropriations Committee that the electronic health record rollout had been paused due to COVID-19. Nevertheless, the GAO concluded:
    • VA met its schedule for making the needed system configuration decisions that would enable the department to implement its new EHR system at the first VA medical facility, which was planned for July 2020. In addition, VA has formulated a schedule for making the remaining EHR system configuration decisions before implementing the system at additional facilities planned for fall 2020. VA’s EHRM program was generally effective in establishing decisionmaking procedures that were consistent with applicable federal standards for internal control.
    • However, VA did not always ensure the involvement of relevant stakeholders, including medical facility clinicians and staff, in the system configuration decisions. Specifically, VA did not always clarify terminology and include adequate detail in descriptions of local workshop sessions to medical facility clinicians and staff to ensure relevant representation at local workshop meetings. Participation of such stakeholders is critical to ensuring that the EHR system is configured to meet the needs of clinicians and support the delivery of clinical care.
    • The GAO recommended:
      • For implementation of the EHR system at future VA medical facilities, we recommend that the Secretary of VA direct the EHRM Executive Director to clarify terminology and include adequate detail in descriptions of local workshop sessions to facilitate the participation of all relevant stakeholders including medical facility clinicians and staff. (Recommendation 1)
  • Europol and the European Union Intellectual Property Office released a report “in the shape of a case book” to advise law enforcement agencies and policymakers; it “presents case examples showing how intellectual property (IP) crime is linked to other forms of criminality, including money laundering, document fraud, cybercrime, fraud, drug production and trafficking and terrorism.”
  • The New York University Stern Center for Business and Human Rights released its latest report on social media titled “Who Moderates the Social Media Giants? A Call to End Outsourcing” that calls for major reforms in how these companies moderate content so as to improve the online ecosystem and the conditions, pay, and efficacy of those actually doing the work. The report claimed “[d]espite the centrality of content moderation, however, major social media companies have marginalized the people who do this work, outsourcing the vast majority of it to third-party vendors…[and] [a] close look at this situation reveals three main problems:
    • In some parts of the world distant from Silicon Valley, the marginalization of content moderation has led to social media companies paying inadequate attention to how their platforms have been misused to stoke ethnic and religious violence. This has occurred in places ranging from Myanmar to Ethiopia. Facebook, for example, has expanded into far-flung markets, seeking to boost its user-growth numbers, without having sufficient moderators in place who understand local languages and cultures.
    • The peripheral status of moderators undercuts their receiving adequate counseling and medical care for the psychological side effects of repeated exposure to toxic online content. Watching the worst social media has to offer leaves many moderators emotionally debilitated. Too often, they don’t get the support or benefits they need and deserve.
    • The frequently chaotic outsourced environments in which moderators work impinge on their decisionmaking. Disputes with quality-control reviewers consume time and attention and contribute to a rancorous atmosphere.
  • The National Institute of Standards and Technology (NIST) “requests review and comments on the four-volume set of documents: Special Publication (SP) 800-63-3 Digital Identity Guidelines, SP 800-63A Enrollment and Identity Proofing, SP 800-63B Authentication and Lifecycle Management, and SP 800-63C Federation and Assertions…[that] presents the controls and technical requirements to meet the digital identity management assurance levels specified in each volume.” NIST “is requesting comments on the document in response to agency and industry implementations, industry and market innovation and the current threat environment.” Comments are due by 10 August.
  • The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) updated its Cyber Risks to Next Generation 911 White Paper and released Cyber Risks to 911: Telephony Denial of Service and PSAP Ransomware Poster. CISA explained:
    • Potential cyber risks to Next Generation 9-1-1 (NG9-1-1) systems do not undermine the benefits of NG9-1-1. Nevertheless, cyber risks present a new level of exposure that PSAP administrators must understand and actively manage as a part of a comprehensive risk management program. Systems are already under attack. As cyber threats grow in complexity and sophistication, attacks could be more severe against NG9-1-1 systems as attackers can launch multiple distributed attacks with greater automation from a broader geography and against more targets.  This document provides an overview of the cyber risk landscape, offers an approach for assessing and managing risk, and provides additional cybersecurity resources. 
  • The Government Accountability Office (GAO) released a number of technology reports:
    • The GAO recommended that the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA) “should incorporate additional management controls to better oversee and coordinate NNSA’s microelectronics activities. Such management controls could include investing the microelectronics coordinator with increased responsibility and authority, developing an overarching management plan, and developing a mission need statement and a microelectronics requirements document.”
    • The GAO found that:
      • The Department of Homeland Security (DHS) has taken steps to implement selected leading practices in its transition from waterfall, an approach that historically delivered useable software years after program initiation, to Agile software development, which is focused on incremental and rapid delivery of working software in small segments. This quick, iterative approach is intended to deliver results faster and collect user feedback continuously.
      • DHS has fully addressed one of three leading practice areas for organization change management and partially addressed the other two. Collectively, these practices advise an organization to plan for, implement, and measure the impact when undertaking a significant change. The department has fully defined plans for transitioning to Agile development. DHS has partially addressed implementation—the department completed 134 activities but deferred roughly 34 percent of planned activities to a later date. These deferred activities are in progress or have not been started. With respect to the third practice, DHS clarified expected outcomes for the transition, such as reduced risk of large, expensive IT failures. However, these outcomes are not tied to target measures. Without these, DHS will not know if the transition is achieving its desired results.
      • DHS has also addressed four of the nine leading practices for adopting Agile software development. For example, the department has modified its acquisition policies to support Agile development methods. However, it needs to take additional steps to, among other things, ensure all staff are appropriately trained and establish expectations for tracking software code quality. By fully addressing leading practices, DHS can reduce the risk of continued problems in developing and acquiring current, as well as future, IT systems.
    • The GAO reported that “[t]he Department of Defense’s (DOD) current initiative to transition to Internet Protocol version 6 (IPv6), which began in April 2017, follows at least two prior attempts to implement IPv6 that were halted by DOD.”
      • In February 2019, DOD released its own IPv6 planning and implementation guidance that listed 35 required transition activities, 18 of which were due to be completed before March 2020. DOD completed six of the 18 activities as of March 2020. DOD officials acknowledged that the department’s transition time frames were optimistic; they added that they had thought that the activities’ deadlines were reasonable until they started performing the work. Without an inventory, a cost estimate, or a risk analysis, DOD significantly reduced the probability that it could have developed a realistic transition schedule. Addressing these basic planning requirements would supply DOD with needed information that would enable the department to develop realistic, detailed, and informed transition plans and time frames.

Further Reading

  • “Amid Pandemic and Upheaval, New Cyberthreats to the Presidential Election” – The New York Times. Beyond disinformation and misinformation campaigns, United States’ federal and state officials are grappling with a range of cyber-related threats, including some states’ insistence on using online voting, which the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) deemed “high risk” in an unreleased assessment the agency softened before distribution to state election officials. There are also worries that Russian or other nation-state hackers could access voting databases in ways that would call election day results into question, or that other hackers could break in, lock, and then ransom such databases. CISA and other stakeholders have articulated concerns about the security of voting machines, apps, and systems currently used by states.
  • “Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM” – The Washington Post. The three tech giants responded to pressure from protestors to stop selling facial recognition technology to police departments, with Microsoft being the latest to make this pledge. The companies have said they will not sell this technology until there is a federal law regulating it. The American Civil Liberties Union said in its press release “Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community — not against it — to make that happen…[and] [t]his includes Microsoft halting its current efforts to advance legislation that would legitimize and expand the police use of facial recognition in multiple states nationwide.” The above-mentioned “Justice in Policing Act of 2020” (H.R.7120/S.3912) would not regulate the technology per se but would ban its use in body and car cameras. Notably, the companies said nothing about selling this technology to federal agencies such as US Immigration and Customs Enforcement, and IBM, unlike Amazon and Microsoft, announced it was leaving the facial recognition field altogether. Clearview AI, the controversial facial recognition firm, has not joined this pledge.
  • “ICE Outlines How Investigators Rely on Third-Party Facial Recognition Services” – Nextgov. In a recently released privacy impact assessment, US Immigration and Customs Enforcement’s Homeland Security Investigations (HSI) explained its use of federal and state government and commercial facial recognition databases and technologies. The agency claimed this tool is to be used only after agents have exhausted more traditional means of identifying suspects and others and only if relevant to the investigation. The agency claimed “ICE HSI primarily uses this law enforcement tool to identify victims of child exploitation and human trafficking, subjects engaged in the online and sexual exploitation of children, subjects engaged in financial fraud schemes, identity and benefit fraud, and those identified as members of transnational criminal organizations.” Given what some call abuses and others call mistakes in US surveillance programs, it is probable ICE will exceed the limits it is setting on the use of this technology absent meaningful, independent oversight.
  • Zoom confirms Beijing asked it to suspend activists over Tiananmen Square meetings” – Axios. In a statement, Zoom admitted it responded to pressure from the People’s Republic of China (PRC) to shut down 4 June meetings commemorating Tiananmen Square both inside and outside the PRC, including in the United States if enough PRC nationals were participating. It is not hard to imagine the company being called to task in Washington and in western Europe for conforming to Beijing’s wishes. The company has vowed to develop technology that blocks participants by country rather than shutting down entire meetings, as well as a process for considering nations’ requests to block content that is illegal within their borders.
  • Coronavirus conspiracy theorists threaten 5G cell towers, DHS memo warns” – CyberScoop. The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has warned telecommunications companies that they should have security protocols in place to protect equipment, especially 5G gear, from sabotage arising from the conspiracy theory that 5G transmission either compromises immune systems, making one more susceptible to COVID-19, or actually spreads the virus. There has been a spate of attacks in the United Kingdom, and a number of Americans are advocating for this theory, including actor Woody Harrelson.
  • Police Officers’ Personal Info Leaked Online” – Associated Press. At the same time police are facing protestors in the streets of many American cities and towns, the sensitive personal information of some officers has been posted online, possibly putting them and their families at risk.
  • Facebook Helped the FBI Hack a Child Predator” – Vice’s Motherboard. In a story apparently leaked by Facebook, it is revealed that the company hired a third party to develop a zero-day exploit that helped unmask a technologically adept person who was terrorizing and extorting female minors. This is supposedly the first time Facebook has engaged in such conduct to help law enforcement authorities. The company revealed it routinely tracks problematic users, including those exploiting children. This article would seem tailor-made to push back on the narrative, propagated by the Department of Justice and other nations’ law enforcement agencies, that tech companies’ opposition to backdoors in encrypted systems helps sexual predators. There are also the usual concerns that any exploit of a platform or technology people use to remain private will ultimately be used broadly by law enforcement agencies, often to the detriment of human rights activists, dissidents, and journalists.
  • Amazon, Facebook and Google turn to deep network of political allies to battle back antitrust probes” – The Washington Post. These tech companies are utilizing means beyond traditional lobbying and public relations to wage the battle against US and state governments investigating them for possible antitrust and anticompetitive practices.
  • One America News, the Network That Spreads Conspiracies to the West Wing” – The New York Times. The upstart media outlet has received a boost in recent days from promotion by President Donald Trump, who quoted its as-yet-unproven allegation that a Buffalo man knocked down by police was an antifa agitator. The outlet has received preferential treatment from the White House and is likely another means by which the White House will seek to get its message out.
  • EU says China behind ‘huge wave’ of Covid-19 disinformation” – The Guardian. European Commission Vice President Vĕra Jourová called out the People’s Republic of China (PRC), along with the Russian Federation, for spreading prodigious amounts of disinformation, in what is likely a shift for Brussels toward a more adversarial stance on the PRC. As recently as March, a European Union body toned down a report on PRC activities, but this development seems to be a change of course.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay

Federal Court Rules Against Suspicionless Searches At Border and In Airports

A U.S. District Court held that U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement’s (ICE) current practices for searches of smartphones and computers at the U.S. border are unconstitutional and that the agencies must have reasonable suspicion before conducting such a search. However, the Court declined the plaintiffs’ request that the information taken from their devices be expunged by the agencies. This ruling follows a Department of Homeland Security Office of the Inspector General (OIG) report that found CBP “did not always conduct searches of electronic devices at U.S. ports of entry according to its Standard Operating Procedures” and asserted that “[t]hese deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit [CBP’s] ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.”

In terms of a legal backdrop, the United States Supreme Court has found that searches and seizures of electronic devices at borders and airports are subject to lesser legal standards than those conducted elsewhere in the U.S. under most circumstances. Generally, the government’s interest in securing the border against the flow of contraband and people not allowed to enter allows considerable leeway relative to the warrant requirements for many other types of searches. However, in recent years two federal appeals courts (the Fourth and Ninth Circuits) have held that searches of electronic devices require suspicion on the part of government agents, while another appeals court (the Eleventh Circuit) held differently. Consequently, there is not a uniform legal standard for these searches.

The case was brought by the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) on behalf of 10 U.S. citizens and one legal permanent resident who had had their phones and computers searched by CBP or ICE agents upon entering the U.S., typically at airports. The ACLU argued these searches violated the Fourth Amendment because the agents did not obtain search warrants before conducting the searches of the devices for contraband. The plaintiffs further alleged the searches violated the First Amendment because “warrantless searches of travelers’ electronic devices unconstitutionally chill the exercise of speech and associational rights,” according to their complaint. The agencies claimed that such searches require neither a warrant nor probable cause and that the First Amendment claim held no water, a position a number of federal appeals courts have taken.

The Court noted that

In January 2018, CBP updated its policy to distinguish between two different types of searches, “basic” and “advanced,” and to require reasonable suspicion or a national security concern for any advanced search, but no showing of cause for a basic search. Under this policy, an advanced search is defined as “any search in which an officer connects external equipment, through a wired or wireless connection, to an electronic device, not merely to gain access to the device, but to review, copy and/or analyze its contents.” The parameters of an advanced search are clearer given this definition than that adopted for a basic search, which is merely defined as “any border search that is not an advanced search.” CBP and ICE use the same definitions of basic and advanced searches and ICE policy also requires reasonable suspicion to perform an advanced search.

The Court stated that

Although the border search exception and the search incident to arrest exception are similar, narrow exceptions to the search warrant requirement, the Court recognizes the governmental interests are different at the border and holds that reasonable suspicion and not the heightened warrant requirement supported by probable cause that Plaintiffs seek here and as applied to the search in Riley is warranted here.

The Court added that

Moreover, the reasonable suspicion that is required for the currently defined basic search and advanced search is a showing of specific and articulable facts, considered with reasonable inferences drawn from those facts, that the electronic devices contains contraband. Although this may be “a close question” on which at least two Circuits disagree…the Court agrees that this formulation is consistent with the government’s interest in stopping contraband at the border and the long-standing distinction that the Supreme Court has made between the search for contraband, a paramount interest at the border, and the search of evidence of past or future crimes at the border, which is a general law enforcement interest not unique to the border.

The Court explained the relief the plaintiffs sought:

  • a declaration that CBP and ICE’s policies violate the First and Fourth Amendments facially and have violated Plaintiffs’ First and Fourth Amendment rights by authorizing and conducting searches of electronic devices absent a warrant supported by probable cause, and
  • a declaration that CBP and ICE’s policies violate the Fourth Amendment facially and have violated Plaintiffs’ Fourth Amendment rights by authorizing and conducting the confiscation of electronic devices absent probable cause

The Court granted this relief in part, declaring “that the CBP and ICE policies for “basic” and “advanced” searches, as presently defined, violate the Fourth Amendment to the extent that the policies do not require reasonable suspicion that the devices contain contraband for both such classes of non-cursory searches and/or seizure of electronic devices; and that the non-cursory searches and/or seizures of Plaintiffs’ electronic devices, without such reasonable suspicion, violated the Fourth Amendment.”

However, the Court declined to institute (a) a nationwide injunction preventing CBP and ICE from “searching electronic devices absent a warrant supported by probable cause that the devices contain contraband or evidence of a violation of immigration or customs laws” and (b) “an injunction preventing Defendants from confiscating electronic devices, with the intent to search the devices after the travelers leave the border, without probable cause and without promptly seeking a warrant for the search.” The Court asserted that briefing on the issues would be needed before such relief could be granted.