Further Reading, Other Developments, and Coming Events (29 October)

Further Reading

  • “Cyberattacks hit Louisiana government offices as worries rise about election hacking” By Eric Geller — Politico. The Louisiana National Guard located and addressed a remote access trojan, a common precursor to ransomware attacks, in some of the state’s systems. It is not yet clear whether this was the early stage of an election day attack, and other states have made similar discoveries.
  • “Kicked off Weibo? Here’s what happens next.” By Shen Lu — Rest of World. Beijing is increasingly cracking down on dissent on Weibo, the People’s Republic of China’s (PRC) version of Twitter. People get banned for posting content critical of the PRC government or sympathetic to Hong Kong. Some are allowed back and are usually banned again. Others buy burner accounts, which inevitably get banned as well.
  • “Inside the campaign to ‘pizzagate’ Hunter Biden” By Ben Collins and Brandy Zadrozny — NBC News. The sordid tale of how allies and advocates of the Trump campaign have tried to propagate rumors of illegal acts committed by Hunter Biden in an attempt to smear former Vice President Joe Biden, much as was done to former Secretary of State Hillary Clinton in 2016.
  • “Russians Who Pose Election Threat Have Hacked Nuclear Plants and Power Grid” By Nicole Perlroth — The New York Times. Some of Russia’s best hackers have been prowling around state and local governments’ systems for unknown ends. These are the same hackers, named Dragonfly or Energetic Bear by researchers, who have penetrated a number of electric utilities and the power grid in the United States, including a nuclear plant. It is not clear what these hackers intend to do, which worries U.S. officials, cybersecurity experts, and researchers.
  • “Activists Turn Facial Recognition Tools Against the Police” By Kashmir Hill — The New York Times. In an interesting twist, protestors and civil liberties groups are adopting facial recognition technology to try to identify police officers who refuse to identify themselves while attacking protestors or committing acts of violence.

Other Developments

  • The United Kingdom’s Information Commissioner’s Office (ICO) has completed its investigation into the data brokering practices of Equifax, TransUnion, and Experian and found widespread privacy and data protection violations. Equifax and TransUnion were amenable to working with the ICO to correct abuses and shutter illegal products and businesses, but Experian was not. In the words of the ICO, Experian “did not accept that they were required to make the changes set out by the ICO, and as such were not prepared to issue privacy information directly to individuals nor cease the use of credit reference data for direct marketing purposes.” Consequently, Experian must effect specified changes within nine months or face “a fine of up to £20m or 4% of the organisation’s total annual worldwide turnover.” The ICO investigated using its powers under the Data Protection Act 2018 and the General Data Protection Regulation (GDPR).
    • The ICO found widespread problems in the data brokering businesses of the three credit reference agencies (CRAs):
      • The investigation found how the three CRAs were trading, enriching and enhancing people’s personal data without their knowledge. This processing resulted in products which were used by commercial organisations, political parties or charities to find new customers, identify the people most likely to be able to afford goods and services, and build profiles about people.
      • The ICO found that significant ‘invisible’ processing took place, likely affecting millions of adults in the UK. It is ‘invisible’ because the individual is not aware that the organisation is collecting and using their personal data. This is against data protection law.
      • Although the CRAs varied widely in size and practice, the ICO found significant data protection failures at each company. As well as the failure to be transparent, the regulator found that personal data provided to each CRA, in order for them to provide their statutory credit referencing function, was being used in limited ways for marketing purposes. Some of the CRAs were also using profiling to generate new or previously unknown information about people, which is often privacy invasive.
      • Other thematic failings identified were:
        • Although the CRAs did provide some privacy information on their websites about their data broking activities, their privacy information did not clearly explain what they were doing with people’s data;
        • Separately, they were using certain lawful bases incorrectly for processing people’s data.
      • The ICO issued its report “Investigation into data protection compliance in the direct marketing data broking sector,” with these key findings:
        • Key finding 1: The privacy information of the CRAs did not clearly explain their processing with respect to their marketing services. CRAs have to revise and improve their privacy information. Those engaging in data broking activities must ensure that their privacy information is compliant with the GDPR.
        • Key finding 2: In the circumstances we assessed the CRAs were incorrectly relying on an exception from the requirement to directly provide privacy information to individuals (excluding where the data processed has come solely from the open electoral register or would be in conflict with the purpose of processing – such as suppression lists like the TPS). To comply with the GDPR, CRAs have to ensure that they provide appropriate privacy information directly to all the individuals for whom they hold personal data in their capacity as data brokers for direct marketing purposes. Those engaging in data broking activities must ensure individuals have the information required by Article 14.
        • Key finding 3: The CRAs were using personal data collected for credit referencing purposes for direct marketing purposes. The CRAs must not use this data for direct marketing purposes unless this has been transparently explained to individuals and they have consented to this use. Where the CRAs are currently using personal data obtained for credit referencing purposes for direct marketing, they must stop using it.
        • Key finding 4: The consents relied on by Equifax were not valid under the GDPR. To comply with the GDPR, CRAs must ensure that the consent is valid, if they intend to rely on consent obtained by a third party. Those engaging in data broking activities must ensure that any consents they use meet the standard of the GDPR.
        • Key finding 5: Legitimate interest assessments (LIAs) conducted by the CRAs in respect of their marketing services were not properly weighted. The CRAs must revise their LIAs to reconsider the balance of their own interests against the rights and freedoms of individuals in the context of their marketing services. Where an objective LIA does not favour the interests of the organisation, the processing of that data must stop until that processing can be made lawful. Those engaging in data broking activities must ensure that LIAs are conducted objectively taking into account all factors.
        • Key finding 6: In some cases Experian was obtaining data on the basis of consent and then processing it on the basis of legitimate interests. Switching from consent to legitimate interests in this situation is not appropriate. Where personal data is collected by a third party and shared for direct marketing purposes on the basis of consent, then the appropriate lawful basis for subsequent processing for these purposes will also be consent. Experian must therefore delete any data supplied to it on the basis of consent that it is processing on the basis of legitimate interests.
  • The United States (U.S.) Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the U.S. Cyber Command Cyber National Mission Force (CNMF) issued a joint advisory on “the tactics, techniques, and procedures (TTPs) used by North Korean advanced persistent threat (APT) group Kimsuky—against worldwide targets—to gain intelligence on various topics of interest to the North Korean government.” CISA, FBI, and CNMF recommended that “individuals and organizations within this target profile increase their defenses and adopt a heightened state of awareness…[and] [p]articularly important mitigations include safeguards against spearphishing, use of multi-factor authentication, and user awareness training.” (A minimal illustrative sketch of one such spearphishing safeguard follows the list below.) The agencies noted:
    • This advisory describes known Kimsuky TTPs, as found in open-source and intelligence reporting through July 2020. The target audience for this advisory is commercial sector businesses desiring to protect their networks from North Korean APT activity.
    • The agencies highlighted the key findings:
      • Kimsuky is most likely tasked by the North Korean regime with a global intelligence gathering mission.
      • Kimsuky employs common social engineering tactics, spearphishing, and watering hole attacks to exfiltrate desired information from victims.
      • Kimsuky is most likely to use spearphishing to gain initial access into victim hosts or networks.
      • Kimsuky conducts its intelligence collection activities against individuals and organizations in South Korea, Japan, and the United States.
      • Kimsuky focuses its intelligence collection activities on foreign policy and national security issues related to the Korean peninsula, nuclear policy, and sanctions.
      • Kimsuky specifically targets:
        • Individuals identified as experts in various fields,
        • Think tanks, and
        • South Korean government entities.
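    By way of illustration only (this is not code from the advisory), the sketch below shows one simple form the recommended spearphishing safeguards can take: flagging inbound mail whose display name impersonates a known sender while the address comes from an outside domain, or whose sender authentication checks failed. The trusted domain, staff name, and sample message are hypothetical.

```python
# Illustrative sketch only: a simple heuristic for one of the advisory's
# recommended mitigations (safeguards against spearphishing). The trusted
# domain, staff names, and sample message below are hypothetical.
from email import message_from_string
from email.utils import parseaddr

TRUSTED_DOMAIN = "example.org"          # hypothetical internal domain
KNOWN_DISPLAY_NAMES = {"Jane Analyst"}  # hypothetical staff directory

def flag_suspicious(raw_message: str) -> list:
    """Return a list of reasons an inbound message looks like spearphishing."""
    msg = message_from_string(raw_message)
    reasons = []

    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # Display-name impersonation: a familiar name paired with an outside domain.
    if display_name in KNOWN_DISPLAY_NAMES and domain != TRUSTED_DOMAIN:
        reasons.append(
            f"display name '{display_name}' sent from external domain '{domain}'"
        )

    # Failed sender authentication recorded by the receiving mail server.
    auth_results = msg.get("Authentication-Results", "").lower()
    if "spf=fail" in auth_results or "dkim=fail" in auth_results:
        reasons.append("SPF or DKIM authentication failed")

    return reasons

if __name__ == "__main__":
    sample = (
        "From: Jane Analyst <jane.analyst@mail-example.net>\r\n"
        "Authentication-Results: mx.example.org; spf=fail\r\n"
        "Subject: Interview request on Korean peninsula policy\r\n"
        "\r\n"
        "Please open the attached questionnaire.\r\n"
    )
    for reason in flag_suspicious(sample):
        print("FLAG:", reason)
```

    In practice, organizations would rely on mail gateway or endpoint tooling rather than a standalone script, alongside the multi-factor authentication and user awareness training the agencies recommend.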
  • European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski made remarks at the European Union Agency for Cybersecurity’s (ENISA) Annual Privacy Forum and advocated for a European Union (EU) moratorium on the rollout of new technologies like facial recognition and artificial intelligence (AI) until such development “can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies.” He claimed the EU could maintain the rights of its people while taking the lead in cutting-edge technologies. Wiewiórowski asserted:
    • Now we are entering a new phase of contactless tracking of individuals in public areas. Remote facial recognition technology has developed quickly; so much so that some authorities and private entities want to use it in many places. If this all becomes true, we could be tracked everywhere in the world.
    • I do not believe that such a development can be reconciled with the values and fundamental rights that are at the foundation of our democratic societies. The EDPS therefore, together with other authorities, supports a moratorium on the rollout of such technologies. The aim of this moratorium would be twofold. Firstly, an informed and democratic debate would take place. Secondly, the EU and Member States would put in place all the appropriate safeguards, including a comprehensive legal framework, to guarantee the proportionality of the respective technologies and systems in relation to their specific use.
    • As an example, any new regulatory framework for AI should, in my view:
      • apply both to EU Member States and to EU institutions, offices, bodies and agencies;
      • be designed to protect individuals, communities and society as a whole, from any negative impact;
      • propose a robust and nuanced risk classification scheme, ensuring that any significant potential harm posed by AI applications is matched with appropriate mitigating measures.
    • We must ensure that Europe’s leading role in AI, or any other technology in development, does not come at the cost of our fundamental rights. Europe must remain true to its values and provide the grounds for innovation. We will only get it right if we ensure that technology serves both individuals and society.
    • Faced with these developments, transparency is a starting point for proper debate and assessment. Transparency for citizens puts them in a position to understand what they are subject to, and to decide whether they want to accept the infringements of their rights.
  • The Office of the Privacy Commissioner of Canada (OPC) and “its international counterparts” laid out their thinking on “stronger privacy protections and greater accountability in the development and use of facial recognition technology and artificial intelligence (AI) systems” at the recent Global Privacy Assembly. The OPC summarized the two resolutions adopted at the assembly:
    • the resolution on facial recognition technology acknowledges that this technology can benefit security and public safety. However, it asserts that facial recognition can erode data protection, privacy and human rights because it is highly intrusive and enables widespread surveillance that can produce inaccurate results. The resolution also calls on data protection authorities to work together to develop principles and expectations that strengthen data protection and ensure privacy by design in the development of innovative uses of this technology.
    • a resolution on the development and use of AI systems that urges organizations developing or using them to ensure human accountability for AI systems and address adverse impacts on human rights. The resolution encourages governments to amend personal data protection laws to make clear legal obligations for accountability in the development and use of AI. It also calls on governments, public authorities and other stakeholders to work with data protection authorities to ensure legal compliance, accountability and ethics in the development and use of AI systems.
  • The Alliance for Securing Democracy (ASD) at the German Marshall Fund of the United States (GMFUS) issued a report, “A Future Internet for Democracies: Contesting China’s Push for Dominance in 5G, 6G, and the Internet of Everything” that “provides a roadmap for contesting China’s growing dominance in this critical information arena across infrastructure, application, and governance dimensions—one that doubles down on geostrategic interests and allied cooperation.” ASD stated “[a]n allied approach that is rooted firmly in shared values and resists an authoritarian divide-and-conquer strategy is vital for the success of democracies in commercial, military, and governance domains.” ASD asserted:
    • The United States and its democratic allies are engaged in a contest for the soul of the Future Internet. Conceived as a beacon of free expression with the power to tear down communication barriers across free and unfree societies alike, the Internet today faces significant challenges to its status as the world’s ultimate connector. In creating connectivity and space for democratic speech, it has also enabled new means of authoritarian control and the suppression of human rights through censorship and surveillance. As tensions between democracies and the People’s Republic of China (PRC) heat up over Internet technologies, the prospect of a dichotomous Internet comes more sharply into focus: a democratic Internet where information flows freely and an authoritarian Internet where it is tightly controlled—separated not by an Iron Curtain, but a Silicon one. The Future Internet is deeply enmeshed in the dawning information contest between autocracies and democracies. It is the base layer—the foundation—on which communication takes place and the entry point into narrative and societal influence. How the next generation of Internet technologies are created, defined, governed, and ultimately used will have an outsized impact on this information contest—and the larger geopolitical contest—between democracy and authoritarianism.
    • ASD found:
      • The Chinese Communist Party (CCP) has a history of creating infrastructure dependence and using it for geopolitical leverage. As such, China’s global market dominance in Future Internet infrastructure carries unacceptable risks for democracies.
      • The contest to shape 6G standards is already underway, with China leading the charge internationally. As the United States ponders how it ended up on the back foot on 5G, China is moving ahead with new proposals that would increase authoritarian control and undermine fundamental freedoms.
      • The battle over the Future Internet is playing out in the Global South. As more developed nations eschew Chinese network equipment, democracies’ response has largely ignored this global build-out of networks and applications in the proving ground of the developing world that threaten both technological competitiveness and universal rights.
      • China is exporting “technology to anticipate crime”—a dystopian future police state. “Minority report”-style pre-criminal arrests decimate the practice of the rule of law centered in the presumption of innocence.
      • Personal Data Exfiltration: CCP entities see “Alternative Data” as “New Oil” for AI-driven applications in the Internet-of-Everything. These applications provide new and expanded avenues for mass data collection, as much as they depend on this data to succeed–giving China the means and the motivation to vacuum up the world’s data.
      • Data in, propaganda out: Future Internet technology presents opportunities to influence the information environment, including the development of information applications that simultaneously perform big data collection. Chinese companies are building information platforms into application technologies, reimagining both the public square and private locales as tools for propaganda.
      • Already victims of intellectual property theft by China, the United States and its democratic partners are ill-prepared to secure sensitive information as the Future Internet ecosystem explodes access points. This insecurity will continue to undermine technological competitiveness and national security and compound these effects in new ways.
      • China outnumbers the United States nearly two-to-one on participation in and leadership of critical international Future Internet standards-setting efforts. Technocratic standards bodies are becoming unlikely loci of great power technical competition, as Beijing uses leadership posts to shape the narrative and set the course for the next generation of Internet technologies to support China’s own technological leadership, governance norms, and market access.
      • The world’s oldest UN agency is being leveraged as a propaganda mouthpiece for the CCP’s AI and Future Internet agenda, whitewashing human rights abuses under a banner of “AI for Good.” The upshot is an effort to shape the UN Sustainable Development agenda to put economic development with authoritarian technology–not individual liberty—at their center.
      • A symbiotic relationship has developed between China’s Belt and Road Initiative and UN agencies involved in Future Internet and digital development. In this way, China leverages the United Nations enterprise to capture market dominance in next generation technologies.
  • A Dutch think tank has put together the “(best) practices of Asian countries and the United States in the field of digital connectivity” in the hopes of realizing European Commission President Ursula von der Leyen’s goal of making the next ten years “Europe’s Digital Decade.” The Clingendael Institute explained that the report “covers a wide range of topics related to digital regulation, the e-economy, and telecommunications infrastructure.” The Clingendael Institute asserted:
    • Central to the debate and any policy decision on digital connectivity are the trade-offs concerning privacy, business interests and national security. While all regulations are a combination of these three, the United States (US) has taken a path that prioritises the interests of businesses. This is manifested, for example, in the strong focus on free data flows, both personal and non-personal, to strengthen companies’ competitive advantage in collecting and using data to develop themselves. China’s approach, by contrast, strongly focuses on state security, wherein Chinese businesses are supported and leveraged to pre-empt threats to the country and, more specifically, to the Chinese Communist Party. This is evident from its strict data localisation requirements to prevent any data from being stored outside its borders and a mandatory security assessment for cross-border transfers. The European Union represents a third way, emphasising individuals’ privacy and a human-centred approach that puts people first, and includes a strong focus on ethics, including in data-protection regulations. This Clingendael Report aims to increase awareness and debate about the trade-offs of individual, state and business interests in all subsets of digital connectivity. This is needed to reach a more sustainable EU approach that will outlast the present decade. After all, economic competitiveness is required to secure Europe and to further its principled approach to digital connectivity in the long term. The analysis presented here covers a wide range of topics within digital connectivity’s three subsets: regulation; business; and telecommunications infrastructure. Aiming to contribute to improved European policy-making, this report discusses (best) practices of existing and rising digital powers in Asia and the United States. In every domain, potential avenues for cooperation with those countries are explored as ways forward for the EU.
    • Findings show that the EU and its member states are slowly but steadily moving from being mainly a regulatory power to also claiming their space as a player in the digitalised world. Cloud computing initiative GAIA-X is a key example, constituting a proactive alternative to American and Chinese Cloud providers that is strongly focused on uniting small European initiatives to create a strong and sustainable Cloud infrastructure. Such initiatives, including also the more recent Next Generation Internet (NGI), not only help defend and push European digital norms and standards, but also assist the global competitiveness of European companies and business models by facilitating the availability of large data-sets as well as scaling up. Next to such ‘EU only’ initiatives, working closely together with like-minded partners will benefit the EU and its member states as they seek to finetune and implement their digital strategies. The United States and Asian partners, particularly Japan, South Korea, India and Singapore, are the focus of attention here.

Coming Events

  • On 10 November, the Senate Commerce, Science, and Transportation Committee will hold a hearing to consider nominations, including that of Nathan Simington to be a Member of the Federal Communications Commission.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by David Peterson from Pixabay
