Other Developments, Further Reading, Coming Events (29 April 2021)

Other Developments

  • The House passed an amended version of the “Cyber Diplomacy Act of 2021” (H.R.1251) that would establish a Bureau of International Cyberspace Policy and Cyberspace Policy Coordinating Committee inside the United States (U.S.) Department of State. The Secretary of State would also need to craft and issue an International Strategy for Cyberspace.
    • House Foreign Affairs Committee Chair Gregory Meeks (D-NY) asserted during debate:
      • Now more than ever, we need a senior cyber diplomat who can support American efforts to keep the internet open, interoperable, reliable, and secure. To demonstrate how seriously the United States takes these issues, it is vital that we strengthen the State Department’s tools to address the challenges in cyberspace to American foreign policy. The State Department needs a bureau capable and focused on tackling the growing global challenges of cybersecurity, the digital economy, and internet freedom in order to be better prepared to advance America’s international interests on cyber policy.
    • House Foreign Affairs Committee Ranking Member Michael McCaul (R-TX) contended:
      • The Cyber Diplomacy Act gives the State Department the necessary tools to work with our allies and partners to stop the spread of misinformation, to stop the cyberattacks, and to stop the imposition of their so-called cyber security.
      • …[A] new ambassador will be given the authority to establish critical cyber norms and standards that do not exist today to help define what is good behavior and what is bad.
      • Let me say that when the SolarWinds attack occurred, in the past, there were no consequences to bad behavior with the Russians or the Chinese, and I was very supportive and proud that President Biden struck back with sanctions against Russia for this bad behavior. That is what this office is really all about.
      • Without these clear guidelines, it is not possible to mount a strong response to our adversaries’ destructive behavior. This bill is long overdue. To me, it is the last piece in terms of our cyber role in the Federal Government, now taking it to the international stage with our allies around the world.
  • The European Data Protection Board (EDPB) has issued the final version of “Guidelines 8/2020 on the targeting of social media users.” The EDPB explained:
    • Targeting of social media users may involve a variety of different actors which, for the purposes of these guidelines, shall be divided into four groups: social media providers, their users, targeters and other actors which may be involved in the targeting process. The importance of correctly identifying the roles and responsibilities of the various actors has recently been highlighted with the judgments in Wirtschaftsakademie and Fashion ID of the Court of Justice of the European Union (CJEU). Both judgments demonstrate that the interaction between social media providers and other actors may give rise to joint responsibilities under EU data protection law.
    • Taking into account the case law of the CJEU, as well as the provisions of the GDPR regarding joint controllers and accountability, the present guidelines offer guidance concerning the targeting of social media users, in particular as regards the responsibilities of targeters and social media providers. Where joint responsibility exists, the guidelines will seek to clarify what the distribution of responsibilities might look like between targeters and social media providers on the basis of practical examples.
    • The main aim of these guidelines is therefore to clarify the roles and responsibilities among the social media provider and the targeter. In order to do so, the guidelines also identify the potential risks for the rights and freedoms of individuals (section 3), the main actors and their roles (section 4), and tackle the application of key data protection requirements (such as lawfulness and transparency, DPIA, etc.) as well as key elements of arrangements between social media providers and the targeters.
    • Nevertheless, the scope of these Guidelines covers the relationships between the registered users of a social network, its providers, as well as the targeters. A thorough analysis of other scenarios, such as the targeting of individuals that are not registered with social media providers, does not fall under the scope of the present guidelines.
  • Senators Amy Klobuchar (D-MN) and Ben Ray Luján (D-NM) wrote Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg “highlighting a new report issued by the Center for Countering Digital Hate, which found that approximately 65 percent of anti-vaccine content on Facebook and Twitter can be attributed to the “Disinformation Dozen” – 12 individuals who play leading roles in spreading digital disinformation about coronavirus vaccines.” Klobuchar and Luján asserted:
    • We must urgently work to ensure Americans receive accurate and reliable information about coronavirus vaccines. A crucial step to increase vaccine confidence is to address primary spreaders of this vaccine disinformation, including the twelve accounts – referred to as the “Disinformation Dozen” – which are responsible for a majority of disinformation, in a swift and decisive manner.
      • Are your platforms aware of these twelve sources that appear to be repeatedly spreading false or misleading information about the coronavirus vaccine efficacy?
      • What are your specific standards for removing accounts that repeatedly violate your policies on vaccine misinformation? Please address specifically whether the content shared on each of those twelve accounts violates those standards.
      • Who at your company is responsible for (a) setting vaccine disinformation policies and (b) enforcing those policies? Please provide specific name(s).
      • How are you ensuring your content moderation policies are effective for rural, minority, and non-English communities? Please provide proof of investment in these programs in terms of resource allocation, specific data on campaign efficacy, and number of full & contract level employees allocated exclusively to those efforts.
  • The Federal Trade Commission (FTC) published a blog post titled “Aiming for truth, fairness, and equity in your company’s use of AI” providing its answer to the question “how can we harness the benefits of AI without inadvertently introducing bias or other unfair outcomes?” The FTC asserted that “while the sophisticated technology may be new, the FTC’s attention to automated decision making is not…[and] [t]he FTC has decades of experience enforcing three laws important to developers and users of AI:
    • Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
    • Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
    • Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.
    • The FTC further asserted:
      • Start with the right foundation. With its mysterious jargon (think: “machine learning,” “neural networks,” and “deep learning”) and enormous data-crunching power, AI can seem almost magical. But there’s nothing mystical about the right starting point for AI: a solid foundation. If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups. From the start, think about ways to improve your data set, design your model to account for data gaps, and – in light of any shortcomings – limit where or how you use the model.
      • Watch out for discriminatory outcomes. Every year, the FTC holds PrivacyCon, a showcase for cutting-edge developments in privacy, data security, and artificial intelligence. During PrivacyCon 2020, researchers presented work showing that algorithms developed for benign purposes like healthcare resource allocation and advertising actually resulted in racial bias. How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? It’s essential to test your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.
      • Embrace transparency and independence. Who discovered the racial bias in the healthcare algorithm described at PrivacyCon 2020 and later published in Science? Independent researchers spotted it by examining data provided by a large academic hospital. In other words, it was due to the transparency of that hospital and the independence of the researchers that the bias came to light. As your company develops and uses AI, think about ways to embrace transparency and independence – for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.
      • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver. For example, let’s say an AI developer tells clients that its product will provide “100% unbiased hiring decisions,” but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action.
      • Tell the truth about how you use data. In our guidance on AI last year, we advised businesses to be careful about how they get the data that powers their model. We noted the FTC’s complaint against Facebook, which alleged that the social media giant misled consumers by telling them they could opt in to the company’s facial recognition algorithm, when in fact Facebook was using their photos by default. The FTC’s recent action against app developer Everalbum reinforces that point. According to the complaint, Everalbum used photos uploaded by app users to train its facial recognition algorithm. The FTC alleged that the company deceived users about their ability to control the app’s facial recognition feature and made misrepresentations about users’ ability to delete their photos and videos upon account deactivation. To deter future violations, the proposed order requires the company to delete not only the ill-gotten data, but also the facial recognition models or algorithms developed with users’ photos or videos.
      • Do more good than harm. To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. Let’s say your algorithm will allow a company to target consumers most interested in buying their product. Seems like a straightforward benefit, right? But let’s say the model pinpoints those consumers by considering race, color, religion, and sex – and the result is digital redlining (similar to the Department of Housing and Urban Development’s case against Facebook in 2019). If your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.
      • Hold yourself accountable – or be ready for the FTC to do it for you. As we’ve noted, it’s important to hold yourself accountable for your algorithm’s performance. Our recommendations for transparency and independence can help you do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates.
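    • The FTC’s advice to test an algorithm “both before you use it and periodically after that” can be illustrated with a minimal sketch. The example below is not an FTC-prescribed test; it applies the widely used “four-fifths” disparate-impact heuristic (a group is flagged if its selection rate falls below 80% of the highest group’s rate), and all data and function names are hypothetical.

```python
# Illustrative sketch: screening a model's decisions for disparate
# selection rates across groups using the common "four-fifths" rule.
# This heuristic is one screen among many, not a legal determination.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs.
    Returns the approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions):
    """Flag a potential disparate impact if any group's selection
    rate falls below 80% of the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Hypothetical audit data: group A approved 40/50, group B 20/50.
decisions = ([("A", True)] * 40 + [("A", False)] * 10 +
             [("B", True)] * 20 + [("B", False)] * 30)
print(selection_rates(decisions))    # {'A': 0.8, 'B': 0.4}
print(passes_four_fifths(decisions)) # False: 0.4 < 0.8 * 0.8
```

      Run periodically on fresh decision logs, a screen like this is one way to surface the kind of discriminatory outcome the FTC warns about before a regulator does.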
  • House Energy and Commerce Committee Chair Frank Pallone, Jr. (D-NJ), Ranking Member Cathy McMorris Rodgers (R-WA), Communications and Technology Subcommittee Chair Mike Doyle (D-PA), Communications and Technology Subcommittee Ranking Member Bob Latta (R-OH) wrote to the National Telecommunications and Information Administration (NTIA) “urging the agency to fulfill its statutory role of managing federal spectrum use.” They asserted:
    • As the federal agency responsible for managing federal spectrum use, in collaboration with the Federal Communications Commission (FCC), the NTIA plays an important role in resolving interagency disputes about federal spectrum. The NTIA has the authority to “assign frequencies to radio stations or classes of radio stations belonging to and operated by the United States, including the authority to amend, modify, or revoke such assignments” and can take into account the differing missions and needs of federal spectrum users to maximize the benefits of spectrum use for the government and the public. Congress created this system to ensure that certain agencies do not improperly elevate their own spectrum needs over others.  Allowing a single agency with significant spectrum needs to manage both its and other agencies’ spectrum resources would risk inefficient use of this precious resource.  Each agency is expected to be an advocate for its own spectrum needs, while the NTIA, in advising the President, must resolve potential conflicts.  
    • By statute, the NTIA has “[t]he responsibility to ensure that the views of the executive branch on telecommunications matters are effectively presented to the [FCC].” In recent years, several federal agencies with spectrum allocations have circumvented this statutory process and argued the importance of their particular use cases directly to the FCC, rather than working through the NTIA as the central repository and manager of federal spectrum.
    • In contrast, last summer, the NTIA and the Department of Defense worked to reach an agreement that substantially cleared the 3.45 to 3.55 gigahertz (GHz) band. We applaud this collaborative and productive process and hope to see similarly effective engagements in the future.  According to a recent NTIA report, the 3.1 to 3.45 GHz band also could be a good candidate for federal/non-federal relocation, coordination, or sharing, and we would appreciate regular updates of the NTIA’s progress in that effort.
  • The Agencia Española de Protección de Datos (AEPD) and the European Data Protection Supervisor (EDPS) published a joint paper “10 MISUNDERSTANDINGS RELATED TO ANONYMISATION.” The AEPD and EDPS asserted:
    • According to the European Union’s data protection laws, in particular the General Data Protection Regulation (GDPR), anonymous data is “information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable”.
    • Throughout the years, there have been several examples of incomplete or wrongfully conducted anonymisation processes that resulted in the re-identification of individuals.
    • Anonymous data play an important role in the context of research in the fields of medicine, demographics, marketing, economy, statistics and many others. However, this interest coincided with the spread of related misunderstandings. The objective of this document is to raise public awareness about some misunderstandings about anonymisation, and to motivate its readers to check assertions about the technology, rather than accepting them without verification.
    • This document lists ten of these misunderstandings, explains the facts and provides references for further reading:
      • 1. “Pseudonymisation is the same as anonymisation”
      • 2. “Encryption is anonymisation”
      • 3. “Anonymisation of data is always possible”
      • 4. “Anonymisation is forever”
      • 5. “Anonymisation always reduces the probability of re-identification of a dataset to zero”
      • 6. “Anonymisation is a binary concept that cannot be measured”
      • 7. “Anonymisation can be fully automated”
      • 8. “Anonymisation makes the data useless”
      • 9. “Following an anonymisation process that others used successfully will lead our organisation to equivalent results”
      • 10. “There is no risk and no interest in finding out to whom this data refers”
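    • The first two misunderstandings can be made concrete with a small sketch. Hashing a direct identifier is pseudonymisation, not anonymisation: when the identifier space is small and enumerable, the hash can be linked back to the person. The example below is illustrative only; the record layout and phone numbers are hypothetical.

```python
# Illustrative sketch of misunderstandings 1 and 2: replacing an
# identifier with its hash is pseudonymisation, not anonymisation.
# An attacker who can enumerate candidate identifiers simply hashes
# each candidate and compares against the "anonymised" value.

import hashlib

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256(identifier.encode()).hexdigest()

# A "released" record with the identifier replaced by its hash.
record = {"id": pseudonymise("+34-600-123-456"), "diagnosis": "flu"}

# Phone numbers, national IDs, and similar identifiers form small,
# known spaces that can be exhaustively hashed.
candidates = ["+34-600-123-455", "+34-600-123-456", "+34-600-123-457"]
reidentified = [c for c in candidates if pseudonymise(c) == record["id"]]
print(reidentified)  # ['+34-600-123-456']: the hash did not anonymise
```

      Under the GDPR, such hashed data remains personal data, which is precisely the distinction the AEPD and EDPS paper draws.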
  • Homeland Security Committee Ranking Member John Katko (R-NY) and Cybersecurity, Infrastructure Protection, & Innovation Subcommittee Ranking Member Andrew Garbarino (R-NY) wrote the Secretaries of Homeland Security and Commerce, stating:
    • Specifically, we are alarmed at the rise of Chinese technology company Xiaomi, which has recently launched several new high-end smartphones aiming to fill the consumer-facing void left by Huawei. We share grave concerns that Xiaomi poses a significant threat to the privacy of any of its users through its lineup of smartphones, laptops, smart watches, and other consumer-facing products. In many ways, data has become the modern-day currency of homeland security and we must take threats to the data integrity of the free world seriously.
    • Please provide our committee with an update on the steps your respective departments are taking to ensure the security of the nation’s ICT supply chain, and what authorities and levers you plan to exercise in response to the threat of emergent market players like Xiaomi.
  • Ireland’s Data Protection Commission (DPC) issued a draft Regulatory Strategy that “sets out an ambitious vision for what it believes will be five crucial years in the evolution of data protection law, regulation and culture” according to the agency’s press release. The DPC stated:
    • In developing this draft Strategy for stakeholder consultation, the DPC has been careful to give conscientious thought to the needs and insights of its stakeholders, the legislation under which it must regulate, the context in which it currently operates and the various future states for which it must prepare. It has also taken account of the academic theories that are emerging in respect of effective regulation and behavioural economics.
    • The breadth of the DPC’s regulatory remit cuts across all areas of personal and public life; both at national and international level. In order to develop a Regulatory Strategy that will provide effective direction for such a vast operational remit, the DPC has taken careful account of the wider context in which it regulates, the needs of its diverse stakeholders and the evolving nature of the fast-paced and non-traditional sectors it regulates.
    • The Strategy is arranged according to fundamental goals, underpinned by the DPC’s mission, vision and values, which collectively contribute to the delivery of its strategic priorities. In putting this document out for consultation, the DPC wants to make sure it hears as many points of view as possible before committing to a definite course of action.
  • Insurer GEICO notified the California Attorney General of a data breach. The company claimed:
    • We recently determined that between January 21, 2021 and March 1, 2021, fraudsters used information about you – which they acquired elsewhere – to obtain unauthorized access to your driver’s license number through the online sales system on our website. We have reason to believe that this information could be used to fraudulently apply for unemployment benefits in your name. If you receive any mailings from your state’s unemployment agency/department, please review them carefully and contact that agency/department if there is any chance fraud is being committed.
    • As soon as GEICO became aware of the issue, we secured the affected website and worked to identify the root cause of the incident. While we regularly maintain high security and privacy standards, we have also implemented—and continue to implement—additional security enhancements to help prevent future fraud and illegal activities on our website.
  • The Daily Mail has sued Google, alleging that the company’s online advertising business and practices have violated United States (U.S.) law. The newspaper company claimed:
    • Online advertising continues to grow overall as users consume more internet content, yet newspapers’ advertising revenue has declined by 70% over the last decade. As a result, since 2008, newsroom employment has dropped by more than half, 20% of all newspapers have closed, and half of all U.S. counties now have only one newspaper, usually a weekly edition. The circulation of daily newspapers has decreased by more than 40%.
    • News publishers do not see the growing ad spending because Google and its parent Alphabet unlawfully have acquired and maintain monopolies for the tools that publishers and advertisers use to buy and sell online ad space. Those tools include the software publishers use to sell their ad inventory, and the dominant exchange where millions of ad impressions are sold in auctions every day. Google controls the “shelf space” on publishers’ pages where ads appear, and it exploits that control to defeat competition for that ad space. Among other tactics, Google makes it difficult for publishers to compare prices among exchanges; reduces the number of exchanges that can submit bids; and uses bids offered by rival exchanges to set its own bids — a de facto bid rigging scheme. Further, for years, Google has used its search rankings to punish publishers that do not submit to its practices. The lack of competition for publishers’ inventory depresses prices and reduces the amount and quality of news available to readers, but Google ends up ahead because it controls a growing share of the ad space that remains.
  • The United Kingdom’s Department for Digital, Culture, Media & Sport (DCMS) issued the “Telecoms Diversification Taskforce: findings and report,” and the chair of the taskforce explained:
    • Last August you asked me to chair an expert Taskforce for around six months to identify solutions and opportunities to diversify the supply market for 5G. Since the Taskforce was established, the Government has published its 5G Supply Chain Diversification Strategy, alongside the introduction of the Telecommunications (Security) Bill.  Together these set out the Government’s intention to ensure the security of UK telecoms networks and its approach to build a healthy, innovative and competitive supply market moving forward. 
    • The Diversification Taskforce has taken the Government’s strategy as the basis for its work and has focused on how to deliver the ambitions that the Government has set out. The attached report outlines our findings and recommendations in four key areas – telecoms standards; regulatory policy; accelerating the adoption of Open RAN; and long-term research and innovation to build UK capability. These recommendations are supported by more detailed work plans that I have shared with your officials. The report also highlights a number of broader areas where the Government should consider taking action.

Further Reading

  • “Stop believing your lying eyes: Deepfakes are coming, and they might reshape SA’s politics” By Stephen Grootes — Daily Maverick. African National Congress politicians had their remarks from a recent internal party meeting turned into audio deepfakes, the likes of which are optimally spread on WhatsApp. Piecing together audio from politicians may prove easier and more effective in the short term while visual deepfake technology is perfected. As with all deepfake type material, even later denials and fact checks often cannot change many people’s perceptions.
  • “Facebook Knows It Was Used To Help Incite The Capitol Insurrection” and “Read Facebook’s Internal Report About Its Role In The Capitol Insurrection” By Ryan Mac, Craig Silverman, and Jane Lytvynenko — BuzzFeed News. The pipeline of information from deep inside Facebook to BuzzFeed is alive and well. In this piece, we learn the company’s internal report “Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement” concluded the platform is not well equipped to handle coordinated authentic content and campaigns like Stop The Steal. Facebook usually focuses on inauthentic or fake campaigns. Additionally, the internal review found Facebook was focused on activity before the election and did not perceive the extremist, violent groups and content that led to the 6 January attack on the Capitol as a problem until after it happened.
  • “China is using our legal systems against us” By J. Michael Cole — The National Post. The People’s Republic of China’s (PRC) government and companies are using “lawfare” (i.e., “the use or threat of legal action”), a long employed tactic to go after critics. PRC companies like Huawei are using the legal systems of nations like Canada, Taiwan, Australia, and others to file libel claims against academics and think tanks who dare to criticize the companies or the country. The author is one such academic who successfully fended off such a suit and offers policy changes to stop what he sees as the weaponization of western legal systems.
  • “Fed chair deems cyber threat top risk to financial sector” By Shannon Vavra — cyberscoop. In a 60 Minutes interview, Federal Reserve Board Chairman Jerome Powell said cyber-attacks that cripple the financial system are his biggest concern right now. Powell’s worries have been echoed in a number of reports, including this Carnegie Endowment report from last fall. When asked about the scenarios that concern him, Powell responded:
    • All different kinds. I mean, there are scenarios in which a large payment utility, for example, breaks down and the payment system can’t work. Payments can’t be completed. There are scenarios in which a large financial institution would lose the ability to track the payments that it’s making and things like that. Things like that where you would have a part of the financial system come to a halt, or perhaps even a broad part.
    • And so we spend so much time and energy and money guarding against these things. There are cyber attacks every day on all major institutions now. And the government is working hard on that. So are all the private sector companies. There’s a lot of effort going in to deal with those threats. That’s a big part of the threat picture in today’s world.
  • “U.S. Faces Uphill Climb to Rival China’s Rare-Earth Magnet Industry” By Alistair MacDonald — The Wall Street Journal. The United States (U.S.) and other nations are trying to jumpstart production and refinement of rare earths like those that are used to build powerful magnets vital to the production of electric cars and wind turbines. However, the People’s Republic of China (PRC) dominates rare earths production because it is cheaper and easier for them to do so. Western nations have higher labor costs and stricter environmental controls, building in higher costs for production. Consequently, western companies have a harder time competing with cheaper Chinese products.
  • “COVID vaccine passports: what can we learn from Israel?” — The Guardian. This short video explains how Israel’s Green Pass works (a QR Code on a piece of paper or a phone with information about vaccination) that is supposed to be used for entry to many public places. In the experience of The Guardian journalist, this has not been the case because infection rates are so low due to the very high vaccination rates. Might this be a preview for the European Union and its Digital Green Certificates?
  • “Israel May Have Destroyed Iranian Centrifuges Simply By Cutting Power” By Kim Zetter — The Intercept. Centrifuges enrich uranium by spinning the material at high speeds and need to be slowed gradually to avoid damaging the machines’ rotors and bellows. Consequently, at Iran’s Natanz facility, there are primary and backup power systems. It appears as if Israel attacked both systems at the same time, cutting power for Iran’s new centrifuges and likely destroying them and the system for enriching and collecting uranium (i.e., the cascades). It is not clear whether Israel did this through a cyber-attack, a conventional attack, or some combination. What is clear is that Israel did this on the eve of The United States, Iran, and European nations resuming talks on reviving the nuclear deal struck in 2015 that Israel opposes.

Coming Events

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Sigmund on Unsplash
