Further Reading, Other Developments, and Coming Events (4 December)

Further Reading

  • “How Misinformation ‘Superspreaders’ Seed False Election Theories” By Sheera Frenkel — The New York Times. A significant percentage of the lies, misinformation, and disinformation about the legitimacy of the election has been disseminated by a small number of right-wing figures, whose claims are then repeated, reposted, and retweeted. The Times relies on research into how much engagement people like President Donald Trump and Dan Bongino get on Facebook after posting untrue claims about the election, and it turns out that such trends and rumors do not start spontaneously.
  • “Facebook Said It Would Ban Holocaust Deniers. Instead, Its Algorithm Provided a Network for Them” By Aaron Sankin — The Markup. This news organization still found Holocaust denial material promoted by Facebook’s algorithm even though the platform recently said it was taking down such material. This result may point to the difficulty of policing objectionable material that uses coded language and/or the social media platform’s lack of sufficient resources to weed out this sort of content.
  • “What Facebook Fed the Baby Boomers” By Charlie Warzel — The New York Times. A dispiriting trip inside two people’s Facebook feeds. This article makes the very good point that comments are not moderated, and these tend to be significant sources of vitriol and disinformation.
  • “How to ‘disappear’ on Happiness Avenue in Beijing” By Vincent Ni and Yitsing Wang — BBC. By next year, the People’s Republic of China (PRC) may have as many as 560 million security cameras, and one artist ran an experiment of sorts to see whether a group of people could walk down a major street in the capital without their faces being captured by any of its many cameras.
  • “Patients of a Vermont Hospital Are Left ‘in the Dark’ After a Cyberattack” By Ellen Barry and Nicole Perlroth — The New York Times. A Russian hacking outfit may have struck back after the Department of Defense’s (DOD) Cyber Command and Microsoft disrupted its operations. A number of hospitals were hacked, and care was significantly disrupted. This dynamic may lend itself to arguments that the United States (U.S.) may be wise to curtail its offensive operations.
  • “EU seeks anti-China alliance on tech with Biden” By Jakob Hanke Vela and David M. Herszenhorn — Politico. The European Union (EU) is hoping the United States (U.S.) will be more amenable to working together in the realm of future technology policy, especially against the People’s Republic of China (PRC) which has made a concerted effort to drive the adoption of standards that favor its companies (e.g., the PRC pushed for and obtained 5G standards that will favor Huawei). Diplomatically speaking, this is considered low-hanging fruit, and a Biden Administration will undoubtedly be more multilateral than the Trump Administration.
  • “Can We Make Our Robots Less Biased Than We Are?” By David Berreby — The New York Times. The bias present in facial recognition technology and artificial intelligence is making its way into robotics, raising the question of how to change this. Many African American and other minority scientists are calling for the inclusion of people of color in designing such systems as a countermeasure to the usual bias toward white men.

Other Developments

  • The top Democrat on the Senate Homeland Security and Governmental Affairs Committee, Senator Gary Peters (D-MI), wrote President Donald Trump and “slammed the Trump Administration for their lack of action against foreign adversaries, including Russia, China, and North Korea, that have sponsored cyber-attacks against American hospitals and research institutions in an effort to steal information related to development of Coronavirus vaccines.” Peters used unusually strong language, as Members of Congress typically tone down the rhetoric and deploy coded language to signal their level of displeasure about administration action or inaction. Peters may well feel strongly about what he perceives to be Trump Administration indifference to the cyber threats facing institutions researching and developing COVID-19 vaccines, but this is also an issue on which he may be trying to split Republicans, placing them in the difficult position of lining up behind a president disinclined to prioritize some cyber issues or breaking ranks with him.
    • Peters stated:
      • I urge you, again, to send a strong message to any foreign government attempting to hack into our medical institutions that this behavior is unacceptable. The Administration should use the tools at its disposal, including the threat of sanctions, to deter future attacks against research institutions. In the event that any foreign government directly threatens the lives of Americans through attacks on medical facilities, other Department of Defense capabilities should be considered to make it clear that there will be consequences for these actions.
  • A United States federal court has ruled against Trump Administration appointee Michael Pack and the United States Agency for Global Media (USAGM) over their attempts to interfere illegally with the independence of government-funded news organizations such as the Voice of America (VOA). The District Court for the District of Columbia enjoined Pack and the USAGM from a list of actions VOA and USAGM officials claim are contrary to the First Amendment and the organization’s mission.
  • The Federal Trade Commission (FTC) is asking a United States federal court to compel former Trump White House advisor Steve Bannon to appear for questioning per a Civil Investigative Demand (CID) as part of its ongoing probe of Cambridge Analytica’s role in misusing personal data of Facebook users in the 2016 Presidential Election. The FTC noted it “issued the CID to determine, among other things, whether Bannon may be held individually liable for the deceptive conduct of Cambridge Analytica, LLC—the subject of an administrative law enforcement action brought by the Commission.” An interview had been scheduled for September, but the day before it was to take place, Bannon’s lawyers informed the FTC he would not attend.
    • In 2019, the FTC settled with former Cambridge Analytica CEO Alexander Nix and app developer Aleksandr Kogan in “administrative orders restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC did not, however, settle with the company itself. The agency alleged “that Cambridge Analytica, Nix, and Kogan deceived consumers by falsely claiming they did not collect any personally identifiable information from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.” Facebook settled with the FTC for a record $5 billion for its role in the Cambridge Analytica scandal and for how it violated its 2012 consent order with the agency.
  • Apple responded to a group of human rights and civil liberties organizations about its plans to deploy technology on its operating system that allows users greater control of their privacy. Apple confirmed that its App Tracking Transparency (ATT) would be made part of iOS early next year and would present users of Apple products with a prompt warning them about how their information may be used by the app developer. ATT would stop app developers from tracking users when they use other apps on a device. Companies like Facebook have objected, claiming that the change is a direct shot at them and their revenue. Apple does not reap a significant revenue stream from collecting, combining, and processing user data whereas Facebook does. Facebook also tracks users across devices and apps on a device through a variety of means.
    • Apple stated:
      • We delayed the release of ATT to early next year to give developers the time they indicated they needed to properly update their systems and data practices, but we remain fully committed to ATT and to our expansive approach to privacy protections. We developed ATT for a single reason: because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.
      • ATT doesn’t ban the reasonable collection of user data for app functionality or even for advertising. Just as with the other data-access permissions we have added over many software releases, developers will be able to explain why they want to track users both before the ATT prompt is shown and in the prompt itself. At that point, users will have the freedom to make their own choice about whether to proceed. This privacy innovation empowers consumers — not Apple — by simply making it clear what their options are, and giving them the information and power to choose.
    • As mentioned, a number of groups wrote Apple in October “to express our disappointment that Apple is delaying the full implementation of iOS 14’s anti-tracking features until early 2021.” They argued:
      • These features will constitute a vital policy improvement with the potential to strengthen respect for privacy across the industry. Apple should implement these features as expeditiously as possible.
      • We were heartened by Apple’s announcement that starting with the iOS 14 update, all app developers will be required to provide information that will help users understand the privacy implications of an app before they install it, within the App Store interface.
      • We were also pleased that iOS 14 users would be required to affirmatively opt in to app tracking, on an app-by-app basis. Along with these changes, we urge Apple to verify the accuracy of app policies, and to publish transparency reports showing the number of apps that are rejected and/or removed from the App Store due to inadequate or inaccurate policies.
  • The United States (U.S.) Government Accountability Office (GAO) sent its assessment of the privacy notices and practices of U.S. banks and credit unions to the chair of the Senate committee that oversees this issue. Senate Banking, Housing, and Urban Affairs Committee Chair Mike Crapo (R-ID) had asked the GAO “to examine the types of personal information that financial institutions collect, use, and share; how they make consumers aware of their information-sharing practices; and federal regulatory oversight of these activities.” The GAO found that a ten-year-old model privacy disclosure form used across these industries may comply with the prevailing federal requirements but no longer encompasses the breadth and scope of how people’s personal information is collected, processed, and used. The GAO called on the Consumer Financial Protection Bureau (CFPB) to update this form. The GAO explained:
    • Banks and credit unions collect, use, and share consumers’ personal information—such as income level and credit card transactions—to conduct everyday business and market products and services. They share this information with a variety of third parties, such as service providers and retailers.
    • The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to provide consumers with a privacy notice describing their information-sharing practices. Many banks and credit unions elect to use a model form—issued by regulators in 2009—which provides a safe harbor for complying with the law (see figure). GAO found the form gives a limited view of what information is collected and with whom it is shared. Consumer and privacy groups GAO interviewed cited similar limitations. The model form was issued over 10 years ago. The proliferation of data-sharing since then suggests a reassessment of the form is warranted. Federal guidance states that notices about information collection and usage are central to providing privacy protections and transparency.
    • Since Congress transferred authority to the CFPB for implementing GLBA privacy provisions, the agency has not reassessed if the form meets consumer expectations for disclosures of information-sharing. CFPB officials said they had not considered a reevaluation because they had not heard concerns from industry or consumer groups about privacy notices. Improvements to the model form could help ensure that consumers are better informed about all the ways banks and credit unions collect and share personal information.
    • The increasing amounts of and changing ways in which industry collects and shares consumer personal information—including from online activities—highlights the importance of clearly disclosing practices for collection, sharing, and use. However, our work shows that banks and credit unions generally used the model form, which was created more than 10 years ago, to make disclosures required under GLBA. As a result, the disclosures often provided a limited view of how banks and credit unions collect, use, and share personal information.
    • We recognize that the model form is required to be succinct, comprehensible to consumers, and allow for comparability across institutions. But, as information practices continue to change or expand, consumer insights into those practices may become even more limited. Improvements and updates to the model privacy form could help ensure that consumers are better informed about all the ways that banks and credit unions collect, use, and share personal information. For instance, in online versions of privacy notices, there may be opportunities for readers to access additional details—such as through hyperlinks—in a manner consistent with statutory requirements.
  • The Australian Competition & Consumer Commission (ACCC) is asking for feedback on Google’s proposed $2.1 billion acquisition of Fitbit. In a rather pointed statement, the chair of the ACCC, Rod Sims, made clear “[o]ur decision to begin consultation should not be interpreted as a signal that the ACCC will ultimately accept the undertaking and approve the transaction.” The buyout is also under scrutiny in the European Union (EU) and may be affected by the suit the United States Department of Justice (DOJ) and some states have brought against the company for anti-competitive behavior. The ACCC released a Statement of Issues in June about the proposed deal.
    • The ACCC explained “[t]he proposed undertaking would require Google to:
      • not use certain user data collected through Fitbit and Google wearables for Google’s advertising purposes for 10 years, with an option for the ACCC to extend this obligation by up to a further 10 years;
      • maintain access for third parties, such as health and fitness apps, to certain user data collected through Fitbit and Google wearable devices for 10 years; and
      • maintain levels of interoperability between third party wearables and Android smartphones for 10 years.”
    • In August, the EU “opened an in-depth investigation to assess the proposed acquisition of Fitbit by Google under the EU Merger Regulation.” The European Commission (EC) expressed its concerns “that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays.” The EC stated “[a]t this stage of the investigation, the Commission considers that Google:
      • is dominant in the supply of online search advertising services in the EEA countries (with the exception of Portugal for which market shares are not available);
      • holds a strong market position in the supply of online display advertising services at least in Austria, Belgium, Bulgaria, Croatia, Denmark, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, Sweden and the United Kingdom, in particular in relation to off-social networks display ads;
      • holds a strong market position in the supply of ad tech services in the EEA.”
    • The EC explained that it “will now carry out an in-depth investigation into the effects of the transaction to determine whether its initial competition concerns regarding the online advertising markets are confirmed…[and] will also further examine:
      • the effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe; and
      • whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit.”
    • Amnesty International (AI) sent EC Executive Vice-President Margrethe Vestager a letter, arguing “[t]he merger risks further extending the dominance of Google and its surveillance-based business model, the nature and scale of which already represent a systemic threat to human rights.” AI asserted “[t]he deal is particularly troubling given the sensitive nature of the health data that Fitbit holds that would be acquired by Google.” AI argued “[t]he Commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”
  • Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro have cooperated on a report that looks “into current and predicted criminal uses of artificial intelligence (AI).”
    • The organizations argued “AI could be used to support:
      • convincing social engineering attacks at scale;
      • document-scraping malware to make attacks more efficient;
      • evasion of image recognition and voice biometrics;
      • ransomware attacks, through intelligent targeting and evasion;
      • data pollution, by identifying blind spots in detection rules.”
    • The organizations concluded:
      • Based on available insights, research, and a structured open-source analysis, this report covered the present state of malicious uses and abuses of AI, including AI malware, AI-supported password guessing, and AI-aided encryption and social engineering attacks. It also described concrete future scenarios ranging from automated content generation and parsing, AI-aided reconnaissance, smart and connected technologies such as drones and autonomous cars, to AI-enabled stock market manipulation, as well as methods for AI-based detection and defense systems.
      • Using one of the most visible malicious uses of AI — the phenomenon of so-called deepfakes — the report further detailed a case study on the use of AI techniques to manipulate or generate visual and audio content that would be difficult for humans or even technological solutions to immediately distinguish from authentic ones.
      • As speculated on in this paper, criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new, innovative criminal business models — all the while reducing their chances of being caught. Consequently, as “AI-as-a-Service” becomes more widespread, it will also lower the barrier to entry by reducing the skills and technical expertise required to facilitate attacks. In short, this further exacerbates the potential for AI to be abused by criminals and for it to become a driver of future crimes.
      • Although the attacks detailed here are mostly theoretical, crafted as proofs of concept at this stage, and although the use of AI to improve the effectiveness of malware is still in its infancy, it is plausible that malware developers are already using AI in more obfuscated ways without being detected by researchers and analysts. For instance, malware developers could already be relying on AI-based methods to bypass spam filters, escape the detection features of antivirus software, and frustrate the analysis of malware. In fact, DeepLocker, a tool recently introduced by IBM and discussed in this paper, already demonstrates these attack abilities that would be difficult for a defender to stop.
      • To add, AI could also enhance traditional hacking techniques by introducing new ways of performing attacks that would be difficult for humans to predict. These could include fully automated penetration testing, improved password-guessing methods, tools to break CAPTCHA security systems, or improved social engineering attacks. With respect to open-source tools providing such functionalities, the paper discussed some that have already been introduced, such as DeepHack, DeepExploit, and XEvil.
      • The widespread use of AI assistants, meanwhile, also creates opportunities for criminals who could exploit the presence of these assistants in households. For instance, criminals could break into a smart home by hijacking an automation system through exposed audio devices.

Coming Events

  • The National Institute of Standards and Technology (NIST) will hold a webinar on the Draft Federal Information Processing Standards (FIPS) 201-3 on 9 December.
  • On 9 December, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “The Invalidation of the EU-US Privacy Shield and the Future of Transatlantic Data Flows” with the following witnesses:
    • The Honorable Noah Phillips, Commissioner, Federal Trade Commission
    • Ms. Victoria Espinel, President and Chief Executive Officer, BSA – The Software Alliance
    • Mr. James Sullivan, Deputy Assistant Secretary for Services, International Trade Administration, U.S. Department of Commerce
    • Mr. Peter Swire, Elizabeth and Tommy Holder Chair of Law and Ethics, Georgia Tech Scheller College of Business, and Research Director, Cross-Border Data Forum
  • On 10 December, the Federal Communications Commission (FCC) will hold an open meeting and has released a tentative agenda:
    • Securing the Communications Supply Chain. The Commission will consider a Report and Order that would require Eligible Telecommunications Carriers to remove equipment and services that pose an unacceptable risk to the national security of the United States or the security and safety of its people, would establish the Secure and Trusted Communications Networks Reimbursement Program, and would establish the procedures and criteria for publishing a list of covered communications equipment and services that must be removed. (WC Docket No. 18-89)
    • National Security Matter. The Commission will consider a national security matter.
    • National Security Matter. The Commission will consider a national security matter.
    • Allowing Earlier Equipment Marketing and Importation Opportunities. The Commission will consider a Notice of Proposed Rulemaking that would propose updates to its marketing and importation rules to permit, prior to equipment authorization, conditional sales of radiofrequency devices to consumers under certain circumstances and importation of a limited number of radiofrequency devices for certain pre-sale activities. (ET Docket No. 20-382)
    • Promoting Broadcast Internet Innovation Through ATSC 3.0. The Commission will consider a Report and Order that would modify and clarify existing rules to promote the deployment of Broadcast Internet services as part of the transition to ATSC 3.0. (MB Docket No. 20-145)

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.
