Other Developments, Further Reading, and Coming Events (13 April 2021)

Other Developments

  • The White House has finally named its nominees to be the first National Cyber Director and the next Cybersecurity and Infrastructure Security Agency (CISA) Director. In a statement, National Security Advisor Jake Sullivan announced President Joe Biden’s intention “to nominate Chris Inglis as National Cyber Director and Jen Easterly as the Director of the Cybersecurity and Infrastructure Agency” (sic). Both Inglis and Easterly have ties to the National Security Agency, with Inglis having been its Deputy Director from 2006 through 2014. Sullivan added:
    • If confirmed, Chris and Jen will add deep expertise, experience and leadership to our world-class cyber team, which includes the first-ever Deputy National Security Advisor for Cyber and Emerging Technology Anne Neuberger, as well as strong, crisis-tested professionals from the FBI to ODNI to the Department of Homeland Security to U.S. Cyber Command and the National Security Agency.
    • The Center for Cyber Security Studies posted a bio of Inglis:
      • Mr. Inglis retired from the Department of Defense in January 2014 following over 41 years of federal service, including 28 years at NSA and seven and a half years as its senior civilian and Deputy Director. Mr. Inglis began his career at NSA as a computer scientist within the National Computer Security Center followed by tours in information assurance, policy, time-sensitive operations, and signals intelligence organizations. Promoted to NSA’s Senior Executive Service in 1997, he held a variety of senior leadership assignments and twice served away from NSA Headquarters, first as a visiting professor of computer science at the U.S. Military Academy (1991-1992) and later as the U.S. Special Liaison to the United Kingdom (2003-2006).
      • Mr. Inglis holds advanced degrees in engineering and computer science from Columbia University (MS), Johns Hopkins University (MS), and the George Washington University (Professional Degree). He is also a graduate of the Kellogg Business School executive development program, the USAF Air War College, Air Command and Staff College, and Squadron Officers’ School.
      • Mr. Inglis’ military career includes over 30 years of service in the US Air Force — nine years on active duty and twenty one years in the Air National Guard — from which he retired as a Brigadier General in 2006. He holds the rating of Command Pilot and commanded units at the squadron, group, and joint force headquarters levels.
      • Mr. Inglis’ significant Awards include the Clements award as the US Naval Academy’s Outstanding Military Faculty member (1984), three Presidential Rank Awards (2000, 2004, 2009), the USAF Distinguished Service Medal (2006), the Boy Scouts of America Distinguished Eagle Scout Award (2009), the Director of National Intelligence Distinguished Service Medal (2014), and The President’s National Security Medal (2014).
    • The National Security Institute posted a bio of Easterly:
      • Jen Easterly is a Managing Director of Morgan Stanley and Global Head of the Firm’s Cybersecurity Fusion Center. She joined the firm in February 2017 after nearly three decades in U.S. Government service. Prior to joining Morgan Stanley, Jen served on the National Security Council as Special Assistant to President Obama and Senior Director for Counterterrorism, where she led the development and coordination of U.S. counterterrorism and hostage policy.  Prior to that, she was the Deputy for Counterterrorism at the National Security Agency.  A two-time recipient of the Bronze Star, Jen retired from the U.S. Army after more than twenty years of service in intelligence and cyber operations, including tours of duty in Haiti, the Balkans, Iraq, and Afghanistan.  Responsible for standing up the Army’s first cyber battalion, Jen was also instrumental in the creation of United States Cyber Command.  A member of the Council on Foreign Relations and a French-American Foundation Young Leader, Jen is a Fellow of the 2018 class of the Aspen Finance Leaders Fellowship and a member of the Aspen Global Leadership Network. She is also a Senior International Security Fellow at the New America Foundation, as well as the past recipient of the Council on Foreign Relations International Affairs Fellowship and the Director, National Security Agency Fellowship. A distinguished graduate of the United States Military Academy at West Point, Jen holds a master’s degree in Philosophy, Politics, and Economics from the University of Oxford, where she studied as a Rhodes Scholar.  A Trustee of the Morgan Stanley Foundation, Jen serves on the Board of Nuru International, a non-profit dedicated to the eradication of extreme poverty, and on the Advisory Council for Hostage US, which supports the families of Americans taken hostage abroad and hostages when they return home.  She is the 2018 recipient of the James W. Foley American Hostage Freedom Award.
  • The National Security Agency (NSA) officially named a replacement for Anne Neuberger, who left the agency earlier this year to become the Deputy National Security Advisor for Cyber and Emerging Technology at the National Security Council. Rob Joyce has started as the NSA’s new Cybersecurity Director after his name had been floated for the job in January. Joyce has extensive experience with the NSA, having been “Deputy Director to the former Information Assurance Directorate, and as the Director of Tailored Access Operations.” Joyce also served in the Trump White House but was purged when former National Security Advisor John Bolton restructured the NSC in 2018, forcing out Joyce and his boss, former Homeland Security Advisor Tom Bossert. Thereafter, Joyce returned to the NSA.
  • The Privacy and Civil Liberties Oversight Board (PCLOB) issued its long-awaited report on Executive Order (EO) 12333, the “capstone to its more than six-year examination of the government’s use” of the EO, a “foundational document for the United States’ (U.S.) foreign intelligence efforts.” However, as pointed out by one expert, this PCLOB report is much shorter than its reports on the Foreign Intelligence Surveillance Act provisions in Sections 215 and 702 and the USA Freedom Act Telephone Call Records Program. Moreover, the EO 12333 report lacks recommendations. This may displease more than privacy and civil liberties advocates, because the European Data Protection Board (EDPB), among other European Union (EU) bodies, has been waiting for the PCLOB to provide more insight into the use of EO 12333 for collecting the data of EU residents. However, PCLOB did conduct detailed investigations of three counterterrorism programs as authorized under EO 12333 and submitted these classified reports to a number of federal government stakeholders:
    • The Board chose to review counterterrorism-related activities involving one or more of the following: (1) bulk collection that carries a significant chance of acquiring some U.S. person information; (2) use of incidentally collected U.S. person information; (3) targeting of U.S. persons; and (4) collection that occurs within the United States or from U.S. companies.
    • Based on these criteria, the Board conducted two deep dive reviews of CIA activities, and one review of an NSA activity. The Board assessed the counterterrorism value of the activities in question, their impact on privacy and civil liberties, and whether those national security activities were appropriately balanced with privacy and civil liberties. The Board also considered whether the activities comport with requirements imposed by the Constitution, statutes, and agency-level implementing procedures and guidelines. The Board completed its deep dive review of one CIA activity in 2017 and its deep dive review of the NSA activity in 2020. Both reviews resulted in classified reports. The Board provided both classified reports to Congress, and the NSA and CIA reports to the respective agencies. Also in 2020, staff completed the deep dive review of the second CIA activity.
    • PCLOB Members Ed Felten and Travis LeBlanc issued a statement on the report:
      • Accordingly, as the project progressed, it unfortunately proved difficult and impractical for the Board to address the full framework of counterterrorism activities governed by E.O. 12333. Additionally, as noted in the report, most of the Board’s work on E.O. 12333 remains classified.
      • Accepting these facts, we voted to approve this report for two primary reasons. First, this report is the only public, unclassified document released by PCLOB regarding our initial review of the privacy and civil liberties implications of counterterrorism activities undertaken pursuant to E.O. 12333. The Board completed three additional deep dive reviews of activities conducted under E.O. 12333 by the CIA (2) and NSA (1) that have been provided to Congress and the respective agencies. We found that those classified deep dive reviews ultimately were more meaningful and impactful regarding our balancing of privacy and civil liberties with national security value compared to what we could say publicly in an unclassified manner about E.O. 12333.
      • Second, our Board is a relatively small agency with limited resources. When presented with the option to carry on with a very broad oversight review of E.O. 12333, balanced against our need also to work on other timely, critical, and impactful issues affecting the privacy and civil liberties of Americans, we decided that there are other important issues today that demand our attention. We believe the Board now can and should focus its resources on other projects, which likely will continue to include oversight of specific counterterrorism activities conducted under E.O. 12333.
  • It appears Israel launched a cyber-attack on Iran’s Natanz nuclear facility that destroyed new centrifuges that had come into service. Israel’s government has not made any official comments about the attack, but Israeli media have quoted senior Israeli military officials who say Israel’s Mossad was indeed behind the attacks. The United States denied any involvement, and Iran is pointing the finger at Israel. The White House declared “[t]he United States was not involved in any manner.” Iran’s government called the attack “nuclear terrorism” and “a crime against humanity.” Moreover, it may not be a coincidence that the attack occurred a few days before the United States and Iran are set to resume talks on renewing the Obama Administration’s nuclear deal, from which the Trump Administration withdrew. Other important context is the history of cyber-attacks between the two Middle East adversaries, with Israel claiming Iran attacked two water facilities and Iran claiming Israel thereafter attacked a port facility.
  • Senator Mark Warner (D-VA) wrote a letter to Facebook CEO Mark Zuckerberg regarding his “concern for your companies’ continued amplification of harmful misinformation, particularly the spread of COVID-19 and vaccine misinformation promoted by the Instagram algorithm.” Warner cited examples that “demonstrate Facebook’s continued unwillingness or inability to enforce its own Community Standards and take action to reduce the spread of misinformation on its platforms.” He added “a recent report suggests that Facebook has failed to address the ways in which its products directly contribute towards radicalization, misinformation proliferation, and hate speech – deprioritizing or dismissing a range of proposed product reforms and interventions because of their tendency to depress user engagement with your products.” Warner stated:
    • Eliminating misinformation on your platforms is a valuable and necessary undertaking as online health misinformation can have a substantive impact on users’ intent to get vaccinated, with people exposed to COVID-19 and vaccine misinformation shown to be more likely to express vaccine hesitancy than those who were not.  Further, public health authorities shoulder an even greater burden – at a time of profound resource and budget strain – to combat misinformation amplified by platforms like Instagram, Facebook and WhatsApp. Given that over half of Americans rely on social media to get their news, with Facebook in particular serving as a “regular source of news” for about a third of Americans,  it is critical that Facebook take seriously its influence on users’ health decisions. 
    • To address these concerns, I request that you provide responses to the following questions by April 23, 2021:
      • 1. What procedures does Facebook have to exclude misinformation from its recommendation algorithm, specifically on Instagram?
      • 2. Please provide my office with Facebook internal research of the platform’s amplification of anti-vaccine content, groups, pages, and verified figures.
      • 3. Why were posts with content warnings about health misinformation promoted into Instagram feeds?
      • 4. When developing the new Suggested Posts function, what efforts did Facebook make to ensure that the new tool was only recommending reliable information?
      • 5. What is the process for the removal of prominent anti-vaccine accounts, and what is the rationale for disabling such users’ accounts from one of Facebook’s platforms but not others?
      • 6. How often are you briefed on the COVID-19 misinformation on Instagram and across Facebook platforms?
      • 7. Did Facebook perform safety checks to prevent the algorithmic amplification of COVID-19 misinformation? What did those safety protocols entail?
      • 8. Will anti-vaccine content continue to be monitored and removed after the COVID-19 pandemic?
      • 9. Please provide my office with Facebook’s policies for informing users that they were exposed to misinformation and how Facebook plans to remedy those harms.
      • 10. Combatting health misinformation amplified by large social media platforms puts an additional strain on the time, resources, and budgets of public health agencies – often requiring them to spend on online ads on the very platforms amplifying and propelling misinformation they must counter. Will you commit to provide free advertising for state and local public health authorities working to combat health misinformation?
  • The European Parliament adopted a resolution that “Concludes that, two years after its entry into application, the General Data Protection Regulation (GDPR) has been an overall success, and agrees with the Commission that it is not necessary at this stage to update or review the legislation.” The resolution also “[a]cknowledges that until the Commission’s next evaluation, the focus must continue to be on the improvement of implementation and on actions to strengthen the enforcement of the GDPR” and “[a]cknowledges the need for strong and effective enforcement of the GDPR in large digital platforms, integrated companies and other digital services, especially in the areas of online advertising, micro-targeting, algorithmic profiling, and the ranking, dissemination and amplification of content.” The Parliament also “[e]xpresses its concern about the uneven and sometimes non-existent enforcement of the GDPR by national data protection authorities (DPAs) more than two years after the start of its application, and therefore regrets that the enforcement situation has not substantially improved compared to the situation under Directive 95/46/EC.” Regarding possible future legislation, the Parliament stated:
    • Takes the view that by being technology-neutral, the GDPR provides a solid regulatory framework for emerging technologies; considers, nonetheless, that further efforts are needed to address broader issues of digitisation, such as monopoly situations and power imbalances through specific regulation, and to carefully consider the correlation of the GDPR with each new legislative initiative in order to ensure consistency and address legal gaps; reminds the Commission of its obligation to ensure legislative proposals, such as the data governance, data act, digital services act or on artificial intelligence, must always fully comply with the GDPR and the Law Enforcement Directive(9); considers that the final texts adopted by the co-legislators through interinstitutional negotiations need to fully respect the data protection acquis; regrets, however, that the Commission itself does not always have a consistent approach to data protection in legislative proposals; stresses that a reference to the application of the GDPR, or ‘without prejudice to the GDPR’, does not automatically make a proposal GDPR compliant; calls on the Commission to consult the European Data Protection Supervisor (EDPS) and the EDPB where there is an impact on the protection of individuals’ rights and freedoms with regard to the processing of personal data following the adoption of proposals for a legislative act; calls further on the Commission, when preparing proposals or recommendations, to endeavour to consult the EDPS, in order to ensure consistency of data protection rules throughout the Union, and to always conduct an impact assessment;
    • Notes that profiling, although only allowed by Article 22 GDPR under strict and narrow conditions, is increasingly used as the online activities of individuals allow for deep insights into their psychology and private life; notes that since profiling makes it possible to manipulate users’ behaviour, the collection and processing of personal data concerning the use of digital services should be limited to the extent strictly necessary in order to provide the service and bill users; calls on the Commission to propose strict sector-specific data protection legislation for sensitive categories of personal data where it has not yet done so; demands the strict enforcement of the GDPR in the processing of personal data;
    • Calls for the empowerment of consumers so that they can make informed decisions on the privacy implications of using new technologies and to ensure fair and transparent processing by providing easy-to-use options to give and withdraw consent to the processing of their personal data as provided for by the GDPR;
    • Finally, regarding the still-to-be-negotiated ePrivacy Regulation, the parliament stated:
      • Expresses its deep concern about the lack of implementation of the ePrivacy Directive by the Member States in view of the changes introduced by the GDPR; calls on the Commission to speed up its assessment and initiate infringement procedures against those Member States that failed to properly implement the ePrivacy Directive; is greatly concerned that the reform of the ePrivacy rules, overdue for several years, leads to fragmentation of the legal landscape in the EU, detrimental to both businesses and citizens; recalls that the ePrivacy Regulation was designed to complement and particularise the GDPR and coincide with the entry into application of the GDPR; underlines that the reform of the ePrivacy rules must not lead to a lowering of the current level of protection afforded under the GDPR and the ePrivacy Directive; regrets the fact that it took four years for the Council to eventually adopt its negotiating position on the proposal for the ePrivacy Regulation, while Parliament adopted its negotiation position in October 2017; recalls the importance of upgrading the ePrivacy rules from 2002 and 2009 in order to improve protection of fundamental rights of citizens and legal certainty for companies, complementing the GDPR;
  • The chair and ranking member of the Senate Homeland Security and Governmental Affairs Committee have written the acting Director of the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Chief Information Security Officer. Senators Gary Peters (D-MI) and Rob Portman (R-OH) posed a series of questions to these Biden Administration officials, suggesting the committee will continue its oversight of the SolarWinds and Microsoft Exchange hacks with an eye towards updating legislation enacted in the previous decade to secure the government’s systems.
    • In the letter to acting CISA Director Brandon Wales, Peters and Portman stated “[a] recent report has raised the troubling possibility that the Department of Homeland Security (DHS or the Department) did not fully report the extent of the SolarWinds breach to Congress.” Peters and Portman added:
      • Our hearing also revealed key limitations of the EINSTEIN intrusion detection and intrusion prevention system. EINSTEIN is a signature-based intrusion detection and prevention system that sits on the perimeter of civilian federal agencies’ computer networks. As you alluded to in your testimony, network perimeters are increasingly irrelevant with modern information technology infrastructure that emphasizes end-to-end encryption and reliance on cloud service providers outside of an organization’s network; these technologies represent an inherent limitation of perimeter-based intrusion detection systems like EINSTEIN. Additionally, signature-based intrusion detection and intrusion prevention systems are largely limited to detecting previously seen threats—they are ineffective at identifying or blocking sophisticated and novel attacks like the SolarWinds hack. As this Committee warned nearly five years ago, “Current reliance on decades old signature-based detection technology limits the effectiveness of EINSTEIN against advanced persistent threats.”
      • The authorization for DHS to operate EINSTEIN lapses on December 18, 2022 and we look forward to working with you to determine whether and how to reauthorize the program to address these limitations and, more broadly, how to defend better against advanced persistent cyber threats. To assist us in this investigation and these policy considerations, please provide unredacted copies of the following documents no later than 5:00 p.m. on April 20, 2021:
      • 1. Documents sufficient to show the specific information systems compromised at federal agencies shared with CISA in regards to the SolarWinds and MS Exchange cyberattacks or that may have been captured by EINSTEIN in the past six months, including the names of the individuals whose accounts or systems were compromised or targeted if at the SES, ES, or equivalent level; and the agencies and programs with which those individuals and systems were associated, to the greatest level of detail possible.
      • 2. The Department’s current cybersecurity strategy and implementation plan and intrusion assessment plan.
      • 3. Documents sufficient to show the current and planned technical capabilities of EINSTEIN 1 (E1); EINSTEIN 2 (E2); EINSTEIN 3 Accelerated (E3A); and Enhanced Cybersecurity Services, including any improvements, new technologies, modification of existing technologies, advanced protective technologies, or detection technologies beyond signature based detection planned, acquired, tested, evaluated, piloted, or deployed on the EINSTEIN platform.
      • 4. All reports, evaluations, studies, or reviews related to EINSTEIN classified indicators, including the assessment CISA performed in 2020 on the efficacy of utilizing classified indicators and any update since the publication of that study.
      • 5. All classified indicators in use on E3A as of the date of this letter as well as any contextual information DHS has regarding those indicators.
      • 6. Documents sufficient to show the current and planned technical capabilities of the Continuous Diagnostics and Mitigation (CDM) program including advanced network security tools to improve visibility of network activity and to detect and to mitigate intrusions and anomalous activity, and the current plan to ensure that each agency utilizes advanced network security tools as part of the CDM program.
      • 7. The performance work statement for each CDM integrator including for each: documents sufficient to show whether the contract is fixed price or cost based and incentives and awards.
      • 8. Operations and spending plans for the National Cybersecurity Protection System and for the CDM program to the greatest level of detail possible for each of the past five fiscal years.
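    • The core limitation the senators describe, that a signature-based system like EINSTEIN can only flag previously catalogued patterns, can be illustrated with a toy sketch. The signature set and payloads below are entirely hypothetical; this is not EINSTEIN’s actual implementation:

```python
# Toy sketch of signature-based intrusion detection. A sensor of this kind
# flags traffic only when it contains a byte pattern already in its catalogue.
KNOWN_SIGNATURES = {b"evil-beacon-v1", b"cred-dumper-x"}  # hypothetical catalogue

def signature_match(payload: bytes) -> bool:
    """Return True only if the payload contains a previously seen signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

seen_before = b"GET /update?id=evil-beacon-v1"
novel_implant = b"GET /update?id=legit-looking-signed-dll"

assert signature_match(seen_before)        # known threat: detected
assert not signature_match(novel_implant)  # novel threat: passes undetected
```

      The second assertion captures the senators’ point: a novel attack such as a trojanized software update shares no bytes with any catalogued signature, so a purely signature-based perimeter sensor never fires.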
    • In the letter to Federal Chief Information Security Officer Christopher DeRusha, Peters and Portman stated:
      • We look forward to working with the Administration on needed improvements to the Federal Information Security Modernization Act of 2014, and other legislative improvements to defend better against advanced persistent cyber threats. To assist us in this investigation and these policy considerations, please provide unredacted copies of the following documents no later than 5:00 p.m., April 20, 2021:
      • 1. The current federal cybersecurity strategy and any associated implementation plan(s) and a description of any plan to update the strategy or plan(s).
      • 2. A list of roles and responsibilities for federal cybersecurity including an assessment of how these defined roles prevented duplicative efforts and facilitated the federal government’s response to the SolarWinds attack.
      • 3. Documents sufficient to show the specific information systems compromised or targeted at federal agencies in the SolarWinds Orion attack and Microsoft Exchange attacks; the names of the individuals whose accounts or systems were compromised or targeted if at the SES, ES, or equivalent level; and the agencies, programs, and teams with which those individuals and systems were associated, to the greatest level of detail possible.
      • 4. Documents sufficient to show current and planned metrics used to measure security in accordance with section 3554 of title 44, United States Code.
      • 5. Cyberscope data received for each department or agency for FY 2020.
  • NATO Secretary General Jens Stoltenberg released NATO’s 2020 annual report that “covers NATO’s work and our achievements throughout the year.” As cyber has grown in importance in the military world, NATO has placed greater emphasis on its capabilities and coordinating those among its member nations. In the report, regarding cybersecurity, NATO stated:
    • A secure cyberspace is essential to everything the Alliance does. This is why cyber defence is part of NATO’s core task of collective defence. NATO has made clear that a severe cyber attack could lead it to invoke Article 5 of the Washington Treaty.
    • NATO continues to develop doctrines and to conduct training and exercises to ensure it is just as effective in cyberspace as it is on land, in the air and at sea. In 2020, NATO published its first cyber doctrine. This was an important step in providing guidance for the conduct of cyberspace operations.
    • NATO Allies also continued to enhance their national cyber resilience, in line with the commitment they made at the Warsaw Summit in 2016. Since then, they have strengthened their cyber resilience by issuing and revising strategic guidance, including on dealing with cyber risk to supply chains, implementing organisational reforms and investing in training.
    • Information-sharing has never been more critical. Allies established the NATO Intelligence on Cyberspace Community of Interest to more regularly exchange information, assessments and best practices — improving NATO’s ability to prevent and respond to cyber threats. In addition, the NATO Communications and Information Agency continued to facilitate information exchanges between NATO Allies on cyber threats and incidents through its Cyber Collaboration Network. Twenty-one Allies have joined the network to date.
    • NATO also increased its engagements with the EU, notably in the areas of information exchange, training, research and exercises. NATO invested in enhancing its ties with the private sector through the NATO Industry Cyber Partnership, including in the areas of threat intelligence and incident response.
    • NATO also touted its work on emerging technologies:
      • In 2020, NATO implemented the Roadmap on Emerging and Disruptive Technologies, which NATO Leaders adopted in London in 2019. This work sets the foundations to allow NATO to adopt new technologies at the speed of relevance and ensure coherence of NATO’s innovation efforts. Work also continued to build a common understanding among Allies on the challenges and opportunities arising from the key technology areas of data, artificial intelligence, autonomy, hypersonic systems, quantum technologies as well as biotechnologies and human enhancements. Throughout the year, NATO conducted a number of workshops in these areas, each involving up to 150 participants, bringing together Allied officials, the private sector as well as academia. The workshops also tackled cross-cutting challenges such as interoperability, financing, arms control and NATO’s engagement with non-defence technology companies.
      • In July, the Secretary General announced the creation of the NATO Advisory Group on Emerging and Disruptive Technologies. The group, composed of 12 experts from academia and industry, advises NATO on its efforts to drive the adoption of new technologies. The experts have led cutting-edge research, driven policy developments on emerging technologies and have been responsible for the delivery of innovation programmes in their respective domains. In 2020, the group provided advice on how NATO might best fund its innovation efforts, build an operational network of Innovation Centres, promote successful innovation business and operating models, and increase the level of technical literacy across NATO.
  • The Office of Communications (Ofcom) and the Information Commissioner’s Office (ICO) “outlined a joint plan for tackling nuisance and scam calls for 2021/2022.” Ofcom and the ICO explained:
    • The previous annual update, published in May 2020, set out our areas of focus:
      • Taking targeted action against people or companies that are not following our rules.
      • Raising awareness of and tackling Coronavirus (Covid-19) scams and continuing to support the work of Stop Scams UK.
      • Working with telecoms companies to improve how they disrupt and prevent nuisance calls, by reviewing solutions made available to customers by their provider.
      • Identifying opportunities to deter and punish organisations and people responsible for nuisance calls and scams by working with other regulators and enforcement agencies.
      • Sharing intelligence with others, including international partners and enforcement agencies with responsibility for tackling scams and fraud.
    • This update reports on the progress made in each of the areas listed over the last 10 months and highlights how our collaborative efforts are making a positive difference to people. Ofcom and the ICO have enforcement tools within our own regimes but we recognise that results can be achieved more successfully through coordination between regulators, with industry and across sectors. This update details the coordinated workstreams in which Ofcom and the ICO are already involved, and our plans for future collaboration.
  • Five financial service regulators are asking for information and input on “financial institutions’ use of artificial intelligence (AI), including machine learning (ML).” In their Federal Register notice, the Comptroller of the Currency, the Federal Reserve System, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration stated:
    • The purpose of this request for information (RFI) is to understand respondents’ views on the use of AI by financial institutions in their provision of services to customers and for other business or operational purposes; appropriate governance, risk management, and controls over AI; and any challenges in developing, adopting, and managing AI. The RFI also solicits respondents’ views on the use of AI in financial services to assist in determining whether any clarifications from the agencies would be helpful for financial institutions’ use of AI in a safe and sound manner and in compliance with applicable laws and regulations, including those related to consumer protection.
    • Financial institutions are exploring AI-based applications in a variety of fields. Uses of AI by financial institutions include (but are not limited to):
      • Flagging unusual transactions. This involves employing AI to identify potentially suspicious, anomalous, or outlier transactions (e.g., fraud detection and financial crime monitoring). It involves using different forms of data (e.g., email text, audio data), both structured and unstructured, with the aim of identifying fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/anti-money laundering investigations, monitoring employees for improper practices, and detecting data anomalies.
      • Personalization of customer services. AI technologies, such as voice recognition and natural language processing (NLP), are used to improve customer experience and to gain efficiencies in the allocation of financial institution resources. One example is the use of chatbots to automate routine customer interactions, such as account opening activities and general customer inquiries. AI is leveraged at call centers to process and triage customer calls to provide customized service. These technologies are also used to better target marketing and customize trade recommendations.
      • Credit decisions. This involves the use of AI to inform credit decisions in order to enhance or supplement existing techniques. This application of AI may use traditional data or employ alternative data (such as cash flow transactional information from a bank account).
      • Risk management. AI may be used to augment risk management and control practices. For example, an AI approach might be used to complement and provide a check on another, more traditional credit model. Financial institutions may also use AI to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting. AI can assist internal audit and independent risk management to increase sample size (such as for testing), evaluate risk, and refer higher-risk issues to human analysts. AI may also be used in liquidity risk management, for example, to enhance monitoring of market conditions or collateral management.
      • Textual analysis. Textual analysis refers to the use of NLP for handling unstructured data (generally text) and obtaining insights from that data or improving efficiency of existing processes. Applications include analysis of regulations, news flow, earnings reports, consumer complaints, analyst ratings changes, and legal documents.
      • Cybersecurity. AI may be used to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. Examples include real-time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identifying compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files.
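The first use case in the agencies’ list, flagging unusual transactions, is commonly implemented with unsupervised anomaly detection. The sketch below is purely illustrative and not drawn from the RFI: the features, data, and threshold are hypothetical, and it uses scikit-learn’s IsolationForest as one of many possible detectors.

```python
# Illustrative sketch (not any institution's actual system): flagging
# outlier payments with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical feature matrix: [amount_usd, hour_of_day] for routine payments.
normal = np.column_stack([
    rng.normal(50, 10, 500),   # typical amounts around $50
    rng.normal(13, 2, 500),    # mostly daytime activity
])
# A few synthetic outliers: very large transfers in the middle of the night.
outliers = np.array([[5000.0, 3.0], [7500.0, 2.0], [6200.0, 4.0]])

X = np.vstack([normal, outliers])

# contamination sets the expected share of anomalies (a tunable assumption).
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)      # -1 = flagged as anomalous, 1 = normal

flagged = X[labels == -1]
print(f"{len(flagged)} transactions flagged for review")
```

In practice, flagged transactions would be routed to human analysts rather than blocked automatically, which is consistent with the human-review pattern the RFI describes for higher-risk issues.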
  • Facebook Vice President of Global Affairs Nick Clegg wrote a lengthy blog post defending Facebook and other technology companies’ use of algorithms. He claimed:
    • Data-driven personalized services like social media have empowered people with the means to express themselves and to communicate with others on an unprecedented scale. And they have put tools into the hands of millions of small businesses around the world which were previously available only to the largest corporations. Personalized digital advertising not only allows billions of people to use social media for free, it is also more useful to consumers than untargeted, low-relevance advertising. Turning the clock back to some false sepia-tinted yesteryear — before personalized advertising, before algorithmic content ranking, before the grassroots freedoms of the internet challenged the powers that be — would forfeit so many benefits to society.
    • But that does not mean the concerns about how humans and algorithmic systems interact should be dismissed. There are clearly issues to be resolved and questions to be answered. The internet needs new rules — designed and agreed by democratically elected institutions — and technology companies need to make sure their products and practices are designed in a responsible way that takes into account their potential impact on society. That starts — but by no means ends — with putting people, not machines, more firmly in charge.
    • Clegg contended:
      • Central to many of the charges by Facebook’s critics is the idea that its algorithmic systems actively encourage the sharing of sensational content and are designed to keep people scrolling endlessly. Of course, on a platform built around people sharing things they are interested in or moved by, content that provokes strong emotions is invariably going to be shared. At one level, the fact that people respond to sensational content isn’t new. As generations of newspaper sub-editors can attest, emotive language and arresting imagery grab people’s attention and engage them. It’s human nature. But Facebook’s systems are not designed to reward provocative content. In fact, key parts of those systems are designed to do just the opposite.
      • Facebook reduces the distribution of many types of content — meaning that content appears lower in your News Feed — because they are sensational, misleading, gratuitously solicit engagement, or are found to be false by our independent fact checking partners. For example, Facebook demotes clickbait (headlines that are misleading or exaggerated), highly sensational health claims (like those promoting “miracle cures”), and engagement bait (posts that explicitly seek to get users to engage with them).
      • Facebook’s approach goes beyond addressing sensational and misleading content post-by-post. When Pages and Groups repeatedly post some of these types of content to Facebook, like clickbait or misinformation, Facebook reduces the distribution of all the posts from those Pages and Groups. And where websites generate an extremely disproportionate amount of their traffic from Facebook relative to the rest of the internet, which can be indicative of a pattern of posting more sensational or spammy content, Facebook likewise demotes all the posts from the Pages run by those websites.
      • Facebook has also adjusted other aspects of its approach to ranking, including fundamental aspects, in ways that would be likely to devalue sensational content. Since the early days of the platform, the company has relied on explicit engagement metrics — whether people “liked,” commented on, or shared a post — to determine which posts they would find most relevant. But the use of those metrics has evolved and other signals Facebook considers have expanded.
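The demotion Clegg describes, reducing a flagged post’s distribution rather than removing it, amounts to down-weighting a ranking score. Below is a toy sketch under stated assumptions: the signal names and multipliers are invented for illustration and are not Facebook’s actual system.

```python
# Toy illustration of demotion-based ranking; the flags and weights here
# are hypothetical, not Facebook's real signals.
from dataclasses import dataclass, field

DEMOTIONS = {
    "clickbait": 0.3,                  # hypothetical multiplier
    "engagement_bait": 0.4,            # hypothetical multiplier
    "disputed_by_fact_checkers": 0.2,  # hypothetical multiplier
}

@dataclass
class Post:
    base_relevance: float              # score from engagement/relevance signals
    flags: list = field(default_factory=list)

def ranking_score(post: Post) -> float:
    """Apply each demotion flag multiplicatively to the base score."""
    score = post.base_relevance
    for flag in post.flags:
        score *= DEMOTIONS.get(flag, 1.0)
    return score

posts = [
    Post(base_relevance=0.9, flags=["clickbait"]),  # provocative but flagged
    Post(base_relevance=0.6, flags=[]),             # ordinary post
]
ranked = sorted(posts, key=ranking_score, reverse=True)
# The flagged post (0.9 * 0.3 = 0.27) now ranks below the ordinary one (0.6).
```

The design point is that demotion leaves the content up but makes it less likely to be seen, a softer intervention than removal.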

Further Reading

  • “533 million Facebook users’ phone numbers, personal information exposed online, report says” By Hannah Knowles — The Washington Post. An immense cache of Facebook user data was leaked online, and even though this trove of personal data has been leaked before, its distribution this time is more widespread. A reported 533 million users in 106 countries had their personal data, possibly including phone numbers, birth dates, and other biographical data, made available through this exfiltration. It appears that criminals are behind this hack. In a blog posting, Facebook asserted “It is important to understand that malicious actors obtained this data not through hacking our systems but by scraping it from our platform prior to September 2019.” And this is true, for these data have been passed around the darker corners of the online world for some time, as explained in this January 2021 article. Be that as it may, the Israeli security firm that publicized this latest data dump is claiming the data were available in the first place because of weak Facebook security. Facebook says it found and fixed the vulnerability sometime after the 2019 exfiltration. Nonetheless, this leak will throw more scrutiny on the company at a time when it is already being assailed for alleged anticompetitive conduct and for shoddy content moderation policies that have allowed extremism to flourish and be amplified by its algorithms.
  • “Call centre staff to be monitored via webcam for home-working ‘infractions’” By Peter Walker — The Guardian. Another sign of a dystopian future? Teleperformance, a French multinational, is requiring its teleworking staff to install web cameras so the employees can be policed for eating at their computer or being away from their computer without permission. The company backpedaled on plans to use this new system in the United Kingdom after Members of Parliament and labor unions started making inquiries, but the company operates in many other nations. The company claims the intent of the new system is “to respond to the overwhelming concerns of isolation, lack of team engagement and support, not seeing anyone from one day to the next, raised by those who are at home.” The company further claimed “[w]e absolutely trust them to do their jobs in a professional manner….[w]e are taking very seriously the concerns you raised … as they can be no further from the truth.”
  • “On Google Podcasts, a Buffet of Hate” By Reggie Ugwu — The New York Times. Google Podcasts apparently stands alone in its offering of white supremacist and white nationalist extremist material one cannot find on other major podcasting apps and platforms. However, to be fair, former Trump White House advisor Steve Bannon’s podcast is available on both Apple’s and Google’s apps even after YouTube permanently banned his channel for violating its terms of service. Critics claim Google could rid its podcast platform of extremist material if it wished, while the company has embraced a free expression stance. It is likely this segment of the online world will come under more scrutiny.
  • “Amazon Is the Target of Small-Business Antitrust Campaign” By Ryan Tracy — The Wall Street Journal. In a time-tested tactic in Washington, larger entities have banded together with smaller entities, the latter of which are the face and putative driving force behind the organization. In this instance, the Small Business Rising coalition has formed to push for stronger antitrust and anti-competition policy enforcement and legislative reform, especially against Amazon, which they argue is acting unfairly against smaller businesses by running an online marketplace and offering its own products, often based on data about how third-party products are performing. Some larger players are part of this coalition, including the National Grocers Association, the American Booksellers Association, and the Alliance for Pharmacy Compounding. The group seems to be throwing its weight behind some of the reforms the House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee Chair David Cicilline (D-RI) and fellow Democrats are calling for. However, what may be new here is that some business interests seem to be throwing in with public advocacy organizations with links to labor unions, a sign perhaps of cooperation to come among organizations that traditionally line up on opposite sides on most issues.
  • “The Little-Known Data Broker Industry Is Spending Big Bucks Lobbying Congress” By Alfred Ng and Maddy Varner — The Markup. A number of companies, including Oracle, CoreLogic, and others, have spent heavily lobbying the federal government over their little-understood data brokering activities. They seem to want a national privacy law that would preempt states, and one may assume these entities also oppose bills introduced in recent years that would further regulate and scrutinize their activities at the federal level.
  • “Russia suspected of stealing thousands of State Department emails” By Betsy Woodruff Swan and Natasha Bertrand — Politico. In a breach possibly unrelated to the SolarWinds hack, Russians may have accessed networks and email at the Department of State. Neither the agency nor the White House would confirm or deny the breach. Information on the breach was leaked by “Congressional sources” who say Russians accessed the networks of the Bureau of European and Eurasian Affairs and the Bureau of East Asian and Pacific Affairs. If the report is true, this is the second breach of the Department of State by Russia in the last six years.

Coming Events

  • The Senate Appropriations Committee’s Commerce, Justice, Science, and Related Agencies Subcommittee may hold a hearing on FY 2022 budget request for the National Science Foundation and the competitiveness of the United States on 13 April.
  • The Senate Appropriations Committee’s Defense Subcommittee may hold a hearing on the Department of Defense’s innovation and research on 13 April.
  • On 14 April, the Senate Intelligence Committee will hold open and closed hearings with the heads of the major United States intelligence agencies and Director of National Intelligence Avril Haines on worldwide threats.
  • The House Veterans’ Affairs Committee’s Technology Modernization Subcommittee will hold a hearing on the Department of Veterans Affairs’ Electronic Health Record Modernization Program on 14 April.
  • On 14 April, the Senate Armed Services Committee’s Cyber Subcommittee will hold a hearing on future cybersecurity architectures with these witnesses:
    • Mr. Robert Joyce, Director of Cybersecurity, National Security Agency
    • Mr. David McKeown, Senior Information Security Officer/Chief Information Officer for Cybersecurity, Department of Defense
    • Rear Admiral William Chase III, Senior Military Advisor for Cyber Policy to the Under Secretary of Defense for Policy/Deputy Principal Cyber Advisor to the Secretary of Defense
  • On 15 April, the House Intelligence Committee will hold a hearing with the heads of the major United States intelligence agencies and Director of National Intelligence Avril Haines on worldwide threats.
  • The House Oversight and Reform Committee’s Government Operations Subcommittee will hold a hearing to assess agency compliance with the Federal Information Technology Acquisition Reform Act (FITARA) on 16 April.
  • The Federal Communications Commission (FCC) will hold an open meeting on 22 April with this draft agenda:
    • Text-to-988. The Commission will consider a Further Notice of Proposed Rulemaking to increase the effectiveness of the National Suicide Prevention Lifeline by proposing to require covered text providers to support text messaging to 988. (WC Docket No. 18-336)
    • Commercial Space Launch Operations. The Commission will consider a Report and Order and Further Notice of Proposed Rulemaking that would adopt a new spectrum allocation for commercial space launch operations and seek comment on additional allocations and service rules. (ET Docket No. 13-115)
    • Wireless Microphones. The Commission will consider a Notice of Proposed Rulemaking that proposes to revise the technical rules for Part 74 low-power auxiliary station (LPAS) devices to permit a recently developed, and more efficient, type of wireless microphone system. (RM-11821; ET Docket No. 21-115)
    • Improving 911 Reliability. The Commission will consider a Third Notice of Proposed Rulemaking to promote public safety by ensuring that 911 call centers and consumers receive timely and useful notifications of disruptions to 911 service. (PS Docket Nos. 13-75, 15-80; ET Docket No. 04-35)
    • Concluding the 800 MHz Band Reconfiguration. The Commission will consider an Order to conclude its 800 MHz rebanding program due to the successful fulfillment of this public safety mandate. (WT Docket No. 02-55)
    • Enhancing Transparency of Foreign Government-Sponsored Programming. The Commission will consider a Report and Order to require clear disclosures for broadcast programming that is sponsored, paid for, or furnished by a foreign government or its representative. (MB Docket No. 20-299)
    • Imposing Application Cap in Upcoming NCE FM Filing Window. The Commission will consider a Public Notice to impose a limit of ten applications filed by any party in the upcoming 2021 filing window for new noncommercial educational FM stations. (MB Docket No. 20-343)
    • Enforcement Bureau Action. The Commission will consider an enforcement action.
  • The Federal Trade Commission (FTC) will hold a workshop titled “Bringing Dark Patterns to Light” on 29 April.
  • The Department of Commerce’s National Telecommunications and Information Administration (NTIA) will hold “a virtual meeting of a multistakeholder process on promoting software component transparency” on 29 April.
  • On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo by Chris Yang on Unsplash
