Further Reading, Other Developments, and Coming Events (21 August)

Here are Further Reading, Other Developments, and Coming Events.

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 16: Key Cyber Insights
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 15 September titled “Stacking the Tech: Has Google Harmed Competition in Online Advertising?” In their press release, Chair Mike Lee (R-UT) and Ranking Member Amy Klobuchar (D-MN) asserted:
    • Google is the dominant player in online advertising, a business that accounts for around 85% of its revenues and which allows it to monetize the data it collects through the products it offers for free. Recent consumer complaints and investigations by law enforcement have raised questions about whether Google has acquired or maintained its market power in online advertising in violation of the antitrust laws. News reports indicate this may also be the centerpiece of a forthcoming antitrust lawsuit from the U.S. Department of Justice. This hearing will examine these allegations and provide a forum to assess the most important antitrust investigation of the 21st century.
  • On 22 September, the Federal Trade Commission (FTC) will hold a public workshop “to examine the potential benefits and challenges to consumers and competition raised by data portability.” Comments are due by 21 August; the FTC “is seeking comment on a range of issues including:
    • How are companies currently implementing data portability? What are the different contexts in which data portability has been implemented?
    • What have been the benefits and costs of data portability? What are the benefits and costs of achieving data portability through regulation?
    • To what extent has data portability increased or decreased competition?
    • Are there research studies, surveys, or other information on the impact of data portability on consumer autonomy and trust?
    • Does data portability work better in some contexts than others (e.g., banking, health, social media)? Does it work better for particular types of information over others (e.g., information the consumer provides to the business vs. all information the business has about the consumer, information about the consumer alone vs. information that implicates others such as photos of multiple people, comment threads)?
    • Who should be responsible for the security of personal data in transit between businesses? Should there be data security standards for transmitting personal data between businesses? Who should develop these standards?
    • How do companies verify the identity of the requesting consumer before transmitting their information to another company?
    • How can interoperability among services best be achieved? What are the costs of interoperability? Who should be responsible for achieving interoperability?
    • What lessons and best practices can be learned from the implementation of the data portability requirements in the GDPR and CCPA? Has the implementation of these requirements affected competition and, if so, in what ways?”
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September, but an agenda is not available at this time.

Other Developments

  • The National Institute of Standards and Technology (NIST) published for public comment Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), in which the authors stated:
    • We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one size fits all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights to the challenges of designing explainable AI systems.
    • NIST said “our four principles of explainable AI are:
      • Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
      • Meaningful: Systems provide explanations that are understandable to individual users.
      • Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
      • Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
    • A year ago, NIST published “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” as required by Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence,” which set an August 10, 2019 due date.
      • NIST explained that “[t]here are a number of cross-sector (horizontal) and sector-specific (vertical) AI standards available now and many others are being developed by numerous standards developing organizations (SDOs)…[and] [s]ome areas, such as communications, have well-established and regularly maintained standards in widespread use, often originally developed for other technologies. Other aspects, such as trustworthiness, are only now being considered.” NIST explained that its AI plan “identifies the following nine areas of focus for AI standards: 
        • Concepts and terminology
        • Data and knowledge 
        • Human interactions 
        • Metrics
        • Networking
        • Performance testing and reporting methodology
        • Safety
        • Risk management
        • Trustworthiness
      • NIST asserted that “[i]n deciding which standards efforts merit strong Federal government involvement, U.S. government agencies should prioritize AI standards efforts that are:
        • Consensus-based, where decision-making is based upon clearly established terms or agreements that are understood by all involved parties, and decisions are reached on general agreement.
        • Inclusive and accessible, to encourage input reflecting diverse and balanced communities of users, developers, vendors, and experts. Stakeholders should include representatives from diverse technical disciplines as well as experts and practitioners from non-traditional disciplines of special importance to AI such as ethicists, economists, legal professionals, and policy makers: essentially, accommodating all desiring a “seat at the table.”
        • Multi-path, developed through traditional and novel standards-setting approaches and organizations that best meet the needs of developers and users in the marketplace as well as society at large.
        • Open and transparent, operating in a manner that: provides opportunity for participation by all directly- and materially- affected; has well-established and readily accessible operating rules, procedures, and policies that provide certainty about decision making processes; allows timely feedback for further consideration of the standard; and ensures prompt availability of the standard upon adoption.
        • Globally relevant and non-discriminatory, where standards avoid becoming non-tariff trade barriers or locking in particular technologies or products.
  • Consumer Watchdog has sued Zoom Video Communications “for making false and deceptive representations to consumers about its data security practices in violation of the District of Columbia Consumer Protection Procedures Act (CPPA).” The advocacy organization asserted:
    • To distinguish itself from competitors and attract new customers, Zoom began advertising and touting its use of a strong security feature called “end-to-end encryption” to protect communications on its platform, meaning that the only people who can access the communicated data are the sender and the intended recipient. Using end-to-end encryption prevents unwanted third parties—including the company that owns the platform (in this case, Zoom)—from accessing communications, messages, and data transmitted by users.
    • Unfortunately, Zoom’s claims that communications on its platform were end-to-end encrypted were false. Zoom only used the phrase “end-to-end encryption” as a marketing device to lull consumers and businesses into a false sense of security.
    • The reality is that Zoom is, and has always been, capable of intercepting and accessing any and all of the data that users transmit on its platform—the very opposite of end-to-end encryption. Nonetheless, Zoom relied on its end-to-end encryption claim to attract customers and to build itself into a publicly traded company with a valuation of more than $70 billion.
    • Consumer Watchdog is seeking the greater of treble damages or $1,500 per violation along with other relief.
    • Zoom is being sued in a number of other cases, including two class action suits in United States courts in Northern California (#1 and #2).
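In plainer terms, “end-to-end” means the relaying service never holds a key that can decrypt the traffic, so it can carry messages without being able to read them. A deliberately insecure toy sketch (XOR with a shared secret standing in for a real cipher — NOT actual cryptography) illustrates the property at issue in the suit:

```python
# Toy illustration (NOT real cryptography) of the end-to-end
# encryption property: with E2E, a relaying server sees only
# ciphertext; only holders of the shared key can read the message.
import secrets
from itertools import cycle

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with a repeating key -- insecure, for illustration only.
    return bytes(b ^ k for b, k in zip(plaintext, cycle(key)))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Sender and recipient share a key the server never learns.
key = secrets.token_bytes(32)
message = b"meeting notes"

# This is all that transits the server's infrastructure:
ciphertext = toy_encrypt(key, message)
assert ciphertext != message                     # server cannot read it
assert toy_decrypt(key, ciphertext) == message   # recipient can
```

The complaint's allegation is that Zoom's design was closer to transport encryption: Zoom's own servers held keys capable of decrypting user traffic, which is the opposite of the guarantee sketched above.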
  • The United States (U.S.) Government Accountability Office (GAO) decided the Trump Administration violated the order of succession at the U.S. Department of Homeland Security (DHS) by naming then-Customs and Border Protection (CBP) Commissioner Kevin McAleenan the acting Secretary after former Secretary Kirstjen Nielsen resigned early in 2019. The agency’s existing order of succession made clear that Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs was next in line to lead DHS. The GAO added “[a]s such, the subsequent appointments of Under Secretary for Strategy, Policy, and Plans, Chad Wolf and Principal Deputy Director of U.S. Citizenship and Immigration Services (USCIS) Ken Cuccinelli were also improper because they relied on an amended designation made by Mr. McAleenan.”
    • However, the GAO punted on the implications of its findings:
      • In this decision we do not review the consequences of Mr. McAleenan’s service as Acting Secretary, other than the consequences of the November delegation, nor do we review the consequences of Messrs. Wolf and Cuccinelli’s service as Acting Secretary and Senior Official Performing the Duties of Deputy Secretary respectively.
      • We are referring the question as to who should be serving as the Acting Secretary and the Senior Official Performing the Duties of Deputy Secretary to the DHS Office of Inspector General for its review.
      • We also refer to the Inspector General the question of consequences of actions taken by these officials, including consideration of whether actions taken by these officials may be ratified by the Acting Secretary and Senior Official Performing the Duties of Deputy Secretary as designated in the April Delegation.
    • The GAO also denied DHS’s request to rescind this opinion because “DHS has not shown that our decision contains either material errors of fact or law, nor has DHS provided information not previously considered that warrants reversal or modification of the decision.”
    • The chairs of the House Homeland Security and Oversight and Reform Committees had requested the GAO legal opinion and claimed in their press release the opinion “conclud[es] that President Donald Trump’s appointments to senior leadership positions at the Department of Homeland Security were illegal and circumvented both the Federal Vacancy Reform Act and the Homeland Security Act.”
  • Top Democrats on the House Energy and Commerce Committee wrote the members of the Facebook Oversight Board expressing their concern the body “does not have the power it needs to change Facebook’s harmful policies.” Chair Frank Pallone, Jr. (D-NJ), Communications and Technology Subcommittee Chair Mike Doyle (D-PA) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL) “encouraged the newly appointed members to exert pressure on Facebook to listen to and act upon their policy recommendations, something that is not currently included in the Board Members’ overall responsibilities.” They asserted:
    • The Committee leaders believe Facebook is intentionally amplifying divisive and conspiratorial content because such content attracts more customer usage and, with it, advertising revenue. Pallone, Doyle and Schakowsky were also troubled by recent reports that Facebook had an opportunity to retune its systems responsible for the amplification of this content, but chose not to. 
    • The three Committee leaders wrote that the public interest should be the Oversight Board’s priority and that it should not be influenced by the profit motives of Facebook executives. Pallone, Doyle and Schakowsky also requested the board members answer a series of questions in the coming weeks.
  • The United States (U.S.) Government Accountability Office (GAO) examined how well the U.S. Department of Homeland Security (DHS) and selected federal agencies are implementing a cybersecurity program designed to give the government better oversight and control of agency networks. In auditing the Continuous Diagnostics and Mitigation (CDM) program, the GAO found limited success and ongoing, systemic roadblocks preventing increased levels of security. DHS has estimated the program will cost $10.9 billion over ten years.
    • The GAO concluded
      • Selected agencies reported that the CDM program had helped improve their awareness of hardware on their networks. However, although the program has been in existence for several years, these agencies had only implemented the foundational capability for managing hardware to a limited extent, including not associating hardware devices with FISMA systems. In addition, while most agencies implemented requirements for managing software, all of them inconsistently implemented requirements for managing configuration settings. Moreover, poor data quality resulting from these implementation shortcomings diminished the usefulness of agency dashboards to support security-related decision making. Until agencies fully and effectively implement CDM program capabilities, including the foundational capability of managing hardware on their networks, agency and federal dashboards will not accurately reflect agencies’ security posture. Part of the reason that agencies have not fully implemented key CDM requirements is that DHS had not ensured integrators had addressed shortcomings with integrators’ CDM solutions for managing hardware and vulnerabilities. Although DHS has taken various actions to address challenges identified by agencies, without further assistance from DHS in helping agencies overcome implementation shortcomings, the program—costing billions of dollars— will likely not fully achieve expected benefits.
    • The chairs and ranking members of the Senate Homeland Security & Governmental Affairs and House Homeland Security Committees, the chair of the House Oversight and Reform Committee, and other Members requested that the GAO study and report on this issue.
  • Google and the Australian Competition and Consumer Commission (ACCC) have exchanged public letters, fighting over the latter’s proposal to ensure that media companies are compensated for articles and content the former uses.
    • In an Open Letter to Australians, Google claimed:
      • A proposed law, the News Media Bargaining Code, would force us to provide you with a dramatically worse Google Search and YouTube, could lead to your data being handed over to big news businesses, and would put the free services you use at risk in Australia.
      • You’ve always relied on Google Search and YouTube to show you what’s most relevant and helpful to you. We could no longer guarantee that under this law. The law would force us to give an unfair advantage to one group of businesses – news media businesses – over everyone else who has a website, YouTube channel or small business. News media businesses alone would be given information that would help them artificially inflate their ranking over everyone else, even when someone else provides a better result. We’ve always treated all website owners fairly when it comes to information we share about ranking. The proposed changes are not fair and they mean that Google Search results and YouTube will be worse for you.
      • You trust us with your data and our job is to keep it safe. Under this law, Google has to tell news media businesses “how they can gain access” to data about your use of our products. There’s no way of knowing if any data handed over would be protected, or how it might be used by news media businesses.
      • We deeply believe in the importance of news to society. We partner closely with Australian news media businesses — we already pay them millions of dollars and send them billions of free clicks every year. We’ve offered to pay more to license content. But rather than encouraging these types of partnerships, the law is set up to give big media companies special treatment and to encourage them to make enormous and unreasonable demands that would put our free services at risk.
    • In its response, the ACCC asserted:
      • The open letter published by Google today contains misinformation about the draft news media bargaining code which the ACCC would like to address. 
      • Google will not be required to charge Australians for the use of its free services such as Google Search and YouTube, unless it chooses to do so.
      • Google will not be required to share any additional user data with Australian news businesses unless it chooses to do so.
      • The draft code will allow Australian news businesses to negotiate for fair payment for their journalists’ work that is included on Google services.
      • This will address a significant bargaining power imbalance between Australian news media businesses and Google and Facebook.
    • Late last month, the ACCC released for public consultation a draft of “a mandatory code of conduct to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.” The government in Canberra had asked the ACCC to draft this code earlier this year after talks broke down between the Australian Treasury and the companies.
    • The ACCC explained
      • The code would commence following the introduction and passage of relevant legislation in the Australian Parliament. The ACCC released an exposure draft of this legislation on 31 July 2020, with consultation on the draft due to conclude on 28 August 2020. Final legislation is expected to be introduced to Parliament shortly after conclusion of this consultation process.
    • This is not the ACCC’s first interaction with the companies. Late last year, the ACCC announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses,” according to the agency’s press release. In its initial filing, the ACCC claimed that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location Services were unaware that their location information was still being collected and used by Google, for it was not readily apparent that Web & App Activity also needed to be switched off.
    • A year ago, the ACCC released its final report in its “Digital Platforms Inquiry” that “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.”
  • The United States Coast Guard is asking for information on “the introduction and development of automated and autonomous commercial vessels and vessel technologies subject to U.S. jurisdiction, on U.S. flagged commercial vessels, and in U.S. port facilities.” The Coast Guard is particularly interested in the “barriers to the development of autonomous vessels.” The agency stated
    • On February 11, 2019, the President issued Executive Order (E.O.) 13859, “Maintaining American Leadership in Artificial Intelligence.” The executive order announced the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in artificial intelligence (AI) research and development and deployment through a coordinated Federal Government strategy. Automation is a broad category that may or may not incorporate many forms of technology, one of which is AI. This request for information (RFI) will support the Coast Guard’s efforts to accomplish its mission consistent with the policies and strategies articulated in E.O. 13859. Input received from this RFI will allow the Coast Guard to better understand, among other things, the intersection between AI and automated or autonomous technologies aboard commercial vessels, and to better fulfill its mission of ensuring our Nation’s maritime safety, security, and stewardship.

Further Reading

  • “‘Boring and awkward’: students voice concern as colleges plan to reopen – through Minecraft” By Kari Paul – The Guardian. A handful of universities in the United States (U.S.) are offering students access to customized versions of Minecraft, an online game that allows players to build worlds. The aim seems to be to allow students to socialize online in replicas of their campuses. The students interviewed for this story seemed underwhelmed by the effort, however.
  • “When regulators fail to rein in Big Tech, some turn to antitrust litigation” – By Reed Albergotti and Jay Greene – The Washington Post. This article places Epic Games’ suits against Apple and Google into the larger context of companies availing themselves of the right to bring antitrust suits in the United States. However, for a number of reasons, these suits have not often succeeded, and one legal commentator opined that judges tend to see these actions as sour grapes. That said, revelations turned up during discovery can lead antitrust regulators to jump into proceedings, giving such suits additional heft.
  • “What Can America Learn from Europe About Regulating Big Tech?” By Nick Romeo – The New Yorker. A former Member of the European Parliament from the Netherlands, Marietje Schaake, is now a professor at Stanford and is trying to offer a new path on regulating big tech that would rein in its excesses and externalities while allowing new technologies and competition to flourish. The question is whether there is a wide enough appetite for her vision in the European Union, let alone the United States.
  • “Facebook employees internally question policy after India content controversy – sources, memos” By Aditya Kalra and Munsif Vengattil – Reuters. The tech giant is also facing an employee revolt in the world’s largest democracy. Much like in the United States and elsewhere, employees are pressing leadership to explain why they are seemingly not applying the platform’s rules on false and harmful material to hateful speech by leaders. In this case, it was posts by a member of the ruling Bharatiya Janata Party (BJP) calling Indian Muslims traitors. And, in much the same way accusations have been leveled at a top Facebook lobbyist in Washington who has allegedly interceded on behalf of Republicans and far-right interests regarding questionable material, a lobbyist in New Delhi has done the same for the BJP.
  • “List of 2020 election meddlers includes Cuba, Saudi Arabia and North Korea, US intelligence official says” By Shannon Vavra – Cyberscoop. At a virtual event this week, National Counterintelligence and Security Center (NCSC) Director William Evanina claimed that even more nations are trying to disrupt the United States election this fall, including Cuba, Saudi Arabia, and North Korea. Evanina cautioned, however, that the capabilities of these nations do not rise to the level of the Russian Federation, the People’s Republic of China (PRC), and Iran. Earlier this month, Evanina issued an update to his late July statement “100 Days Until Election 2020” by “sharing additional information with the public on the intentions and activities of our adversaries with respect to the 2020 election…[that] is being released for the purpose of better informing Americans so they can play a critical role in safeguarding our election.” Evanina offered more detail on the three nations identified as most active in and capable of interfering in the November election: the Russian Federation, the PRC, and Iran. This additional detail may well have been provided given the pressure from Democrats in Congress to do just that. Members like Speaker of the House Nancy Pelosi (D-CA) argued that Evanina was not giving an accurate picture of foreign nations’ actions to influence the outcome and perception of the 2020 election. Republicans in Congress pushed back, claiming Democrats were seeking to politicize the classified briefings given by the Intelligence Community (IC).

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Silentpilot from Pixabay
