Subscribe to my newsletter, The Wavelength, if you want updates on global technology developments four times a week.
Other Developments

- The Senate confirmed former National Security Agency Deputy Director Chris Inglis as the United States’ (U.S.) first National Cyber Director. The position was created on the advice of the Cyberspace Solarium Commission (CSC) and established in the FY 2021 National Defense Authorization Act. Inglis will lead an office inside the White House. In its FY 2022 budget request, the Biden Administration asked Congress for $15 million and 25 Full-Time Equivalents (FTE) to stand up the Office of the National Cyber Director. However, when it recommended that Congress create the position, the CSC called for at least 50 FTEs in this office. Congress may appropriate funds for, and direct the creation of, a larger office than the administration apparently wants. In an April press release accompanying his nomination, the White House summarized Inglis’ background and experience:
- Chris Inglis currently serves as the U.S. Naval Academy Looker Distinguished Visiting Professor for Cyber Studies, as a managing director at Paladin Capital, as a member of the boards of several public and private corporations, and as a Commissioner on the U.S. Cyberspace Solarium Commission. He retired from the Department of Defense in January 2014 after 41 years of federal service, including 28 years at the National Security Agency and seven and a half years as its Deputy Director. He served as a member of the Department of Defense Science Board and as a National Intelligence University trustee until early 2021.
- A 1976 graduate of the U.S. Air Force Academy, Inglis holds advanced degrees in engineering and computer science from Columbia University, Johns Hopkins University, and the George Washington University. Inglis’ military career includes 30 years of service in the US Air Force and Air National Guard — from which he retired as a command pilot at the rank of Brigadier General. He and his wife Anna have three grown children and reside in Annapolis, MD.
- The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a joint opinion on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). In their press release, the EDPB and EDPS stated:
- The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal.
- The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.
- While the EDPB and the EDPS welcome the risk-based approach underpinning the Proposal, they consider that the concept of “risk to fundamental rights” should be aligned with the EU data protection framework. The EDPB and the EDPS recommend that societal risks for groups of individuals should also be assessed and mitigated. Moreover, they agree with the Proposal that the classification of an AI system as high-risk does not necessarily mean that it is lawful per se and can be deployed by the user as such. The EDPB and the EDPS also consider that compliance with legal obligations arising from Union legislation – including on personal data protection – should be a precondition for entering the European market as CE marked product.
- Taking into account the extremely high risks posed by remote biometric identification of individuals in publicly accessible spaces, the EDPB and the EDPS call for a general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context. Similarly, the EDPB and EDPS recommend a ban on AI systems using biometrics to categorize individuals into clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights. Furthermore, the EDPB and the EDPS consider that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited, except for very specified cases, such as some health purposes, where the patient emotion recognition is important, and that the use of AI for any type of social scoring should be prohibited.
- The EDPB and the EDPS further welcome the fact that the Proposal designates the EDPS as the competent authority and the market surveillance authority for the supervision of the Union institutions, agencies and bodies. However, the role and tasks of the EDPS should be further clarified, specifically when it comes to its role as market surveillance authority.
- The EDPB and EDPS recall that data protection authorities (DPAs) are already enforcing the GDPR and the LED on AI systems involving personal data, in order to guarantee the protection of fundamental rights and more specifically the right to data protection. As a result, the designation of DPAs as the national supervisory authorities would ensure a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU. Consequently, the EDPB and the EDPS consider that, to ensure a smooth application of this new regulation, DPAs should be designated as national supervisory authorities pursuant to Article 59 of the Proposal.
- Finally, the EDPB and EDPS question the designation of a predominant role to the European Commission in the “European Artificial Intelligence Board” (EAIB), as this would conflict with the need for an AI European body independent from any political influence. To ensure its independence, the Proposal should give more autonomy to the EAIB and ensure it can act on its own initiative.
- The United Kingdom’s (UK) Taskforce on Innovation, Growth and Regulatory Reform (TIGRR) “reported its recommendations to the Prime Minister on how the UK can reshape its approach to regulation and seize new opportunities from Brexit with its newfound regulatory freedom.” The body called for the government in London to enact the UK Framework of Citizen Data Rights. TIGRR asserted:
- We now have the opportunity to reform UK General Data Protection Regulation 2018 (GDPR) to create an even more innovative and cutting-edge business landscape and to attract the top start-ups and leaders in tech.
- GDPR is prescriptive and inflexible, and it is particularly onerous for smaller companies and charities to operate under. It is challenging for organisations to implement the necessary processes to manage the sheer amounts of data that are collected, stored and need to be tracked from creation to deletion. Compliance obligations should be more proportionate, with fewer obligations and lower compliance burdens on charities, SMEs and voluntary organisations.
- GDPR is centred around the principle of citizen-owned data and organisations generally needing a person’s ‘consent’ to process their data. There are alternative ways to process data that do not require consent, but these are not well defined or understood, causing confusion amongst data processors and controllers. The overall effect is that growth and innovation are stifled. GDPR is not delivering for the consumer either. Tech giants oblige consumers to ‘consent’ to use their platforms before selling and profiting from the data collected, with the illusion that the consumer has control.
- Any reform of GDPR must of course continue to ensure that privacy is protected. Data sharing can deliver important benefits in healthcare and other public services as well as in innovative industries in the private sector. But this must be balanced with appropriate safeguards.
- Extensive work is already underway in government on data. As the Secretary of State for Digital, Culture, Media and Sport set out in the DCMS National Data Strategy, the UK is a leading digital nation. The data market in the UK (i.e. money made from products or services derived from digitised data) is the largest in Europe.
- Senate Majority Leader Chuck Schumer (D-NY) stated he is asking some Senate committee chairs to review the recent spate of high-profile cyber attacks, a review that could lead to legislation changing how the United States (U.S.) addresses these and other cyber risks. Schumer’s announcement came at roughly the same time that Senate Intelligence Committee Chair Mark Warner (D-VA) and Ranking Member Marco Rubio (R-FL), along with Senator Susan Collins (R-ME), floated a draft bill, the “Cyber Incident Notification Act of 2021,” which would change U.S. law to require critical cyber infrastructure owners and operators and many federal contractors to report actual or potential cybersecurity intrusions to the Cybersecurity and Infrastructure Security Agency (CISA) within 24 hours of detection, unless they have a more stringent reporting requirement to another federal agency. Schumer stated:
- While the Attorney General has announced an intensified effort to combat this scourge of ransomware attacks, we in Congress have a responsibility to conduct oversight and determine whether our government needs additional authorities and resources to take the fight to cyber criminals and foreign intelligence services.
- Congress must ensure that federal agencies, like the Cybersecurity and Infrastructure Security Agency, have the necessary resources to take on this growing threat and support state and local governments under assault. So earlier this week I called for Congress to increase CISA’s budget by $500 million to fight this growing threat.
- And today, I am asking Chairman Gary Peters of our Homeland Security Committee, and the other relevant committee chairs, to begin a government-wide review of these attacks and determine what legislation may be needed to counter the threat of cyber crime and bring the fight to the cyber criminals.
- The National Science Foundation and the Office of Science and Technology Policy launched the National Artificial Intelligence Research Resource Task Force. The agencies explained:
- As directed by Congress in the National AI Initiative Act of 2020, the Task Force will serve as a Federal advisory committee to help create and implement a blueprint for the National AI Research Resource (NAIRR) — a shared research infrastructure providing AI researchers and students across all scientific disciplines with access to computational resources, high-quality data, educational tools, and user support.
- The Task Force will provide recommendations for establishing and sustaining the NAIRR, including technical capabilities, governance, administration, and assessment, as well as requirements for security, privacy, civil rights, and civil liberties. The Task Force will submit two reports to Congress that together will present a comprehensive strategy and implementation plan — an interim report in May 2022 and final report in November 2022.
- Public input on the vision for and implementation of the NAIRR will be sought, including through a forthcoming request for information to be posted to the Federal Register. For more information about the National AI Initiative and the NAIRR, please visit AI.gov.
- In addition, in the coming weeks, an AI advisory committee – the National AI Advisory Committee – will be established. It will provide recommendations and advice on a wide array of AI topics, including on the implications of AI on the future of learning and workers; research and development; economic competitiveness; societal, ethical, legal, safety, and security matters; commercial application; and opportunities for international engagement. A Federal Register notice will call for nominations of experts who will bring a broad range of perspectives in developing recommendations on these issues, including perspectives from labor, education, research, startup businesses and more.
- The Government Accountability Office (GAO) again stressed the problems with cybersecurity and software the Department of Defense (DOD) and its contractors are struggling to address in the development and procurement of major weapons systems. In its annual assessment of these programs, the GAO stated:
- Programs continued this year to identify software development factors, including meeting cybersecurity needs, as risks to efforts to develop and field capabilities to the warfighter, consistent with our findings from last year’s assessment. DOD made efforts to improve in these areas, such as working to update its software and cybersecurity instructions and provide guidance on Agile software development practices. However, we found that the majority of programs we surveyed continue to face challenges in executing modern software development practices and many programs we surveyed are challenged in implementing iterative and early cybersecurity assessments.
- In a blog post, the United Kingdom’s (UK) Information Commissioner Elizabeth Denham discussed her Commissioner’s Opinion “on the use of live facial recognition (LFR) in public places by private companies and public organisations…[that] explains how data protection and people’s privacy must be at the heart of any decisions to deploy LFR.” She said the opinion “explains how the law sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather.” Denham continued:
- The Opinion is rooted in law and informed in part by six ICO investigations into the use, testing or planned deployment of LFR systems, as well as our assessment of other proposals that organisations have sent to us. Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.
- It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.
- With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised.
- In the US, people did not trust the technology. Some cities banned its use in certain contexts and some major companies have paused facial recognition services until there are clearer rules. Without trust, the benefits the technology may offer are lost.
- And, if used properly, there may be benefits. LFR has the potential to do significant good – helping in an emergency search for a missing child, for example.
- Today’s Opinion sets out the rules of engagement. It builds on our Opinion into the use of LFR by police forces and also sets a high threshold for its use.
- Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work.
- These are important standards that require robust assessment.
- Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.
- My office will continue to focus on technologies that have the potential to be privacy invasive, working to support innovation while protecting the public. Where necessary we will tackle poor compliance with the law.
- We will work with organisations to ensure that the use of LFR is lawful, and that a fair balance is struck between their own purposes and the interests and rights of the public. We will also engage with Government, regulators and industry, as well as international colleagues to make sure data protection and innovation can continue to work hand in hand.
- The Organisation for Economic Co-operation and Development (OECD) issued its “first report on the state of implementation of the policy recommendations to governments contained in the OECD Principles on Artificial Intelligence adopted in May 2019.” The OECD stated that “[t]his report presents a conceptual framework, provides findings, identifies good practices, and examines emerging trends in AI policy, particularly on how countries are implementing the five recommendations to policy makers contained in the OECD AI Principles…[and] builds both on the expert input provided at meetings of the OECD.AI Network of Experts working group on national AI policies that took place online from February 2020 to April 2021 and on the EC-OECD database of national AI strategies and policies.” The OECD added that “[a]s policy makers and AI actors around the world move from principles to implementation, this report aims to inform the implementation of the OECD AI Principles.” The OECD offered the following recommendations:
- The development of national policies that focus specifically on AI is a relatively new phenomenon. This report identifies challenges and good practices for implementing the OECD AI Principles’ (OECD, 2019) five recommendations to governments:
- Invest in AI R&D;
- Foster a digital ecosystem for AI;
- Shape an enabling policy environment for AI;
- Build human capacity and prepare for labour market transformation; and
- Foster international co-operation for trustworthy AI.
- The report gives practical advice for implementing the OECD AI Principles throughout each phase of the AI policy cycle:
- Policy design: advice for national AI governance policies and approaches;
- Policy implementation: national implementation examples to illustrate lessons learned to date;
- Policy intelligence: evaluation methods and monitoring exercises; and
- An overview of AI actors and initiatives at the international level with approaches to international and multi-stakeholder co-operation on AI policy.
- Senators Michael Bennet (D-CO), Angus King (I-ME), and Rob Portman (R-OH) “introduced major bipartisan legislation to provide $40 billion in flexible funding to states, Tribal governments, U.S. territories, and the District of Columbia to bridge the digital divide.” They claimed that the “Broadband Reform and Investment to Drive Growth in the Economy (BRIDGE) Act of 2021” “would provide states with the resources and flexibility to deploy ‘future-proof’ networks able to meet communities’ needs in the 21st century, and to support local initiatives to promote broadband affordability, adoption, and inclusion, among other efforts.” Bennet, King, and Portman made available a section-by-section summary and a one-page summary of the bill. They claimed the BRIDGE Act would:
- Provide $40 billion to States, Tribal Governments, and U.S. Territories to ensure all Americans have access to affordable, high-speed broadband.
- Prioritize unserved, underserved, and high-cost areas with investments in “future proof” networks that will meet the long-term needs of communities while supporting efforts to promote broadband affordability, adoption, and digital inclusion.
- Encourage gigabit-level internet wherever possible while raising the minimum speeds for new broadband networks to at least 100/100 Mbps, with flexibility for areas where this is technologically or financially impracticable.
- Emphasize affordability and inclusion by requiring at least one affordable option.
- Increase choice and competition by empowering local and state decision-making, lifting bans against municipal broadband networks, and allowing more entities to compete for funding.
Further Reading

- “Apple Did Business With A Wind Energy Company That Has Close Ties To Xinjiang” By Megha Rajagopalan — BuzzFeed News. Apple and Warren Buffett’s Berkshire Hathaway have done business with a Chinese wind energy giant linked to controversial government and labor programs in Xinjiang, where the US and other countries say China is carrying out a genocide of Muslim minorities. Xinjiang Goldwind Science & Technology, China’s largest wind turbine maker, on at least one occasion entered talks to receive “labor exports” from the Hotan prefecture in Xinjiang to a facility hundreds of miles away, new research from the Tech Transparency Project has found. Hotan officials traveled to a Goldwind plant to “coordinate” the labor exports, as part of an effort to strengthen the “organizational and disciplinary education” of workers, according to an archived local government media report uncovered by the Tech Transparency Project.
- “Record labels sue another ISP, demanding mass disconnections of Internet users” By Jon Brodkin — Ars Technica. The major record labels yesterday filed another lawsuit demanding that an Internet service provider terminate many more subscribers for alleged copyright violations. Universal, Sony, and Warner sued Frontier Communications in US District Court for the Southern District of New York, alleging that the DSL and fiber ISP with 3.5 million subscribers “received hundreds of thousands of copyright infringement notices from copyright owners” but “provided known repeat infringers with continued access to and use of its network and failed to terminate the accounts of, or otherwise take any meaningful action against, those subscribers. In reality, Frontier operated its network as an attractive tool and safe haven for infringement.” Frontier “chose not to act on those notices and address the rampant infringement on its network,” the companies claimed.
- “China Still Buys American DNA Equipment for Xinjiang Despite Blocks” By Sui-Lee Wee — The New York Times. The police in the Chinese region of Xinjiang are still buying hundreds of thousands of dollars’ worth of American DNA equipment despite warnings from the U.S. government that the sale of such technologies could be used to enable human rights abuses in the region. The U.S. government has tried to prevent the sale of DNA sequencers, test kits and other products made by American firms to the police in Xinjiang for years, amid concerns raised by scientists and human rights groups that the authorities could use the tools to build systems to track people. In 2019, the Trump administration banned the sale of American goods to most law enforcement agencies in Xinjiang unless the companies received a license. And in 2020, Washington warned that companies selling biometric technology and other products to Xinjiang should be aware of the “reputational, economic and legal risks.”
- “Hunting Leaks, Trump Officials Focused on Democrats in Congress” By Katie Benner, Nicholas Fandos, Michael S. Schmidt and Adam Goldman — The New York Times. As the Justice Department investigated who was behind leaks of classified information early in the Trump administration, it took a highly unusual step: Prosecutors subpoenaed Apple for data from the accounts of at least two Democrats on the House Intelligence Committee, aides and family members. One was a minor. All told, the records of at least a dozen people tied to the committee were seized in 2017 and early 2018, including those of Representative Adam B. Schiff of California, then the panel’s top Democrat and now its chairman, according to committee officials and two other people briefed on the inquiry. Representative Eric Swalwell of California said in an interview Thursday night that he had also been notified that his data had been subpoenaed.
- “TikTok just gave itself permission to collect biometric data on US users, including ‘faceprints and voiceprints’” By Sarah Perez — TechCrunch. A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.
- “U.S. Senate to probe whether legislation needed to combat cyber attacks” — Reuters. U.S. Senate Majority Leader Chuck Schumer on Thursday said he is initiating a review of recent high-profile cyber attacks on governments and businesses to find out whether a legislative response is needed.
- “The Ruthless Hackers Behind Ransomware Attacks on U.S. Hospitals: ‘They Do Not Care’” By Kevin Poulsen and Melanie Evans — The Wall Street Journal. A ransomware attack on a national hospital chain nearly brought Las Vegas hospitals to their knees. Another attack in Oregon abruptly shut down alerts tied to patient monitors tracking vital signs. In New York, one county’s only trauma center briefly closed to ambulances, with the nearest alternative 90 miles away. Multiple attacks were carried out in recent months against U.S. hospitals, suspending some surgeries, delaying medical care and costing hospitals millions of dollars.
- “Pipeline Investigation Upends Idea That Bitcoin Is Untraceable” By Nicole Perlroth, Erin Griffith and Katie Benner — The New York Times. When Bitcoin burst onto the scene in 2009, fans heralded the cryptocurrency as a secure, decentralized and anonymous way to conduct transactions outside the traditional financial system. Criminals, often operating in hidden reaches of the internet, flocked to Bitcoin to do illicit business without revealing their names or locations. The digital currency quickly became as popular with drug dealers and tax evaders as it was with contrarian libertarians.
- “JBS Foods pays $14.2 million ransom to end cyber attack on its global operations” By David Claughton and Nikolai Beilharz — ABC News. Global meat processing company JBS Foods has confirmed that it paid the equivalent of $US11 million ($14.2 million) to a criminal gang to end a five-day cyber attack that halted its operations around the world last week, including Australia. The company said it paid the money to mitigate any unforeseen issues related to the attack and ensure no data was exfiltrated.
- “China’s New Power Play: More Control of Tech Companies’ Troves of Data” By Lingling Wei — The Wall Street Journal. Shortly after rising to power in late 2012, Xi Jinping made his first company visit in his new job as China’s Communist Party chief, to Tencent Holdings Ltd. There, he raised a topic that has become both an opportunity and a challenge for his rule: the vast troves of personal data being gathered by the country’s technology companies. Mr. Xi complimented Tencent’s founder, Pony Ma, on the way the company was accumulating information from millions of users, and harnessing that data to drive innovation. He also suggested that data would be useful to Beijing.
- “‘This is what bad looks like’: Major company ignored Australia’s cyber spy agency after hack” By Anthony Galloway — Sydney Morning Herald. A major company in charge of critical infrastructure refused to comply with Australia’s cyber spy agency for weeks after it was hit by a significant cyber attack. Australian Signals Directorate director-general Rachel Noble has revealed her agency found out about the cyber attack through media reports despite the incident having a “national impact on our country”.
- “Anti-vaxxers are weaponizing Yelp to punish bars that require vaccine proof” By Tanya Basu — MIT Technology Review. On the first hot weekend of the summer, Richard Knapp put up a sign outside Mother’s Ruin, a bar tucked in Manhattan’s SoHo neighborhood. It had two arrows: one pointing vaccinated people indoors, another pointing unvaccinated people outdoors. The Instagram post showing the sign (above) quickly went viral among European anti-vaxxers on Reddit. “We started receiving hate mail through the Google portal,” Knapp says, estimating he’d received about a “few dozen” emails: “I’ve been called a Nazi and a communist in the same sentence. People hope that our bar burns down. It’s a name and shame campaign.” It wasn’t just the emails. Soon, his bar started receiving multiple one-star reviews on Yelp and Google Reviews from accounts as far away as Europe.
- “In Leak Investigation, Tech Giants Are Caught Between Courts and Customers” By Jack Nicas, Daisuke Wakabayashi and Katie Benner — The New York Times. On Feb. 6, 2018, Apple received a grand jury subpoena for the names and phone records connected to 109 email addresses and phone numbers. It was one of the more than 250 data requests that the company received on average from U.S. law enforcement each week at the time. An Apple paralegal complied and provided the information. This year, a gag order on the subpoena expired. Apple said it alerted the people who were the subjects of the subpoena, just as it does with dozens of customers each day.
- “China’s New Data Law Gives Xi the Power to Shut Down Tech Firms” — Bloomberg News. China’s new data security regime gives President Xi Jinping the power to shut down or fine tech companies as part of his drive to wrest control of vast reams of data held by giants like Alibaba Group Holding Ltd. and Tencent Holdings Ltd. Firms found mishandling “core state data” can be forced to cease operations, have their operating licenses revoked or fined up to 10 million yuan ($1.6 million) under a law passed Thursday by the Asian nation’s top legislative body.
Coming Events

- On 24 June, the House Energy and Commerce Committee’s Health Subcommittee will hold a hearing titled “Empowered by Data: Legislation to Advance Equity and Public Health” that will likely include discussion of the following bills:
- H.R. 379, the “Improving Social Determinants of Health Act of 2021”
- H.R. 666, the “Anti-Racism in Public Health Act of 2021”
- H.R. 778, the “Secure Data and Privacy for Contact Tracing Act of 2021”
- H.R. 791, the “Tracking COVID–19 Variants Act”
- H.R. 831, the “Health Standards To Advance Transparency, Integrity, Science, Technology Infrastructure, and Confidential Statistics Act of 2021” or the “Health STATISTICS Act of 2021”
- H.R. 925, the “Data to Save Moms Act”
- H.R. 943, the “Social Determinants for Moms Act”
- H.R. 976, the “Ensuring Transparent Honest Information on COVID–19 Act” or the “ETHIC Act”
- H.R. 2125, the “Quit Because of COVID–19 Act”
- H.R. 2503, the “Social Determinants Accelerator Act of 2021”
- H.R. 3894, the “Collecting and Analyzing Resources Integral and Necessary for Guidance for Social Determinants of Health Act of 2021” or the “CARING for Social Determinants of Health Act of 2021”
- H.R. 3969, to amend title XXVII of the Public Health Service Act to include activities to address social determinants of health in the calculation of medical loss ratios
- H.R. ____, to require the Comptroller General of the United States to submit to Congress a report on actions taken by the Secretary of Health and Human Services to address social determinants of health
- The House Appropriations Committee’s Financial Services and General Government Subcommittee will mark up its FY 2022 appropriations bill on 24 June, which includes funding and programmatic direction for a number of agencies including the Federal Communications Commission and Federal Trade Commission.
- On 24 June, the House Small Business Committee’s Oversight, Investigations, and Regulations Subcommittee will hold a hearing titled “CMMC Implementation: What It Means for Small Businesses.” The subcommittee explained:
- The Cybersecurity Maturity Model Certification (CMMC) is the Department of Defense’s latest initiative to increase cybersecurity preparedness across the defense industrial base. The hearing will provide Members the opportunity to learn more about this initiative, its implementation, and the compliance challenges it poses for small businesses.
- The following witnesses will appear:
- Mr. Jonathan T. Williams, Partner, PilieroMazza PLLC
- Mr. Scott Singer, President, CyberNINES
- Ms. Tina Wilson, Chief Executive Officer, T47 International, Inc.
- Mr. Michael Dunbar, President, Ryzhka International LLC
- On 27 July, the Federal Trade Commission (FTC) will hold PrivacyCon 2021.
© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.
Photo by Scott Rodgerson on Unsplash
Image by Gerd Altmann from Pixabay