Sponsors Take A New Run At Privacy Law in Washington State

Perhaps the third time is the charm? Legislators seek to pass a privacy law in Washington state for the third year in a row.

A group of Washington state Senators has introduced a slightly altered version of a privacy bill they floated last summer. A committee of jurisdiction will hold a hearing on SB 5062 on 14 January 2021. Of course, this marks the third year in a row legislators have tried to enact the Washington Privacy Act. The new bill tracks closely with the two bills the Washington Senate and House produced last year that lawmakers could not ultimately reconcile. However, there are no provisions on facial recognition technology, which was largely responsible for sinking a privacy bill in Washington state two years ago. The sponsors have also taken the unusual step of appending language covering the collection and processing of personal data to combat infectious diseases like COVID-19.

I analyzed the discussion draft that Washington State Senator Reuven Carlyle (D-Seattle) released over the summer, and so I will not recite everything about the new bill. It should suffice to highlight the differences between the discussion draft and the introduced legislation. Big picture, the bill still uses the concepts of data controllers and processors most famously enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). Like other privacy bills, generally, people in Washington state would not need to consent before an entity could collect and process their information. People would be able to opt out of some activities, but most data collection and processing could still occur as it presently does.

The date on which the bill would take effect was pushed back from 120 days in the discussion draft to 31 July 2022 in the introduced bill. Unlike the discussion draft, SB 5062 would cover non-profits, institutions of higher education, airlines, and others, but the effective date for these entities would be 31 July 2026. The right of a person to access personal data a controller is processing is narrowed slightly: a person could access the categories of personal data being processed rather than the personal data itself. The time controllers would have to respond to a certain class of requests would be decreased from 45 to 15 days. This class includes requests to opt out of targeted advertising, the sale of personal data, and any profiling in furtherance of decisions with legal effects. Section 106’s requirement that processors have reasonable security measures has been massaged, rephrased, and possibly weakened a bit.

One of the activities controllers and processors could undertake without meeting the requirements of the act was removed. Notably, they would no longer be able to “conduct internal research solely to improve or repair products, services, or technology.” There is also a clarification that using any of the exemptions in Section 110 does not make an entity a controller for purposes of the bill. There is a new requirement that the State Office of Privacy and Data Protection examine current technology that allows for mass or global opt out or opt in and then report to the legislature.

Finally, two of the Congressional stakeholders on privacy and data security hail from Washington state, and consideration and possible passage of a state law may limit their latitude on a federal bill they could support. Senator Maria Cantwell (D-WA) and Representative Cathy McMorris Rodgers (R-WA), the ranking members of the Senate Commerce, Science, and Transportation Committee and the House Energy and Commerce Committee respectively, are expected to be involved in drafting their committees’ privacy bills, and a Washington state statute may affect their positions in much the same way the “California Consumer Privacy Act” (CCPA) (AB 375) has informed a number of California Members’ positions on privacy legislation, especially with respect to bills seen as weaker than the CCPA.


Privacy Shield Hearing

The focus was on how the U.S. and EU can reach agreement on an arrangement that will not be struck down by the EU’s highest court.

Last week, the Senate Commerce, Science, and Transportation Committee held a hearing on the now invalidated European Union (EU)-United States (U.S.) Privacy Shield, a mechanism that allowed companies to transfer the personal data of EU residents to the U.S. The EU’s highest court struck down the adequacy decision that underpinned the system on the basis of U.S. surveillance activities and a lack of redress that violated EU law. This is the second time in a decade the EU’s top court has invalidated a transfer arrangement, the first being the Safe Harbor system. Given the estimated billions, or even trillions, of dollars in value realized from data flows between the EU and U.S., there is keen interest on both sides of the Atlantic in finding a legal path forward. However, absent significant curtailment of U.S. surveillance and/or a significant expansion of the means by which EU nationals could have violations of their rights rectified, it would appear a third agreement may not withstand the inevitable legal challenges. Moreover, there are questions as to the legality of other transfer tools in light of the Court of Justice of the European Union’s (CJEU) decision in the case known as Schrems II, and the legality of some Standard Contractual Clauses (SCC) and Binding Corporate Rules (BCR) may soon be found wanting, too.

Consequently, the striking down of Privacy Shield may provide additional impetus for Congress and the next Administration to reach a deal on privacy, and a legislative fix, or some portion thereof, could be attached to federal privacy legislation. Moreover, the lapsed reauthorization of some Foreign Intelligence Surveillance Act (FISA) authorities may be another legislative opportunity for the U.S. to craft an approach amenable to the EU in order to either obtain an adequacy decision or a successor agreement to Privacy Shield.

Chair Roger Wicker (R-MS) approached the issue from the perspective of international trade and the economic benefit accruing to businesses on both sides of the Atlantic. His opening remarks pertained less to the privacy and surveillance aspects of the CJEU’s ruling. Wicker made the case that the EU misunderstands redress rights in the U.S., which he considers more than adequate, and that the U.S. surveillance regime is similar to those of some EU nations. One wonders whether the CJEU would be inclined to agree. Nonetheless, Wicker expressed hope that the EU and U.S. can reach “a durable and lasting data transfer framework…that provides meaningful data protections to consumers, sustains the free flow of information across the Atlantic, and encourages continued economic and strategic partnership with our European allies – a tall order but an essential order.” He worried about the effect of the CJEU’s ruling on SCCs. Wicker made the case that the EU and U.S. share democratic values and hinted that the ongoing talks in the committee to reach a federal data privacy law might include augmented redress rights that might satisfy the CJEU.

Ranking Member Maria Cantwell (D-WA) spoke very broadly about a range of issues related to data transfers and privacy. She stressed the importance of data flows in the context of larger trade relations. Cantwell also stressed the shared values between the U.S. and the EU and her hope that the two work “together on these very important national concerns, trade and technology, so that we can continue to improve economic opportunities and avoid moves towards protectionism.” She also called for federal privacy legislation but hinted that states should still be able to regulate privacy, suggesting her commitment to having a federal law be a floor for state laws. Cantwell also asserted that bulk surveillance, the likes of which the National Security Agency has engaged in, may simply not be legal under EU law.

Deputy Assistant Secretary of Commerce for Services James Sullivan blurred the issues presented by Schrems II much like Cantwell did. The CJEU’s decision, which focused on U.S. surveillance practices and the lack of meaningful recourse in the U.S. if an EU resident’s rights were violated, was merged into a call for like-minded nations to unite against authoritarian nations. Sullivan distinguished between U.S. surveillance and the surveillance conducted by the People’s Republic of China (PRC) (without naming the nation) and other regimes as if this should satisfy the EU as to the legality and propriety of U.S. treatment of EU personal data. Sullivan stated:

  • The Schrems II decision has created enormous uncertainties for U.S. companies and the transatlantic economy at a particularly precarious time. Immediately upon issuance of the ruling, the 5,400 Privacy Shield participants and their business partners in the EU could no longer rely on the Framework as a lawful basis for transferring personal data from Europe to the United States. Because neither the Court nor European data protection authorities provided for any enforcement grace period, Privacy Shield companies were left with three choices: (1) risk facing potentially huge fines (of up to 4 percent of total global turnover in the preceding year) for violating GDPR, (2) withdraw from the European market, or (3) switch right away to another more expensive data transfer mechanism.
  • Unfortunately, because of the Court’s ruling in the Privacy Shield context that U.S. laws relating to government access to data do not confer adequate protections for EU personal data, the use of other mechanisms like SCCs and BCRs to transfer EU personal data to the United States is now in question as well.
  • The objective of any potential agreement between the United States and the European Commission to address Schrems II is to restore the continuity of transatlantic data flows and the Framework’s privacy protections by negotiating targeted enhancements to Privacy Shield that address the Court’s concerns in Schrems II. Any such enhancements must respect the U.S. Government’s security responsibilities to our citizens and allies.
  • To be clear, we expect that any enhancements to the Privacy Shield Framework would also cover transfers under all other EU-approved data transfer mechanisms like SCCs and BCRs as well.
  • The Schrems II decision has underscored the need for a broader discussion among likeminded democracies on the issue of government access to data. Especially as a result of the extensive U.S. surveillance reforms since 2015, the United States affords privacy protections relating to national security data access that are equivalent to or greater than those provided by many other democracies in Europe and elsewhere.
  • To minimize future disruptions to data transfers, we have engaged with the European Union and other democratic nations in a multilateral discussion to develop principles based on common practices for addressing how best to reconcile law enforcement and national security needs for data with protection of individual rights.
  • It is our view that democracies should come together to articulate shared principles regarding government access to personal data—to help make clear the distinction between democratic societies that respect civil liberties and the rule of law and authoritarian governments that engage in the unbridled collection of personal data to surveil, manipulate, and control their citizens and other individuals without regard to personal privacy and human rights. Such principles would allow us to work with like-minded partners in preserving and promoting a free and open Internet enabled by the seamless flow of data.
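Sullivan’s 4 percent figure is worth making concrete. Below is a back-of-the-envelope sketch using a purely hypothetical turnover number; only the 4 percent ceiling comes from the testimony:

```python
GDPR_MAX_FINE_RATE = 0.04  # up to 4% of total global turnover in the preceding year

def max_gdpr_fine(global_turnover_usd: float) -> float:
    """Theoretical ceiling on a GDPR fine for a company with the given turnover."""
    return global_turnover_usd * GDPR_MAX_FINE_RATE

turnover = 10_000_000_000  # hypothetical $10 billion in global turnover
print(f"Maximum exposure: ${max_gdpr_fine(turnover):,.0f}")  # $400,000,000
```

At that scale, even a single year of turnover puts nine-figure sums on the table, which is why the lack of an enforcement grace period alarmed the 5,400 Privacy Shield participants.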

Federal Trade Commission (FTC) Commissioner Noah Joshua Phillips stressed he was speaking in a personal capacity and not for the FTC. He extolled the virtues of the “free and open” internet model in the U.S. with the double implication that it is superior both to the models of nations like the PRC and Russia and to the EU model. Phillips seemed to advocate talking the EU into accepting that the U.S. privacy regime and civil liberties protections are stronger than any other nation’s. He also made the case, like other witnesses, that U.S. data privacy and protection regulation is more similar to the EU’s than to that of the PRC, Russia, and others. Phillips also sought to blur the issues and recast Privacy Shield in the context of the global struggle between democracies and authoritarian regimes. Phillips asserted:

  • First, we need to find a path forward after Schrems II, to permit transfers between the U.S. and EU. I want to recognize the efforts of U.S. and EU negotiators to find a replacement for Privacy Shield. While no doubt challenging, I have confidence in the good faith and commitment of public servants like Jim Sullivan, with whom I have the honor of appearing today, and our partners across the Atlantic. I have every hope and expectation that protecting cross-border data flows will be a priority for the incoming Administration, and I ask for your help in ensuring it is.
  • Second, we must actively engage with nations evaluating their approach to digital governance, something we at the FTC have done, to share and promote the benefits of a free and open Internet. There is an active conversation ongoing internationally, and at every opportunity—whether in public forums or via private assistance—we must ensure our voice and view is heard.
  • Third, we should be vocal in our defense of American values and policies. While we as Americans always look to improve our laws—and I commend the members of this committee on their important work on privacy legislation and other critical matters—we do not need to apologize to the world. When it comes to civil liberties or the enforcement of privacy laws, we are second to none. Indeed, in my view, the overall U.S. privacy framework—especially with the additional protections built into Privacy Shield—should certainly qualify as adequate under EU standards.
  • Fourth, as European leaders call to strengthen ties with the U.S., we should prioritize making our regimes compatible for the free flow of data. This extends to the data governance regimes of like-minded countries outside of Europe as well. Different nations will have different rules, but relatively minor differences need not impede mutually-beneficial commerce. We need not and should not purport to aim for a single, identical system of data governance. And we should remind our allies, and remind ourselves, that far more unites liberal democracies than divides us.
  • Fifth and finally, if we must draw lines, those lines should be drawn between allies with shared values—the U.S., Europe, Japan, Australia, and others—and those, like China and Russia, that offer a starkly different vision. I am certainly encouraged when I hear recognition of this distinction from Europe. European Data Protection Supervisor Wojciech Wiewiórowski recently noted that the U.S. is much closer to Europe than is China and that he has a preference for data being processed by countries that share values with Europe. Some here in the U.S. are even proposing agreements to solidify the relationships among technologically advanced democracies, an idea worth exploring in more detail.

Washington University Professor of Law Neil Richards stressed that the Schrems II decision spells out how the U.S. could achieve adequacy: reforming surveillance and providing meaningful redress for alleged privacy violations. Consequently, FISA would need to be rewritten and narrowed, and a means for EU residents to seek relief beyond the current Ombudsman system is needed, possibly a statutory right to sue. Moreover, he asserted strong data protection and privacy laws are needed, and some of the bills introduced in this Congress could fit the bill. Richards asserted:

In sum, the Schrems litigation is a creature of distrust, and while it has created problems for American law and commerce, it has also created a great opportunity. That opportunity lies before this Committee –the chance to regain American leadership in global privacy and data protection by passing a comprehensive law that provides appropriate safeguards, enforceable rights, and effective legal remedies for consumers. I believe that the way forward can not only safeguard the ability to share personal data across the Atlantic, but it can do so in a way that builds trust between the United States and our European trading partners and between American companies and their American and European customers. I believe that there is a way forward, but it requires us to recognize that strong, clear, trust-building rules are not hostile to business interest, that we need to push past the failed system of “notice and choice,” that we need to preserve effective consumer remedies and state-level regulatory innovation, and seriously consider a duty of loyalty. In that direction, I believe, lies not just consumer protection, but international cooperation and economic prosperity.

Georgia Tech University Professor Peter Swire explained that the current circumstances make the next Congress the best possibility in memory to enact privacy legislation because of the need for a Privacy Shield replacement, passage of the new California Privacy Rights Act (Proposition 24), and the Biden Administration’s likely support for such legislation. Swire made the following points:

  1. The European Data Protection Board (EDPB) in November issued draft guidance with an extremely strict interpretation of how to implement the Schrems II case.
  2. The decision in Schrems II is based on EU constitutional law. There are varying current interpretations in Europe of what is required by Schrems II, but constitutional requirements may restrict the range of options available to EU and U.S. policymakers.
  3. Strict EU rules about data transfers, such as the draft EDPB guidance, would appear to result in strict data localization, creating numerous major issues for EU- and U.S.-based businesses, as well as affecting many online activities of EU individuals.
  4. Along with concerns about lack of individual redress, the CJEU found that the EU Commission had not established that U.S. surveillance was “proportionate” in its scope and operation. Appendix 2 to this testimony seeks to contribute to an informed judgment on proportionality, by cataloguing developments in U.S. surveillance safeguards since the Commission’s issuance of its Privacy Shield decision in 2016.
  5. Negotiating an EU/U.S. adequacy agreement is important in the short term.
  6. A short-run agreement would assist in creating a better overall long-run agreement or agreements.
  7. As the U.S. considers its own possible legal reforms in the aftermath of Schrems II, it is prudent and a normal part of negotiations to seek to understand where the other party – the EU – may have flexibility to reform its own laws.
  8. Issues related to Schrems II have largely been bipartisan in the U.S., with substantial continuity across the Obama and Trump administrations, and expected as well for a Biden administration.
  9. Passing comprehensive privacy legislation would help considerably in EU/U.S. negotiations.
  10. This Congress may have a unique opportunity to enact comprehensive commercial privacy legislation for the United States.


Biden Administration Tech Policy: Federal Trade Commission (FTC)

Under President Joe Biden, the FTC will face most of the same issues presently before the agency.

In a Biden Administration, the FTC may tip from three Republican Commissioners, including the chair, to a majority of Democrats if Chair Joseph Simons steps down, as has been rumored for some months now, in part because of political pressure and displeasure from the Trump White House. It is not uncommon for a chair to stay on as a commissioner even after a President of a different party comes to power, although sitting chairs sometimes resign at the beginning of a new presidency, as then Chair Edith Ramirez did at the beginning of the Trump Administration in 2017. In any event, by law, the President may not remove the FTC chair or any commissioner except for “inefficiency, neglect of duty, or malfeasance in office.”

However, the President may, and almost always does when the White House changes hands, designate a new chair, and either of the sitting Democratic Commissioners, Rebecca Kelly Slaughter or Rohit Chopra, could become the new chair. Chopra’s term actually ended in September 2019, and he can serve until he is re-confirmed or a successor is confirmed. It is not clear whether Chopra would be re-nominated given that his views on regulation are to the left of Biden’s historical positions on such issues. Chopra does have support from Senator Elizabeth Warren (D-MA), a key stakeholder a Biden White House may try to keep happy, but his name has also been floated to head the Consumer Financial Protection Bureau (CFPB), the agency where he served as Deputy Director. So, it may come to pass that President-elect Joe Biden gets to appoint two Democrats to the FTC if Simons steps down and Chopra moves on to the CFPB.

Of course, the FTC will almost certainly continue as the de facto federal data protection authority (DPA) for the United States and will use its Section 5 powers to investigate and punish privacy, data security, and cybersecurity violations. The agency is one of the two federal antitrust enforcers, antitrust being a recently revived area of federal law with bipartisan interest and support, and it is on the verge of filing an antitrust action against Facebook alleging violations in the social messaging market, especially on account of the WhatsApp and Instagram acquisitions. Conceivably, the FTC under Democratic leadership may have a more aggressive posture towards technology companies and other swaths of the economy that have undergone increased consolidation.

Moreover, most of the privacy bills in Congress would assign the responsibility of enforcing the regime at the federal level to the FTC, a power it would share with state attorneys general as is the current case with respect to antitrust and data security enforcement. The crucial question will be whether the agency receives the resources necessary to maintain its current responsibilities while taking on new responsibilities. At present, the House is proposing a $10 million increase to the agency’s budget from $331 million to $341 million.

Another aspect of the FTC that bears watching is how federal courts construe the agency’s power because a significant portion of the FTC’s ability to use its enforcement powers will hinge on court cases and possible Congressional tweaks to the FTC Act.

A few weeks ago, the FTC wrote the House and Senate committees with jurisdiction over the agency, asking for language restoring the power to seek and obtain restitution for victims of those who have violated Section 5 of the FTC Act and disgorgement of ill-gotten gains. The FTC is also asking that Congress clarify that the agency may act against violators even if their conduct has stopped, as it has done for more than four decades. Two federal appeals courts have ruled in ways that limit the FTC’s long-used powers, and now the Supreme Court of the United States is set to rule on these issues sometime next year. The FTC claims, however, that defendants are playing for time in the hopes that the FTC’s authority to seek and receive monetary penalties will ultimately be limited by the high court. Judging by language tucked into a privacy bill introduced by the chair of one of the committees, Congress may be willing to act soon.

The FTC asked the House Energy and Commerce and Senate Commerce, Science, and Transportation Committees “to take quick action to amend Section 13(b) [of the FTC Act i.e. 15 U.S.C. § 53(b)] to make clear that the Commission can bring actions in federal court under Section 13(b) even if conduct is no longer ongoing or impending when the suit is filed and can obtain monetary relief, including restitution and disgorgement, if successful.” The agency asserted “[w]ithout congressional action, the Commission’s ability to use Section 13(b) to provide refunds to consumer victims and to enjoin illegal activity is severely threatened.” All five FTC Commissioners signed the letter.

The FTC explained that adverse rulings by two federal appeals courts are constraining the agency from seeking relief for victims and punishment for violators of the FTC Act in federal courts within those two circuits, while elsewhere defendants are either asking courts for a similar ruling or using delaying tactics in the hopes the Supreme Court upholds the two appeals courts:

  • …[C]ourts of appeals in the Third and Seventh Circuits have recently ruled that the agency cannot obtain any monetary relief under Section 13(b). Although review in the Supreme Court is pending, these lower court decisions are already inhibiting our ability to obtain monetary relief under 13(b). Not only do these decisions already prevent us from obtaining redress for consumers in the circuits where they issued, prospective defendants are routinely invoking them in refusing to settle cases with agreed-upon redress payments.
  • Moreover, defendants in our law enforcement actions pending in other circuits are seeking to expand the rulings to those circuits and taking steps to delay litigation in anticipation of a potential Supreme Court ruling that would allow them to escape liability for any monetary relief caused by their unlawful conduct. This is a significant impediment to the agency’s effectiveness, its ability to provide redress to consumer victims, and its ability to prevent entities who violate the law from profiting from their wrongdoing.

Earlier in the year, by a split vote across party lines, the FTC asked a United States appeals court to reconsider a ruling that overturned a lower court’s holding that Qualcomm violated antitrust laws in the licensing of its technology and patents vital to smartphones. Republican Commissioners Noah Joshua Phillips and Christine Wilson voted against filing the brief asking for a rehearing, with Chair Joseph Simons joining the two Democratic Commissioners, Rohit Chopra and Rebecca Kelly Slaughter, in voting to move forward with the brief. This case could have major ramifications for antitrust law and the technology sector in the U.S. and for the 5G market, as Qualcomm is a major player in the development and deployment of the technology necessary for this coming upgrade in wireless communications, which is expected to bring a host of improvements in communications.

In the brief, the FTC argued the U.S. Court of Appeals for the Ninth Circuit (Ninth Circuit) did not disagree with the District Court’s factual findings of anticompetitive conduct but rather took issue with the lack of “a cogent theory of anticompetitive harm.” The FTC argued the case should be reconsidered on three grounds:

  • The Ninth Circuit ruled on the basis of formal labels and not economic substance, contrary to established Supreme Court law;
  • Facially neutral surcharges by one market participant to its rivals are, in fact, an unequal and exclusionary burden on rivals, conduct the Supreme Court has ruled violates antitrust law; and
  • Harm to customers is indeed a central focus and concern of antitrust cases, and ruling that this harm is outside relevant antitrust markets is also a misreading of established law.

As noted, the Ninth Circuit reversed a U.S. District Court’s decision that Qualcomm’s licensing practices violated the Sherman Antitrust Act. Specifically, the lower court held these practices “have strangled competition in the Code Division Multiple Access (CDMA) and premium Long-Term Evolution (LTE) modem chip markets for years, and harmed rivals, original equipment manufacturers (OEMs), and end consumers in the process.” Consequently, the court found “an unreasonable restraint of trade under § 1 of the Sherman Act and exclusionary conduct under § 2 of the Sherman Act,” and that Qualcomm is liable under the FTC Act, as “unfair methods of competition” under the FTC Act include “violations of the Sherman Act.”

However, the Ninth Circuit disagreed, overturned the district court and summarized its decision:

  • [We] began by examining the district court’s conclusion that Qualcomm had an antitrust duty to license its standard essential patents (SEPs) to its direct competitors in the modem chip markets pursuant to the exception outlined in Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585 (1985). [We] held that none of the required elements for the Aspen Skiing exception were present, and the district court erred in holding that Qualcomm was under an antitrust duty to license rival chip manufacturers. [We] held that Qualcomm’s OEM-level licensing policy, however novel, was not an anticompetitive violation of the Sherman Act.
  • [We] rejected the FTC’s contention that even though Qualcomm was not subject to an antitrust duty to deal under Aspen Skiing, Qualcomm nevertheless engaged in anticompetitive conduct in violation of § 2 of the Sherman Act. [We] held that the FTC did not satisfactorily explain how Qualcomm’s alleged breach of its contractual commitment itself impaired the opportunities of rivals. Because the FTC did not meet its initial burden under the rule of reason framework, [We were] less critical of Qualcomm’s procompetitive justifications for its OEM-level licensing policy—which, in any case, appeared to be reasonable and consistent with current industry practice. [We] concluded that to the extent Qualcomm breached any of its fair, reasonable, and nondiscriminatory (FRAND) commitments, the remedy for such a breach was in contract or tort law.

The FTC has a number of significant outstanding rulemakings.

In early 2019, the FTC released notices of proposed rulemaking (NPRM) for two of the regulations with which some financial services companies must comply: the Safeguards Rule and the Privacy of Consumer Financial Information Rule (Privacy Rule).

The reassessment of the Safeguards Rule began in 2016 when the FTC asked for comments. The proposed Safeguards Rule demonstrates the agency’s thinking on what data security regulations should look like, which is important because the FTC is the agency most likely to become the enforcer and writer of any new data security or privacy regulations. Notably, the new Safeguards regulation would require the use of certain best practices, such as encrypting data in transit or at rest and requiring the use of multi-factor authentication “for any individual accessing customer information.” Moreover, the other financial services agencies charged with implementing the section of Gramm-Leach-Bliley (GLB) that requires financial services companies to safeguard customers’ information may follow suit (e.g., the Federal Reserve Board or the Comptroller of the Currency).
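To make those named controls concrete, here is a minimal sketch. It is not drawn from the rule text; the record, key handling, and access check are all hypothetical, and it assumes the third-party cryptography package (pip install cryptography):

```python
from cryptography.fernet import Fernet

# Encryption at rest: customer information is never written in plaintext.
key = Fernet.generate_key()            # in practice, kept in a key vault
cipher = Fernet(key)

record = b"customer: Jane Doe, account: 0000-0000"   # hypothetical record
stored = cipher.encrypt(record)        # the ciphertext is what lands on disk
assert cipher.decrypt(stored) == record

# Multi-factor authentication: a password alone does not grant access to
# customer information; a second factor must also check out.
def may_access_customer_info(password_ok: bool, second_factor_ok: bool) -> bool:
    return password_ok and second_factor_ok

assert not may_access_customer_info(password_ok=True, second_factor_ok=False)
```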

In the proposed rule, the FTC noted that its changes to the Safeguards Rule would “include more detailed requirements for the development and establishment of the information security program required under the Rule…[and] [t]hese amendments are based primarily on the cybersecurity regulations issued by the New York Department of Financial Services, 23 NYCRR 500 (“Cybersecurity Regulations”), and the insurance data security model law issued by the National Association of Insurance Commissioners (“Model Law”).”

In the Safeguards Rule proposal, the FTC explained “[t]he proposal contains five main modifications to the existing Rule.”

  • First, it adds provisions designed to provide covered financial institutions with more guidance on how to develop and implement specific aspects of an overall information security program, such as access controls, authentication, and encryption.
  • Second, it adds provisions designed to improve the accountability of financial institutions’ information security programs, such as by requiring periodic reports to boards of directors or governing bodies.
  • Third, it exempts small businesses from certain requirements.
  • Fourth, it expands the definition of “financial institution” to include entities engaged in activities that the Federal Reserve Board determines to be incidental to financial activities. Such a change would add “finders”–companies that bring together buyers and sellers of a product or service–within the scope of the Rule.
  • Finally, the Commission proposes to include the definition of “financial institution” and related examples in the Rule itself rather than incorporate them by reference from a related FTC rule, the Privacy of Consumer Financial Information Rule.

The FTC’s Safeguards Rule applies to the following and other entities:

[M]ortgage lenders, “pay day” lenders, finance companies, mortgage brokers, account servicers, check cashers, wire transferors, travel agencies operated in connection with financial services, collection agencies, credit counselors and other financial advisors, tax preparation firms, non-federally insured credit unions, investment advisors that are not required to register with the Securities and Exchange Commission, and entities acting as finders.

The FTC explained that it “is proposing to expand the definition of “financial institution” in both the Privacy Rule and the Safeguards Rule to specifically include so-called “finders,” those who charge a fee to connect consumers who are looking for a loan to a lender…[because] [t]his proposed change would bring the Commission’s Rule in line with other agencies’ interpretation of the Gramm Leach Bliley Act.”

As part of its regular review of its regulations, the FTC asked for input on its Health Breach Notification Rule (HBN Rule), promulgated in 2010 per direction in the “American Recovery and Reinvestment Act” (ARRA) (P.L. 111-5). When enacted, Congress expected this regulation to be temporary, as policymakers thought a national breach notification statute would shortly be enacted that would make the FTC’s regulations superfluous, but that has obviously not happened. Hence, the FTC continues to have regulations governing breach notification and security of some health information for entities not subject to the “Health Insurance Portability and Accountability Act” (HIPAA)/“Health Information Technology for Economic and Clinical Health Act” (HITECH Act) regulations, which generally cover healthcare providers and their business associates. Incidentally, it is possible the FTC’s HBN Rule would govern breaches of vendors involved with COVID-19 contact tracing.

As explained in the current regulation, the HBN Rule “applies to foreign and domestic vendors of personal health records (PHR), PHR related entities, and third party service providers, irrespective of any jurisdictional tests in the Federal Trade Commission (FTC) Act, that maintain information of U.S. citizens or residents.” This rule, however, “does not apply to HIPAA-covered entities, or to any other entity to the extent that it engages in activities as a business associate of a HIPAA-covered entity.”

And yet, the FTC conceded it “has not had occasion to enforce its Rule because, as the PHR market has developed over the past decade, most PHR vendors, related entities, and service providers have been HIPAA-covered entities or “business associates” subject to the Department of Health and Human Services’ (HHS) rule.” The FTC foresees utility and need for the HBN Rule because, “as consumers turn towards direct-to-consumer technologies for health information and services (such as mobile health applications, virtual assistants, and platforms’ health tools), more companies may be covered by the FTC’s Rule.” Accordingly, the FTC “now requests comment on the HBN Rule, including the costs and benefits of the Rule, and whether particular sections should be retained, eliminated, or modified.”

In terms of how the HBN Rule functions, the FTC explained:

  • The Recovery Act directed the FTC to issue a rule requiring these entities, and their third-party service providers, to provide notification of any breach of unsecured individually identifiable health information.
  • Accordingly, the HBN Rule requires vendors of PHRs and PHR related entities to provide: (1) Notice to consumers whose unsecured individually identifiable health information has been breached; (2) notice to the media, in many cases; and (3) notice to the Commission.
  • The Rule also requires third party service providers (i.e., those companies that provide services such as billing or data storage) to vendors of PHRs and PHR related entities to provide notification to such vendors and entities following the discovery of a breach.
  • The Rule requires notice “without unreasonable delay and in no case later than 60 calendar days” after discovery of a data breach. If the breach affects 500 or more individuals, notice to the FTC must be provided “as soon as possible and in no case later than ten business days” after discovery of the breach. The FTC makes available a standard form for companies to use to notify the Commission of a breach. The FTC posts a list of breaches involving 500 or more individuals on its website. This list only includes two breaches, because the Commission has predominantly received notices about breaches affecting fewer than 500 individuals.

Moreover, per the current regulations, the FTC may treat violations of the HBN Rule as violations of a rule defining an unfair or deceptive act or practice, permitting the agency to seek and possibly levy civil fines of up to $43,000 per violation.
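For a sense of how the notification clock described above might run, here is a minimal sketch of the deadlines. The simplifications are mine: the 60-calendar-day consumer deadline is applied literally, and the ten-business-day FTC deadline for breaches of 500 or more individuals skips weekends but ignores federal holidays:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days, counting Monday through Friday only."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def hbn_deadlines(discovered: date, individuals_affected: int) -> dict:
    deadlines = {
        # Consumer notice: "without unreasonable delay and in no case
        # later than 60 calendar days" after discovery of the breach.
        "consumer_notice_by": discovered + timedelta(days=60),
    }
    if individuals_affected >= 500:
        # FTC notice: "as soon as possible and in no case later than
        # ten business days" after discovery.
        deadlines["ftc_notice_by"] = add_business_days(discovered, 10)
    return deadlines

# Hypothetical breach discovered 1 December 2020 affecting 750 people.
print(hbn_deadlines(date(2020, 12, 1), individuals_affected=750))
```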


Brown’s Bill Is The Most Privacy Friendly So Far

The top Democrat on the Senate Banking Committee has released the most pro-privacy bill yet.

Even though Senate Banking, Housing and Urban Affairs Committee Ranking Member Sherrod Brown (D-OH) introduced his draft privacy bill, the “Data Accountability and Transparency Act of 2020,” in June, too much has been happening to take a proper look at the bill until now. Having done so, I can say this is the most privacy and consumer rights friendly bill introduced in this Congress and quite possibly any recent Congress. I wonder whether Democrats could pass such a strong, restrictive bill even with super majorities in both chambers and a Democratic President, for the resistance from industry would be very fierce.

In terms of what this bill would do, most notably, a new agency would be created, the Data Accountability and Transparency Agency (DATA), that would sit outside the appropriations process like the Consumer Financial Protection Bureau (CFPB), which limits Congress’ power over the agency. It would be headed by a Director appointed by the President and confirmed by the Senate for a five-year term. The agency would also have a Deputy Director. Again, this uses the CFPB as the template and not the Federal Trade Commission (FTC) or the Federal Communications Commission (FCC), independent agencies with five Commissioners each. Also, like the CFPB, and unlike the FTC, the agency would be charged with policing unfair, deceptive, and abusive privacy practices in violation of this new law. It appears the DATA (incidentally, a terrible acronym for an agency) would work alongside existing federal agencies, and so the FTC could still police privacy and data security.

Moreover, the Brown bill uses the preemption model from the “Financial Services Modernization Act of 1999” (P.L. 106-102) (aka Gramm–Leach–Bliley), under which states would be allowed to regulate privacy above the federal standard so long as a state statute is not inconsistent with it. State statutes would be preempted only to the degree they are contrary to the new federal law.

And, of course, Brown’s bill allows people to sue for violations, and on the most generous terms I’ve seen among the privacy bills.

Not surprisingly, the definitions are drafted in ways that are uber pro-privacy. For example, ‘‘personal data’’ is defined as “electronic data that, alone or in combination with other data—

  • could be linked or reasonably linkable to an individual, household, or device; or
  • could be used to determine that an individual or household is part of a protected class.”

This is a very broad definition of the personal information a U.S. resident would have protected under the bill because it covers more than just data like names, addresses, Social Security numbers, etc. and instead covers all data that could be linked to a person, household, or device. This is a broader definition than most bills, which actually specify the sorts of data. For example, some bills treat specific geolocation data as deserving more protection than other data. However, it is often the case that this is defined as data that pinpoints a person’s location to within 1,750 feet, meaning that data locating a person within, say, 2,000 feet, still less than half a mile, would not be protected. Brown’s definition is simpler, broader, and quite possibly much easier to implement.
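A toy sketch makes the threshold problem concrete. The 1,750-foot cutoff below is borrowed from the other bills just described, not from Brown’s bill, which needs no such carve-out because all linkable data is covered:

```python
HYPOTHETICAL_PRECISION_THRESHOLD_FT = 1_750  # cutoff used in other bills

def is_specific_geolocation(precision_ft: float) -> bool:
    """Under the threshold model, only data pinpointing a person to within
    1,750 feet receives the heightened "specific geolocation" protection."""
    return precision_ft <= HYPOTHETICAL_PRECISION_THRESHOLD_FT

print(is_specific_geolocation(1_500))  # True: protected as "specific"
print(is_specific_geolocation(2_000))  # False: under half a mile (2,640 ft),
                                       # yet outside the protected category
```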

Likewise, what is considered a violation under the bill is also very broadly written. A ‘‘privacy harm’’ is “an adverse consequence, or a potential adverse consequence, to an individual, a group of individuals, or society caused, or potentially caused, in whole or in part, by the collection, use, or sharing of personal data, including:

(A) direct or indirect financial loss or economic harm, including financial loss or economic harm arising from fraudulent activities or data security breaches;

(B) physical harm, harassment, or a threat to an individual or property;

(C) psychological harm, including anxiety, embarrassment, fear, other trauma, stigmatization, reputational harm, or the revealing or exposing of an individual, or a characteristic of an individual, in an unexpected way;

(D) an adverse outcome or decision, including relating to the eligibility of an individual for the rights, benefits, or privileges in credit and insurance (including the denial of an application or obtaining less favorable terms), housing, education, professional certification, employment (including hiring, firing, promotion, demotion, and compensation), or the provision of health care and related services;

(E) discrimination or the otherwise unfair or unethical differential treatment with respect to an individual, including in a manner that is prohibited under section 104;

(F) the interference with, or the surveillance of, activities that are protected by the First Amendment to the Constitution of the United States;

(G) the chilling of free expression or action of an individual, or society generally, due to perceived or actual pervasive and excessive collection, use, or sharing of personal data;

(H) the impairment of the autonomy of an individual or society generally; and

(I) any harm fairly traceable to an invasion of privacy tort; and

(J) any other adverse consequence, or potential adverse consequence, consistent with the provisions of this Act, as determined by the Director.

I’ve quoted the entire definition of “privacy harm” because I think it helps one understand the full range of what harms the new privacy agency would be policing. First, it goes beyond actual financial or economic harms to reach “psychological harm,” which may present courts with problems as they try to navigate what anguish meets this standard and what does not. Second, it covers activities that are protected under the First Amendment, the chilling of free expression, the impairment of a person’s autonomy, and “any harm fairly traceable to an invasion of privacy tort.” This may be the widest definition of harm in any of the privacy bills introduced in this or any other recent Congress. Finally, the DATA could determine that any other consequence, real or potential, qualifies as a privacy harm.

The bill likewise defines a “protected class” expansively: “the actual or perceived race, color, ethnicity, national origin, religion, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability of an individual or a group of individuals.”

The bill would outright ban data collection, use, or sharing unless for a permissible purpose, which include:

  • To provide a good, service, or specific feature requested by an individual in an intentional interaction.
  • To engage in journalism, provided that the data aggregator has reasonable safeguards and processes that prevent the collection, use, or sharing of personal data for commercial purposes other than journalism.
  • To employ an individual, including for administration of wages and benefits, except that a data aggregator may not invasively collect, use, or share the employee’s personal data in carrying out this paragraph.
  • Where mandated to comply with Federal, State, or local law.
  • Consistent with due process, direct compliance with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons.
  • To bring or defend legal claims, provided that the parties or potential parties take all necessary measures, including, as applicable, obtaining a protective order, to protect against unnecessary public disclosure of personal data.
  • To detect or respond to security incidents, protect against malicious, deceptive, fraudulent, or illegal activity, or prosecute those responsible for that activity.
  • Free expression by individuals on a social network or media platform.
  • In exigent circumstances, if first responders or medical personnel, in good faith, believe danger of death or serious physical injury to an individual, or danger of serious and unlawful injury to property, requires collection, use, or sharing of personal data relating to the exigent circumstances.
  • The development and delivery of advertisements—
    • based on the content of the website, online service, or application to which the individual or device is connected; and
    • excludes advertising based on the use of any personal data collected or stored from previous interactions with the individual or device, a profile of the individual or device, or the previous online or offline behavior of the individual or device.
  • To offer discounted or free goods or services to an individual if—
    • the offering is in connection with the voluntary participation by the individual in a program that rewards individuals for patronage; and
    • personal data is only collected to track purchases for loyalty rewards under the program.

Again, I’ve quoted at length to show how restrictive the bill is. This is the list of permissible purposes, and one will not find a list of exemptions that pares back the privacy rights ostensibly granted by the bill. For the private sector, the first purpose will be the most relevant, as entities (aka data aggregators under the bill) would be allowed to provide services, products, or goods requested by a person who has intentionally interacted with them. Use of the word intentional would seem to rule out accidental or questionable interactions. There is also no language making product or service development an exception, as many other bills do.

Moreover, with respect to the online advertising industry, behavioral advertising would seem to not be a permissible purpose, at least the variety under which a company aggregates data from different sources to form a profile on a person. Moreover, “[c]ollecting, using, or sharing personal data to generate advertising revenue to support or carry out a permissible purpose is not a permissible purpose.”

The “Data Accountability and Transparency Act of 2020” would permit loyalty or reward programs and even allow a business to offer tiered pricing. And, entities could not charge higher or different prices if a person exercises her rights under the bill.

Brown’s bill would place very strict limits on what entities could do with personal data. To wit, it is provided that “[e]xcept where strictly necessary to carry out a permissible purpose, a data aggregator shall not—

  • share personal data with affiliated entities, service providers, or third parties;
  • use personal data for any purpose other than to carry out a permissible purpose;
  • retain personal data for any time longer than strictly necessary to carry out a permissible purpose; or
  • derive or infer data from any element or set of personal data.”

There is a list of prohibited practices, including, as mentioned, a bar on charging higher prices or providing lesser service or products if one chooses to exercise his rights under the bill. Also, businesses would be prohibited from re-identifying anonymized data or from commingling personal data from different sources. Violating these prohibitions could lead to treble damages.

It also seems like the bill bans most differential pricing:

It is unlawful for a data aggregator to collect, use, or share personal data for advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for housing, employment, credit, or insurance in a manner that discriminates against or otherwise makes the opportunity unavailable or offered on different terms on the basis of a protected class or otherwise materially contributes to unlawful discrimination.

I suppose if there is differential pricing not based on a protected class, then it might be acceptable. However, I’m struggling to think of what that might look like.

This section also makes illegal the use of personal data for vote suppression. This language is an obvious non-starter with Republicans like Senate Majority Leader Mitch McConnell (R-KY) and would find few fans in the White House given recent and persistent allegations of vote suppression efforts by the Trump Campaign in 2016.

Brown’s use of the disparate impact standard in proving discrimination is anathema to most conservatives who have long made the case that disparate treatment should be the measuring stick for determining if discrimination has occurred.

Moreover, if a data aggregator uses automated decision-making systems, it must continually assess whether any bias against a protected class is occurring or any disparate impact on a protected class is happening.

People would be able to access and port their personal information, and this right is much broader than those provided in other bills. They would be able to access the specific pieces of information collected, used, or shared about them, the permissible purposes for which the data was collected, and the service providers and third parties with whom the information was shared. On this latter point, other privacy bills normally provide a person with access, upon request, to the categories of such entities and not the actual entities themselves.

Brown’s privacy bill provides a right of transparency that mandates that each data aggregator’s online privacy policy make the following information available:

  • A description of the personal data that the data aggregator collects, uses, or shares.
  • The specific sources from which personal data is collected.
  • A description of the sources from which personal data is collected.
  • The permissible purposes for which personal data is collected, used, or shared.
  • The affiliates, service providers, or third parties with which the data aggregator shares personal data, and the permissible purpose for such sharing.
  • A description of the length of time for which personal data is retained.
  • If personal data is collected and retained as anonymized data, a description of the techniques and methods used to create the anonymized data.

Again, this right provides more specific information than comparable rights in other privacy bills.

Data aggregators would have the affirmative duty to ensure the information they collect is correct, and people would have the “right to require that a data aggregator that retains the individual’s personal data correct any inaccurate or incomplete personal data.” Moreover, data aggregators must correct any inaccurate or incorrect information as directed by a person. Other bills have language requiring businesses to make best or reasonable efforts, but nothing like a guarantee for people or a duty for businesses.

People would be able to ask data aggregators to delete personal information, and they must unless these data are needed to complete a permissible purpose.

Brown’s bill has novel language stipulating “[a]n individual has the right to object to the claimed permissible purpose for any personal data that a data aggregator has collected, used, or shared of such individual.” Consequently, a data aggregator must “produce evidence supporting the data aggregator’s claim that the collection, use, or sharing of such individual’s personal data—

  • was strictly necessary to carry out a permissible purpose;
  • was not used or shared for any other purpose; and
  • has not been retained for any time longer than strictly necessary to carry out a permissible purpose.”

Presumably, failing to produce evidence at all or sufficient evidence constitutes a violation punishable by the new agency.
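The bill prescribes no particular compliance mechanism, but one can imagine how a data aggregator might prepare to answer such objections: by keeping an audit trail tying each collection event to its claimed permissible purpose. The sketch below is entirely an assumption about implementation, not anything in the bill:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class CollectionEvent:
    data_element: str                  # e.g., "email_address"
    permissible_purpose: str           # the purpose claimed at collection
    collected_at: datetime
    shared_with: List[str] = field(default_factory=list)
    deleted_at: Optional[datetime] = None

@dataclass
class AuditTrail:
    events: List[CollectionEvent] = field(default_factory=list)

    def evidence_for(self, data_element: str) -> List[CollectionEvent]:
        """Records the aggregator could produce to show collection, use,
        and sharing were strictly necessary for a permissible purpose."""
        return [e for e in self.events if e.data_element == data_element]

trail = AuditTrail()
trail.events.append(CollectionEvent(
    "email_address", "provide requested good or service", datetime(2020, 6, 18)))
print(trail.evidence_for("email_address"))
```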

People would also be allowed to request that a person review material decisions made via automated processes.

Brown puts an interesting twist on the customary language in almost all privacy bills requiring security commensurate with the type of information being collected, used, and shared. The bill creates a duty of care, which, as I seem to recall against my will from law school, makes any violation of such a duty a tort, permitting people to sue under tort law. Nonetheless, the bill provides that

A data aggregator shall implement and maintain reasonable security procedures and practices, including administrative, physical, and technical safeguards, appropriate to the nature of the personal data and the purposes for which the personal data will be collected, used, or shared…

Moreover, this duty of a data aggregator extends to service providers and the former are made explicitly liable for the violations of the latter.

If a data aggregator receives a verified request to exercise these rights, it must comply at no cost. This would not apply to frivolous and irrelevant requests, however.

This new agency would be housed in the Federal Reserve System and would be able to keep and use the proceeds from its actions to fund operations. Just like the CFPB, this would ensure independence from Congress and the Executive Branch, and just like the CFPB, this is likely a non-starter with Republicans.

The new Data Accountability and Transparency Agency, as noted, would be empowered to “take any action authorized under this Act to prevent a data aggregator or service provider from committing or engaging in any unfair, deceptive, or abusive act or practice in connection with the collection, use, or sharing of personal data.” Moreover,

The Agency may prescribe rules applicable to a data aggregator identifying unlawful, unfair, deceptive, or abusive acts or practices in connection with the collection, use, or sharing of personal data, which may include requirements for the purpose of preventing such acts or practices. Rules under this section shall not limit, or be interpreted to limit, the scope of unlawful, deceptive, or abusive acts or practices in connection with the collection, use, or sharing of personal data.

The agency’s power to punish unfair acts is drafted similarly to the FTC’s, with the caveat that any such act must be one people cannot reasonably avoid and whose harm is not outweighed by countervailing benefits to people or competition. It bears note that the agency would be able to punish unfair practices “likely” to cause privacy harms or “other substantial harm” to people in addition to practices causing actual harm.

An abusive practice is one that:

  • materially interferes with the ability of an individual to understand a term or condition of a good or service; or
  • takes unreasonable advantage of—
    • a lack of understanding on the part of the individual of the material risks, costs, or conditions of the product or service;
    • the inability of the individual to protect their interests in selecting or using a product or service; or
    • the reasonable reliance by the individual on a data aggregator or service provider to act in the interests of the individual.

Deceptive practices are not defined, and so it is likely the new agency’s powers would be the same as the FTC’s with respect to this type of illegal conduct. Also, the new agency would be able to punish violations of any federal privacy law, which would bring all the disparate federal privacy regimes under the roof of one entity in the U.S.

The new agency would receive the authority to punish bad actors in the same bifurcated fashion as the FTC and some other agencies: either through an administrative proceeding or by going to federal court. However, regarding the latter route, the agency would not need to ask the Department of Justice (DOJ) to file suit on its behalf. This detail is salient because independent litigating authority is increasingly the de facto Democratic position on this issue.

Whatever the case, the agency would be able to seek any appropriate legal or equitable relief, the latter term encompassing injunctions, disgorgement, restitution, and other such relief for violations. And, of course, the new agency would be able to punish violations of this new law or any federal privacy law with civil fines laid out in tiers:

  • For any violation of a law, rule, or final order or condition imposed in writing by the Agency, a civil penalty may not exceed $5,000 for each day during which such violation or failure to pay continues.
  • [F]or any person that recklessly engages in a violation of this Act or any Federal privacy law, a civil penalty may not exceed $25,000 for each day during which such violation continues.
  • [F]or any person that knowingly violates this Act or any Federal privacy law, a civil penalty may not exceed $1,000,000 for each day during which such violation continues.”

It seems these tiers would result only in a per-day violation total and would not be multiplied by the number of affected people. If so, $5,000 a day is a sum most large companies would probably not register, and even $25,000 a day is bearable for enormous companies like Facebook or Amazon.
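
To put these caps in perspective, a quick back-of-the-envelope calculation helps. The per-day amounts below come straight from the bill's tiers; the 30-day duration of a continuing violation is a hypothetical chosen purely for illustration.

```python
# Rough totals for the bill's per-day civil penalty tiers.
# The per-day amounts are from the bill text; the 30-day duration
# of a continuing violation is an assumption for illustration.

TIERS = {
    "ordinary violation": 5_000,     # first tier, per day
    "reckless violation": 25_000,    # second tier, per day
    "knowing violation": 1_000_000,  # third tier, per day
}

DAYS = 30  # hypothetical length of a continuing violation

for tier, per_day in TIERS.items():
    print(f"{tier}: ${per_day * DAYS:,} over {DAYS} days")

# ordinary violation: $150,000 over 30 days
# reckless violation: $750,000 over 30 days
# knowing violation: $30,000,000 over 30 days
```

Even a month of knowing violations tops out at $30 million under this reading, a serious sum but hardly existential for the largest data aggregators unless a violation runs for a very long time.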

Nonetheless, violations arising from re-identifying personal data are punished under the last tier (i.e. $1 million per day), and any of these violations might result in criminal prosecution, for the agency may refer such violations to DOJ. CEOs and Boards of Directors could be prosecuted for knowing and intentional violations, which is a fairly high bar, and face up to ten years in prison and a $10 million fine if convicted.

Brown’s bill provides people with a right to sue entities, including government agencies under some circumstances, that violate this act. Also, people may sue the new agency for failing to promulgate required regulations or for adopting rules that violate the act. Plaintiffs would be able to win between $100 and $1,000 per violation per day, punitive damages, attorney’s fees and litigation costs, and any other relief the court sees fit to grant. Many Republicans and industry stakeholders are, of course, opposed to a private right of action, but Brown’s goes beyond what the other bills offer because it would allow for the award of punitive damages and fees related to bringing the litigation. They would likely argue, with justification, that there would be a wave of class action lawsuits. Another non-starter with Republicans is that the act circumvents a threshold consideration that weeds out lawsuits in federal court by stating that a mere violation of the act is an injury for purposes of the lawsuit. This language sidesteps the obstacle upon which many suits are dashed, for one whose privacy has been violated often cannot show an injury in the form of monetary or economic losses.
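
For a sense of why industry stakeholders fear the class action exposure, consider a rough sketch of potential damages under the private right of action. The $100-$1,000 statutory range comes from the bill; the class size and violation length are assumptions for illustration only.

```python
# Hypothetical class action exposure under the bill's private right of action.
# The $100-$1,000 per violation per day range is from the bill; the class
# size and duration below are assumptions for illustration.

class_size = 1_000_000  # assumed number of affected people
days = 30               # assumed length of the continuing violation
low, high = 100, 1_000  # statutory damages per violation per day

print(f"Low end:  ${class_size * days * low:,}")   # $3,000,000,000
print(f"High end: ${class_size * days * high:,}")  # $30,000,000,000
```

Unlike the agency's per-day fines sketched above, these damages would scale with the number of plaintiffs, which is precisely why a private right of action, let alone one with punitive damages and fees, is anathema to many Republicans and industry stakeholders.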

Like other bills, “pre-dispute arbitration agreements” and “pre-dispute joint action waivers” signed by any person shall not be valid or enforceable in court, meaning companies cannot limit legal liability by requiring that people waive their rights as part of the terms of service, as is now customary.

As noted previously, the bill would not preempt all state privacy laws. Rather only those portions of state laws that conflict with this act would be preempted, and states would be free to legislate requirements more stringent than the new federal privacy regulatory structure. Moreover, the bill makes clear that common law actions would still be available.


A Washington State Privacy Bill…Rises From The Dead

One of the sponsors of a privacy bill that died earlier this year has reintroduced a modified version with new language in the hopes of passing the bill next year.

Washington State Senator Reuven Carlyle (D-Seattle) has floated a new draft of privacy legislation in the hopes it will pass after forerunner bills died in the last two legislative sessions. Carlyle has made a number of changes in the “Washington Privacy Act 2021,” documented in this chart showing the differences between the new bill, the last version of the bill passed by the Washington State Senate last year, the “California Consumer Privacy Act” (CCPA) (AB 375), and the “California Privacy Rights Act” (CPRA) (aka Proposition 24) on this year’s ballot. But in the main, the bill tracks closely with the two bills produced by the Washington Senate and House last year that lawmakers could not ultimately reconcile. However, there are no provisions on facial recognition technology, which was largely responsible for sinking a privacy bill in Washington State two years ago. Carlyle has taken the unusual step of appending language covering the collection and processing of personal data to combat infectious diseases like COVID-19.

Big picture, the bill still uses the concepts of data controllers and processors most famously enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). Like other privacy bills, generally, people in Washington State would not need to consent before an entity could collect and process their information. People would be able to opt out of some activities, but most data collection and processing could still occur as it presently does.

Washingtonians would be able to access, correct, delete, and port their personal data. Moreover, people would be able to opt out of certain data processing: “for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal effects concerning a consumer or similarly significant effects concerning a consumer.” Controllers must provide at least two secure and reliable means by which people could exercise these rights and may not require the creation of a new account. Rather, a controller can require a person to use an existing account to exercise her rights.

Controllers must act on a request within 45 days and are allowed one 45-day extension “where reasonably necessary, taking into account the complexity and number of the requests.” It is not clear what would justify a 45-day extension except for numerous, complex requests, but in any event, the requester must be informed of an extension. Moreover, if a controller decides not to comply with a request, it must let the person know within 45 days, along with the reasons for noncompliance and how an appeal may be filed. People would be permitted two free requests a year (although nothing stops a controller from meeting additional requests for free), and controllers may charge thereafter to cover reasonable costs and to deal with repetitive requests. Controllers may also simply deny repetitive requests, and they may deny requests they cannot authenticate. In the latter event, a controller may ask for more information so the person can prove his identity, but it is not required to.

Each controller would need to establish an internal appeals process for people to use if their request to exercise a right is denied. There is a specified timeline, and, at the end of this process, if a person is unhappy with the decision, the controller must offer to turn the matter over to the Office of the Attorney General of Washington for adjudication.

Like last year’s bills, this draft makes clear the differentiated roles of controllers and processors in the data ecosystem regulated by Washington State. Processors must follow a controller’s instructions and have an obligation to help the controller comply with the act. These obligations must be set out in a contract between each controller and processor “that sets out the processing instructions to which the processor is bound, including the nature and purpose of the processing, the type of personal data subject to the processing, the duration of the processing, and the obligations and rights of both parties.” Additionally, who is a controller and who is a processor will necessarily be a fact-driven analysis, and it is possible for one entity to be both depending on the circumstances.

Notably, processors must help controllers respond to requests from people exercising their rights, secure personal data, and assist in complying with Washington State’s data breach protocol if a breach occurs. Processors must implement and use security commensurate to the personal data they are holding and processing.

Controllers are obligated to furnish privacy policies to people that must include the categories of personal data processed, the purposes for any processing, the categories of personal data shared with third parties, and the categories of third parties with whom sharing occurs. Moreover, if a controller sells personal data for targeted advertising, it has a special obligation to make people aware on a continuing basis, including of their right to opt out if they choose. Data collection is limited to what is reasonably necessary for the disclosed purposes of the data processing. And yet, a controller may ask for and obtain consent to process for purposes beyond those reasonably necessary to effectuate the original purposes disclosed to the person. Controllers would also need to minimize the personal data they have on hand.

Controllers must “establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data…[that] shall be appropriate to the volume and nature of the personal data at issue.” Controllers would not be allowed to process personal data in a way that would violate discrimination laws. And so, controllers may not “process personal data on the basis of a consumer’s or a class of consumers’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, lawful source of income, or disability, in a manner that unlawfully discriminates against the consumer or class of consumers with respect to the offering or provision of (a) housing, (b) employment, (c) credit, (d) education, or (e) the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.” Controllers could not retaliate against people who exercise any of their rights to access, correct, delete, or port their personal data by offering products or services of different price or quality. And yet, controllers may offer different prices and services as part of a loyalty program that is voluntary for people to join and may share personal data with third parties for reasons limited to the loyalty program.

Regarding another subset of personal data, consent will be needed before processing can occur. This subset is “sensitive data,” which is defined as “(a) personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sexual orientation, or citizenship or immigration status; (b) the processing of genetic or biometric data for the purpose of uniquely identifying a natural person; (c) the personal data from a known child; or (d) specific geolocation data.”

The bill also bars a person from waiving his or her rights under any type of agreement; any such waiver would be null and void for reasons of public policy.

Controllers would not need to reidentify deidentified personal data to respond to a request from a person. However, the way this section is written gives rise to questions about the drafter’s intentions. The section would not require controllers to respond to requests from people to access, correct, delete or port personal data if the “controller is not reasonably capable of associating the request with the personal data, or…it would be unreasonably burdensome for the controller to associate the request with the personal data” if other conditions are true as well. Given that this provision comes right after the language on reidentifying deidentified data, it seems like the aforementioned language would apply to other personal data. And so, some controllers could respond to a request by arguing they cannot associate the request or it would be unduly burdensome. Perhaps this is not what the drafters intend, but this could become a route whereby controllers deny legitimate requests.

This section of the bill also makes clear that people will not be able to exercise their rights of access, correction, deletion, or porting if the personal data are “pseudonymous data.” This term is defined as “personal data that cannot be attributed to a specific natural person without the use of additional information, provided that such additional information is kept separately and is subject to appropriate technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.” This concept would seem to encourage controllers and processors to store data stripped of identifiers so they need not incur the cost and time of responding to requests. It bears note that the concept and definition appear heavily influenced by the GDPR, which provides:

‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person
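
To make the concept concrete, here is a minimal sketch of what pseudonymous storage might look like in practice, assuming a simple random-token scheme with the identity map held separately. The names and structure are mine for illustration; neither the bill nor the GDPR prescribes an implementation.

```python
import secrets

# A minimal sketch of pseudonymous storage in the GDPR/WPA 2021 sense:
# records carry an opaque token instead of identifiers, and the
# token-to-identity map is kept separately under its own safeguards.

identity_vault = {}  # token -> identifying info; stored and secured separately

def pseudonymize(record: dict) -> dict:
    """Strip direct identifiers from a record, replacing them with a token."""
    token = secrets.token_hex(16)
    identity_vault[token] = {"name": record.pop("name"), "email": record.pop("email")}
    record["subject_token"] = token
    return record

# The working dataset alone can no longer be attributed to a person...
row = pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "zip": "98101"})
# row == {"zip": "98101", "subject_token": "<random hex>"}

# ...which is why a controller holding only `row` could claim it cannot
# associate an access or deletion request with the personal data unless
# it consults the separately held vault.
```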

Data protection assessments will be necessary for a subset of processing activities: targeted advertising, selling personal data, processing sensitive data, any processing of personal data that presents “a heightened risk of harm to consumers” and another case that requires explanation. This last category is for those controllers who are profiling such that a reasonably foreseeable risk is presented of:

  • “Unfair or deceptive treatment of, or disparate impact on, consumers;
  • financial, physical, or reputational injury to consumers;
  • a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person; or
  • other substantial injury to consumers.”

These “data protection assessments must take into account the type of personal data to be processed by the controller, including the extent to which the personal data are sensitive data, and the context in which the personal data are to be processed.” Moreover, data protection assessments “must identify and weigh the benefits that may flow directly and indirectly from the processing to the controller, consumer, other stakeholders, and the public against the potential risks to the rights of the consumer associated with such processing, as mitigated by safeguards that can be employed by the controller to reduce such risks.” Additionally, the bill stipulates “[t]he use of deidentified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed, must be factored into this assessment by the controller.” And, crucially, controllers must provide data protection assessments to the Washington Attorney General upon request, meaning they could inform an enforcement action or investigation.

Section 110 of the “Washington Privacy Act 2021” lays out the reasons one usually finds in privacy bills as to the circumstances when controllers and processors are not bound by the act, including but not limited to:

  • Comply with federal, state, or local laws, rules, or regulations;
  • Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities;
  • Cooperate with law enforcement agencies concerning conduct or activity that the controller or processor reasonably and in good faith believes may violate federal, state, or local laws, rules, or regulations;
  • Provide a product or service specifically requested by a consumer, perform a contract to which the consumer is a party, or take steps at the request of the consumer prior to entering into a contract;
  • Take immediate steps to protect an interest that is essential for the life of the consumer or of another natural person, and where the processing cannot be manifestly based on another legal basis;
  • Prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any illegal activity; preserve the integrity or security of systems; or investigate, report, or prosecute those responsible for any such action;

Moreover, the act does “not restrict a controller’s or processor’s ability to collect, use, or retain data to:

  • Conduct internal research solely to improve or repair products, services, or technology;
  • Identify and repair technical errors that impair existing or intended functionality; or
  • Perform solely internal operations that are reasonably aligned with the expectations of the consumer based on the consumer’s existing relationship with the controller, or are otherwise compatible with processing in furtherance of the provision of a product or service specifically requested by a consumer or the performance of a contract to which the consumer is a party.

It seems reasonable to expect controllers and processors to try and read these provisions as liberally as they can in order to escape or circumvent the obligations of the act. I do not level this claim as a criticism; rather, it is what will undoubtedly occur if a regulated entity has halfway decent legal counsel.

The legal liability provisions for controllers from last year’s bill also return. The act makes clear that controllers cannot be liable for a processor’s violation if “at the time of disclosing the personal data, the disclosing controller or processor did not have actual knowledge that the recipient intended to commit a violation.” Consequently, even if a reasonable person could foresee that a processor would likely violate the act, unless the controller actually knows a violation is imminent, the controller cannot be held liable. This structuring of the legal liability will likely result in controllers claiming they did not know of processors’ violations and create a disincentive for controllers to press processors to comply with the statutory and contractual requirements binding both.

The bill reiterates:

Personal data that are processed by a controller pursuant to [any of the aforementioned carveouts in Section 110] must not be processed for any purpose other than those expressly listed in this section. Personal data that are processed by a controller pursuant to this section may be processed solely to the extent that such processing is:

(i) Necessary, reasonable, and proportionate to the purposes listed in this section; and

(ii) adequate, relevant, and limited to what is necessary in relation to the specific purpose or purposes listed in this section.

Finally, controllers bear the burden of making the case that the exception being used complies with this section. This would serve to check a regulated entity’s inclination to read terms and requirements as generously as possible for them and their conduct.

The bill would not create a new right for people to sue, but if a person sues and wins on existing grounds (e.g., product liability, tort, or contract law), liability would be apportioned between controller and processor in proportion to their respective responsibility.

In terms of enforcement by the Attorney General, violations of this act are treated as violations of the Washington State Consumer Protection Act and its ban on unfair and deceptive practices, with civil liability as high as $7,500 per violation. However, the Attorney General must first “provide a controller thirty days’ written notice identifying the specific provisions of this title the Attorney General, on behalf of a consumer, alleges have been or are being violated.” If a cure is effected, then the Attorney General may not seek monetary damages; if not, the Attorney General may take the matter to court.

The act preempts all county, city, and local data processing laws.

There is new language in the bill pertaining to public health emergencies, privacy, and contact tracing. However, the provisions are divided between two different titles with one controlling private sector entities and the other public sector entities. Incidentally, at the federal level, privacy bills have not tended to include provisions to address public health emergencies and instead standalone bills have been drafted and introduced.

In regard to private sector entities, controllers and processors would not be able to process “covered data” for a “covered purpose,” which relates to the symptoms of infectious diseases and tracking their spread, unless certain conditions are met. For example, these entities would need to make available a privacy policy, and people must consent to such processing. Additionally, controllers and processors would not be able to disclose “any covered data processed for a covered purpose” to any law enforcement agency in the U.S., sell “any covered data processed for a covered purpose,” or “[s]hare any covered data processed for a covered purpose with another controller, processor, or third party unless such sharing is governed by contract” according to the terms laid out in this section and disclosed to the person in the required privacy policy. However, private sector entities could disclose covered data processed for a covered purpose to federal, state, and local agencies pursuant to laws permitting such disclosure, likely public health or emergency laws.

This section of the bill defines “covered purpose” as

processing of covered data concerning a consumer for the purposes of detecting symptoms of an infectious disease, enabling the tracking of a consumer’s contacts with other consumers, or with specific locations to identify in an automated fashion whom consumers have come into contact with, or digitally notifying, in an automated manner, a consumer who may have become exposed to an infectious disease, or other similar purposes directly related to a state of emergency declared by the governor pursuant to RCW 43.06.010 and any restrictions imposed under the state of emergency declared by the governor pursuant to RCW 43.06.200 through 43.06.270.

There is a section that seems redundant. This provision establishes the right of a person to opt out of processing her covered data for a covered purpose, but the previous section makes clear a person’s covered data may not be processed without her consent. Nonetheless, a person may determine whether his covered data is being processed, request a correction of inaccurate information, and request the deletion of “covered data.” The provisions on how controllers are required to respond to and process such requests are virtually identical to those established for the exercise of the rights to access, correct, delete, and port in the bill.

The relationship and responsibilities between controllers and processors track very closely to those imposed for normal data processing.

Controllers would need to make available privacy policies specific to processing covered data. The bill provides:

Controllers that process covered data for a covered purpose must provide consumers with a clear and conspicuous privacy notice that includes, at a minimum:

  • How a consumer may exercise the rights contained in section 203 of this act, including how a consumer may appeal a controller’s action with regard to the consumer’s request;
  • The categories of covered data processed by the controller;
  • The purposes for which the categories of covered data are processed;
  • The categories of covered data that the controller shares with third parties, if any; and
  • The categories of third parties, if any, with whom the controller shares covered data.

Controllers would also need to limit collection of covered data to what is reasonably necessary for processing and minimize collection. Moreover, controllers may not process covered data in ways that exceed what is reasonably necessary for covered purposes unless consent is obtained from each person. But then the bill further limits what processing of covered data is permissible by stating that controllers cannot “process covered data or deidentified data that was processed for a covered purpose for purposes of marketing, developing new products or services, or engaging in commercial product or market research.” Consequently, other processing purposes would be permissible provided consent has been obtained. And so, a covered entity might process covered data to improve the current means of collecting covered data for the covered purpose.

There is no right to sue entities for violating this section, but it appears controllers may bear more legal responsibility for the violations of their processors regarding covered data. Moreover, the enforcement language is virtually identical to the earlier provisions in the bill as to how the Attorney General may punish violators.

The bill’s third section would regulate the collection and processing of covered data for covered purposes by public sector entities, and for purposes of this section controllers are defined as “local government, state agency, or institutions of higher education which, alone or jointly with others, determines the purposes and means of the processing of covered data.” This section is virtually identical to the second section with the caveat that people would not be given the right to determine if their covered data has been collected and processed for a covered purpose, to request a correction of covered data, and to ask that such data be deleted. Also, a person could not ask to opt out of collection.

Finally, two of the Congressional stakeholders on privacy and data security hail from Washington state, and consideration and possible passage of a state law may limit their latitude on a federal bill they could support. Senator Maria Cantwell (D-WA) and Representative Cathy McMorris Rodgers (R-WA), who are the ranking members of the Senate Commerce, Science, and Transportation Committee and House Energy and Commerce’s Consumer Protection and Commerce Subcommittee respectively, are involved in drafting their committee’s privacy bills, and a Washington state statute may affect their positions in much the same way the “California Consumer Privacy Act” (CCPA) (AB 375) has informed a number of California Members’ positions on privacy legislation, especially with respect to bills being seen as weaker than the CCPA.


Further Reading, Other Developments, and Coming Events (22 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing on 23 September titled “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency, Department of Homeland Security
  • On 23 September, the Commerce, Science, and Transportation Committee will hold a hearing titled “Revisiting the Need for Federal Data Privacy Legislation,” with these witnesses:
    • The Honorable Julie Brill, Former Commissioner, Federal Trade Commission
    • The Honorable William Kovacic, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Jon Leibowitz, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Maureen Ohlhausen, Former Commissioner and Acting Chairman, Federal Trade Commission
    • Mr. Xavier Becerra, Attorney General, State of California
  • The House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee will hold a virtual hearing “Mainstreaming Extremism: Social Media’s Role in Radicalizing America” on 23 September with these witnesses:
    • Marc Ginsburg, President, Coalition for a Safer Web
    • Tim Kendall, Chief Executive Officer, Moment
    • Taylor Dumpson, Hate Crime Survivor and Cyber-Harassment Target
    • John Donahue, Fellow, Rutgers University Miller Center for Community Protection and Resiliency, Former Chief of Strategic Initiatives, New York City Police Department
  • On 23 September, the Senate Homeland Security and Governmental Affairs Committee will hold a hearing to consider the nomination of Chad Wolf to be the Secretary of Homeland Security.
  • The Senate Armed Services Committee will hold a closed briefing on 24 September “on Department of Defense Cyber Operations in Support of Efforts to Protect the Integrity of U.S. National Elections from Malign Actors” with:
    • Kenneth P. Rapuano, Assistant Secretary of Defense for Homeland Defense and Global Security
    • General Paul M. Nakasone, Commander, U.S. Cyber Command and Director, National Security Agency/Chief, Central Security Service
  • On 24 September, the Senate Homeland Security and Governmental Affairs Committee will hold a hearing on “Threats to the Homeland” with:
    • Christopher A. Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center
    • Kenneth Cuccinelli, Senior Official Performing the Duties of the Deputy Secretary of Homeland Security
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • The United States (U.S.) Department of Justice (DOJ) has indicted two Iranian nationals for allegedly hacking into systems in the U.S., Europe, and the Middle East dating back to 2013 to engage in espionage and sometimes theft.
    • The DOJ claimed in its press release:
      • According to a 10-count indictment returned on Sept. 15, 2020, Hooman Heidarian, a/k/a “neo,” 30, and Mehdi Farhadi, a/k/a “Mehdi Mahdavi” and “Mohammad Mehdi Farhadi Ramin,” 34, both of Hamedan, Iran, stole hundreds of terabytes of data, which typically included confidential communications pertaining to national security, foreign policy intelligence, non-military nuclear information, aerospace data, human rights activist information, victim financial information and personally identifiable information, and intellectual property, including unpublished scientific research.  In some instances, the defendants’ hacks were politically motivated or at the behest of Iran, including instances where they obtained information regarding dissidents, human rights activists, and opposition leaders.  In other instances, the defendants sold the hacked data and information on the black market for private financial gain.
      • The victims included several American and foreign universities, a Washington, D.C.-based think tank, a defense contractor, an aerospace company, a foreign policy organization, non-governmental organizations (NGOs), non-profits, and foreign government and other entities the defendants identified as rivals or adversaries to Iran.  In addition to the theft of highly protected and sensitive data, the defendants also vandalized websites, often under the pseudonym “Sejeal” and posted messages that appeared to signal the demise of Iran’s internal opposition, foreign adversaries, and countries identified as rivals to Iran, including Israel and Saudi Arabia.
  • Two United States (U.S.) agencies took coordinated action against an alleged cyber threat group and a front company for “a years-long malware campaign that targeted Iranian dissidents, journalists, and international companies in the travel sector.” The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) “imposed sanctions on Iranian cyber threat group Advanced Persistent Threat 39 (APT39), 45 associated individuals, and one front company…Rana Intelligence Computing Company (Rana)” per the agency’s press release. Treasury further claimed:
    • Rana advances Iranian national security objectives and the strategic goals of Iran’s Ministry of Intelligence and Security (MOIS) by conducting computer intrusions and malware campaigns against perceived adversaries, including foreign governments and other individuals the MOIS considers a threat. APT39 is being designated pursuant to E.O. 13553 for being owned or controlled by the MOIS, which was previously designated on February 16, 2012 pursuant to Executive Orders 13224, 13553, and 13572, which target terrorists and those responsible for human rights abuses in Iran and Syria, respectively.
    • The Federal Bureau of Investigation (FBI) provided “information on numerous malware variants and indicators of compromise (IOCs) associated with Rana to assist organizations and individuals in determining whether they may have been targeted.”
  • The United States (U.S.) Department of Justice (DOJ) also released grand jury indictments against five nationals of the People’s Republic of China and two Malaysians for extensive hacking and exfiltration of commercial and business information with an eye towards profiting from these crimes. The DOJ asserted in its press release:
    • In August 2019 and August 2020, a federal grand jury in Washington, D.C., returned two separate indictments (available here and here) charging five computer hackers, all of whom were residents and nationals of the People’s Republic of China (PRC), with computer intrusions affecting over 100 victim companies in the United States and abroad, including software development companies, computer hardware manufacturers, telecommunications providers, social media companies, video game companies, non-profit organizations, universities, think tanks, and foreign governments, as well as pro-democracy politicians and activists in Hong Kong.
    •  The intrusions, which security researchers have tracked using the threat labels “APT41,” “Barium,” “Winnti,” “Wicked Panda,” and “Wicked Spider,” facilitated the theft of source code, software code signing certificates, customer account data, and valuable business information.  These intrusions also facilitated the defendants’ other criminal schemes, including ransomware and “crypto-jacking” schemes, the latter of which refers to the group’s unauthorized use of victim computers to “mine” cryptocurrency. 
    • Also in August 2020, the same federal grand jury returned a third indictment charging two Malaysian businessmen who conspired with two of the Chinese hackers to profit from computer intrusions targeting the video game industry in the United States and abroad.  Shortly thereafter, the U.S. District Court for the District of Columbia issued arrest warrants for the two businessmen.  On Sept. 14, 2020, pursuant to a provisional arrest request from the United States with a view to their extradition, Malaysian authorities arrested them in Sitiawan.
  • On 21 September, the House of Representatives took and passed the following bills, according to summaries provided by the House Majority Whip’s office:
    • The “Effective Assistance in the Digital Era” (H.R. 5546) (Rep. Jeffries – Judiciary) This bill requires the Federal Bureau of Prisons to establish a system to exempt from monitoring any privileged electronic communications between incarcerated individuals and their attorneys or legal representatives.
    • The “Defending the Integrity of Voting Systems Act (S. 1321) This bill broadens the definition of “protected computer” for purposes of computer fraud and abuse offenses under current law to include a computer that is part of a voting system.
    • The “Promoting Secure 5G Act of 2020” (H.R. 5698) This bill will establish as a U.S. policy within the IFIs to only finance 5G projects and other wireless technologies that include adequate security measures in furtherance of national security aims to protect wireless networks from bad actors and foreign governments.
    • The “MEDIA Diversity Act of 2020” (H.R. 5567) This bill requires the FCC to consider market entry barriers for socially disadvantaged individuals in the communications marketplace.
    • The “Don’t Break Up the T-Band Act of 2020” as amended (H.R. 451) This bill repeals the requirement on the FCC to reallocate and auction the T-Band.  H.R. 451 also requires the FCC to adopt rules limiting the use of 9-1-1 fees by States or other taxing jurisdictions to (1) the support and implementation of 9-1-1 services and (2) operational expenses of public safety answering points.
    • It bears note that S. 1321 has passed the Senate, and so it is off to the White House as the only election security bill that has made it through both houses of Congress.

Further Reading

  • “Justice Department expected to brief state attorneys general this week on imminent Google antitrust lawsuit” By Tony Romm — The Washington Post; “Justice Dept. Case Against Google Is Said to Focus on Search Dominance” By Cecilia Kang, Katie Benner, Steve Lohr and Daisuke Wakabayashi — The New York Times; “Justice Department, states to meet in possible prelude to Google antitrust suit” By Leah Nylen — Politico. Tomorrow, the United States Department of Justice (DOJ) will outline its proposed antitrust case against Google with state attorneys general, almost all of whom are investigating Google on the same grounds. Reportedly, the DOJ case is focused on the company’s dominance of online searches, notably its arrangement to make Google the default search engine on iPhones and Androids, and not on its advertising practices. If the DOJ goes down this road, then it will be similar to the European Union’s (EU) 2018 case against Google for the same, which resulted in EU residents being offered a choice of search engines on Android devices and a €4.34 billion fine. This development comes after articles earlier this month that Attorney General William Barr has been pushing the DOJ attorneys and investigators, against the wishes of many, to wrap up the investigation in time for a pre-election filing that would allow President Donald Trump to claim he is being tough on big technology companies. However, if this comes to pass, Democratic attorneys general may decline to join the suit and may bring their own action also alleging violations in the online advertising realm that Google dominates. In this vein, Texas Attorney General Ken Paxton has been leading the state effort to investigate Google’s advertising business, which critics argue is anti-competitive. Also, according to DOJ attorneys who oppose what they see as Barr rushing the suit, this could lead to a weaker case Google may be able to defeat in court. Of course, this news comes shortly after word leaked from the Federal Trade Commission (FTC) that its case against Facebook could be filed regarding its purchase of rivals WhatsApp and Instagram.
  • “Why Japan wants to join the Five Eyes intelligence network” By Alan Weedon — ABC News. This piece makes the case as to why the United States, United Kingdom, Canada, Australia, and New Zealand may admit a new member to the Five Eyes soon: Japan. The case for the first Asian country is that it is a stable, western democracy, a key ally in the Pacific, and a bulwark against the influence of the People’s Republic of China (PRC). It is really this latter point that could carry the day, for the Five Eyes may need Japan’s expertise with the PRC and its technology to counter the former’s growing ambitions.
  • “The next Supreme Court justice could play a major role in cybersecurity and privacy decisions” By Joseph Marks — The Washington Post. There are a range of cybersecurity and technology cases that the Supreme Court will decide in the near future, and so whether President Donald Trump gets to appoint Justice Ruth Bader Ginsburg’s successor will be very consequential for policy in these areas. For example, the court could rule on the Computer Fraud and Abuse Act for the first time regarding whether researchers are violating the law by probing for weak spots in systems. There are also Fourth Amendment and Fifth Amendment cases pending with technology implications, as the former pertains to searches of devices by border guards and the latter to self-incrimination vis-à-vis suspects being required to unlock devices.
  • “Facebook Says it Will Stop Operating in Europe If Regulators Don’t Back Down” By David Gilbert — VICE. In a filing in its case against Ireland’s Data Protection Commission (DPC), Facebook made veiled threats that if the company is forced to stop transferring personal data to the United States, it may stop operating in the European Union altogether. Recently, the DPC informed Facebook that because Privacy Shield was struck down, it would need to stop transfers even though the company has been using standard contractual clauses, another method permitted in some cases under the General Data Protection Regulation. Despite Facebook’s representation, it seems a bit much that the company would leave the EU to any competitors looking to fill its shoes.
  • “As U.S. Increases Pressure, Iran Adheres to Toned-Down Approach” By Julian E. Barnes, David E. Sanger, Ronen Bergman and Lara Jakes — The New York Times. The Islamic Republic of Iran is showing remarkable restraint in its interactions with the United States in the face of continued, punitive actions against Tehran. And this is true also of its cyber operations. The country has made the calculus that any response could be used by President Donald Trump to great effect in closing the gap against front-runner former Vice President Joe Biden. The same has been true of its cyber operations against Israel, which has reportedly conducted extensive operations inside Iran with considerable damage.


Another Federal Privacy Bill

Senate Commerce Republicans revise and release privacy bill that does not budge on main issues setting them apart from their Democratic colleagues.

Last week, in advance of tomorrow’s hearing on privacy legislation, the chair and key Republicans released a revised version of draft legislation released last year to mark their position on what United States (U.S.) federal privacy regulation should be. Notably, last year’s draft and the updated version would still preempt state laws like the “California Consumer Privacy Act” (CCPA) (AB 375), and people in the U.S. would not be given the right to sue entities that violate the privacy law. These two issues continue to be the main battle lines between Democratic and Republican bills to establish a U.S. privacy law. Given the rapidly dwindling days left in the 116th Congress and the possibility of a Democratic White House and Senate next year, this may be both a last gasp effort to get a bill out of the Senate and to lay down a marker for next year.

The “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) was introduced by Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Senate Majority Whip and Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), Transportation and Safety Subcommittee Chair Deb Fischer (R-NE), and Senator Marsha Blackburn (R-TN). However, a notable Republican stakeholder is not a cosponsor: Consumer Protection Subcommittee Chair Jerry Moran (R-KS), who introduced his own bill, the “Consumer Data Privacy and Security Act of 2020” (S.3456) (See here for analysis).

As mentioned, Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (See here for analysis), in November 2019 shortly after the Ranking Member on the committee, Senator Maria Cantwell (D-WA), and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act” (COPRA) (S.2968) (See here for more analysis). Here’s how I summarized the differences at the time: in the main, CDPA shares the same framework with COPRA with some key, significant differences, including:

  • COPRA expands the FTC’s jurisdiction in policing privacy harms whereas CDPA would not
  • COPRA places a duty of loyalty on covered entities to people whose covered data they process or transfer; CDPA does not have any such duty
  • CDPA does not allow people to sue if covered entities violate the new federal privacy and security regime; COPRA would allow such suits to move forward
  • CDPA preempts state privacy and data security laws; COPRA would establish a federal floor that states like California would be able to legislate on top of
  • CDPA would take effect two years after enactment, and COPRA would take effect six months after enactment.
  • The bar against a person waiving her privacy rights under COPRA is much broader than CDPA
  • COPRA would empower the FTC to punish discriminatory data processing and transfers; CDPA would require the FTC to refer these offenses to the appropriate federal and state agencies
  • CDPA revives a concept from the Obama Administration’s 2015 data privacy bill by allowing organizations and entities to create standards or codes of conduct and allowing those entities in compliance with these standards or codes to be deemed in compliance with CDPA subject to FTC oversight; COPRA does not have any such provision
  • COPRA would require covered entities to conduct algorithmic decision-making impact assessments to determine if these processes are fair, accurate, and unbiased; no such requirement is found in CDPA

As a threshold matter, the SAFE DATA Act is the latest in a line of enhanced notice and consent bills founded on the logic that if people were informed and able to make choices about how and when their data are used, then the U.S. would have an ideal data and privacy ecosystem. This view, perhaps coincidentally, dovetails with Republican views on other issues where people should merely be given information and the power to choose, with any bad outcomes being the responsibility of those who made poor choices. This view runs counter to those who see privacy and data security as being akin to environmental or pollution problems, that is, beyond the ability of any one person to manage or realistically change.

Turning to the bill before us, we see that while covered entities may not outright deny services and products to people who choose to exercise the rights granted under the bill vis-à-vis their covered data, a covered entity may charge different prices. This structure would predictably lead to only those who can afford it, or are passionately committed to their privacy, being able to pay for more privacy. And yet, the rights established by the bill for people to exercise some control over their private information cannot be waived, forestalling the possibility that some covered entities would make such a waiver a term of service, like many companies do with a person’s right to sue.

Covered entities must publish privacy policies before or at the point of data collection, including:

  • The identity of the entity in charge of processing and using the covered data
  • The categories of covered data collected and the processing purposes of each category
  • Whether transfers of covered data occur, the categories of those receiving such data, and the purposes for which transfers occur
  • The entity’s data retention and data security policies generally; and
  • How individuals may exercise their rights.

Any material changes mean new privacy policies provided to people and consent again must be obtained before collection and processing may resume.

There is, however, language not seen in other privacy bills: “[w]here the ownership of an individual’s device is transferred directly from one individual to another individual, a covered entity may satisfy its obligation to disclose a privacy policy prior to or at the point of collection of covered data by making the privacy policy available under (a)(2)” (i.e. by posting on the entity’s website.) So, if I give an old phone to a friend, a covered entity may merrily continue collecting and processing data because I consented and my friend’s consent is immaterial. Admittedly, this would seem to be a subset of all devices used in the U.S., but it does not seem to be a stretch for covered entities to need to obtain consent if they determine a different person has taken over a device. After all, they will almost certainly be able to discern the identity of the new user and that the device is now being used by someone new.

Section 103 of the SAFE DATA Act establishes a U.S. resident’s rights to access, correct, delete, and port covered data. People would be able to access their covered data and correct “material” inaccuracies or incomplete information at least twice a year at no cost provided the covered entity can verify their identity. Included with the right to access would be provision of the categories of third parties to whom covered data has been transferred and a list of the categories of purposes. There is a long list of reasons why covered entities would not need to comply, including but not limited to:

  • If the covered entity must “retain any covered data for the sole purpose of fulfilling the request;”
  • If it would “be impossible or demonstrably impracticable to comply with;”
  • If a request would “require the covered entity to combine, relink, or otherwise reidentify covered data that has been deidentified;”
  • If it would “result in the release of trade secrets, or other proprietary or confidential data or business practices;”
  • If it would “interfere with law enforcement, judicial proceedings, investigations, or reasonable efforts to guard against, detect, or investigate malicious or unlawful activity, or enforce contracts;”
  • If it would “require disproportionate effort, taking into consideration available technology, or would not be reasonably feasible on technical grounds;”
  • If it would “compromise the privacy, security, or other rights of the covered data of another individual;”
  • If it would “be excessive or abusive to another individual;” or
  • If it would “violate Federal or State law or the rights and freedoms of another individual, including under the Constitution of the United States.”

This extensive list will give companies not interested in complying plenty of reasons to proffer as to why they will not provide access or corrections. Nonetheless, the FTC would need to draft and implement regulations “establishing requirements for covered entities with respect to the verification of requests to exercise rights” to access and correct. Perhaps the agency will be able to address some foreseeable problems with the statute as written.

Explicit consent is needed before a covered entity may transfer or process the “sensitive covered data” of a person. The first gloss on this right is that a person’s consent is not needed to collect, process, and transfer the “covered data” of a person. Elsewhere in the section, it is clear that one has a limited opt out right: “a covered entity shall provide an individual with the ability to opt out of the collection, processing, or transfer of such individual’s covered data before such collection, processing, or transfer occurs.”

Nonetheless, a bit of a detour back into the definitions section of the bill is in order to understand which types of data lie on which side of the consent line. “Covered data” are “information that identifies or is linked or reasonably linkable to an individual or a device that is linked or reasonably linkable to an individual” except for publicly available data, employment data, aggregated data, and de-identified data. Parenthetically, I would note the latter two exceptions would seem to be incentives for companies to hold personal information in the aggregate or in a de-identified state as much as possible so as to avoid triggering the requirements of the SAFE DATA Act.

“Sensitive covered data” would be any of the following (and, my apologies, the list is long):

  • A unique, government-issued identifier, such as a Social Security number, passport number, or driver’s license number, that is not required to be displayed to the public.
  • Any covered data that describes or reveals the diagnosis or treatment of the past, present, or future physical health, mental health, or disability of an individual.
  • A financial account number, debit card number, credit card number, or any required security or access code, password, or credentials allowing access to any such account.
  • Covered data that is biometric information.
  • A persistent identifier.
  • Precise geolocation information (defined elsewhere as anything within 1,750 feet).
  • The contents of an individual’s private communications, such as emails, texts, direct messages, or mail, or the identity of the parties subject to such communications, unless the covered entity is the intended recipient of the communication (meaning metadata is fair game, and metadata can be incredibly valuable; just ask the National Security Agency).
  • Account log-in credentials such as a user name or email address, in combination with a password or security question and answer that would permit access to an online account.
  • Covered data revealing an individual’s racial or ethnic origin, or religion in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (Of course, this sort of qualifying language always makes me wonder: according to whose definition of “reasonable expectation”?)
  • Covered data revealing the sexual orientation or sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (See the previous clause)
  • Covered data about the online activities of an individual that addresses or reveals a category of covered data described in another subparagraph of this paragraph. (I suppose this is intended as a backstop against covered entities trying to backdoor their way into using sensitive covered data by claiming it is covered data from online activities.)
  • Covered data that is calendar information, address book information, phone or text logs, photos, or videos maintained for private use on an individual’s device.
  • Any covered data collected or processed by a covered entity for the purpose of identifying covered data described in another paragraph of this paragraph. (again, this seems aimed at plugging a possible loophole in that ordinary covered data can probably be processed or combined with other covered data to arrive at some categories of “sensitive covered data.”)
  • Any other category of covered data designated by the Commission pursuant to a rulemaking under section 553 of title 5, United States Code (meaning the FTC can use normal rulemaking authority, rather than the shackles of the Magnuson-Moss rulemaking procedures, to expand this definition as needed).

So, we have a subset of covered data that would be subject to consent requirements, including notice with a “clear description of the processing purpose for which the sensitive covered data will be processed;” that “clearly identif[ies] any processing purpose that is necessary to fulfill a request made by the individual;” that “include[s] a prominent heading that would enable a reasonable individual to easily identify the processing purpose for which consent is sought;” and that “clearly explain[s] the individual’s right to provide or withhold consent.”
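
A minimal sketch of the consent request these requirements contemplate might look like the following; the structure and wording are purely illustrative, as the bill prescribes content, not format.

    SENSITIVE_DATA_CONSENT_REQUEST = {
        # A prominent heading identifying the processing purpose for which consent is sought.
        "heading": "We want to process your health data to personalize recommendations",
        # A clear description of the processing purpose.
        "processing_purpose": "Tailoring wellness content to conditions you tell us about",
        # Purposes necessary to fulfill a request the individual made, identified separately.
        "purposes_necessary_to_fulfill_request": ["Syncing records you asked us to import"],
        # A clear explanation of the right to provide or withhold consent.
        "rights_notice": "You may provide or withhold consent to this processing.",
    }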

Finally, the FTC may, but is not required, “to establish requirements for covered entities regarding clear and conspicuous procedures for allowing individuals to provide or withdraw affirmative express consent for the collection of sensitive covered data.” If the agency chooses to do so, it may use the normal notice and comment procedures virtually every other U.S. agency enjoys.

Covered entities must minimize collection, processing, and retention of covered data to “what is reasonably necessary, proportionate, and limited” except if permitted elsewhere in the SAFE DATA Act or another federal statute. Interestingly, the FTC would not be tasked with conducting a rulemaking but would instead need to issue guidelines with best practices on how covered entities would undertake such minimization.

Service providers must follow the direction of the covered entity with whom they are working and delete or deidentify data after they have finished work on it. Third parties are limited to processing covered data only for purposes consistent with the reasonable expectations of the individual to whom the data belong, yet they need not obtain consent to process sensitive covered data or covered data. Covered entities, for their part, must perform due diligence to ensure that service providers and third parties will comply with the requirements particular to these two classes of entities. However, there is no obligation beyond due diligence and no suggestion of liability for the misdeeds and violations of service providers and third parties.

Large data holders would need to conduct periodic privacy impact analyses with an eye toward helping these entities improve their privacy policies. This class of covered entities comprises those that have processed or transferred the covered data of 8 million or more people in a given year or the sensitive covered data of 300,000 people.
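
Expressed as a simple test, the classification works out as in this sketch; the thresholds come from the bill, while the function and its name are mine.

    def is_large_data_holder(people_covered_data: int, people_sensitive_data: int) -> bool:
        # A covered entity is a "large data holder" if, in a given year, it processed
        # or transferred the covered data of 8 million or more people or the
        # sensitive covered data of 300,000 or more people.
        return people_covered_data >= 8_000_000 or people_sensitive_data >= 300_000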

The SAFE DATA Act would generally allow covered entities to collect, process, and transfer the covered data of people without their consent so long as these activities are reasonably necessary, proportionate and limited to the following purposes:

  • To initiate or complete a transaction or to fulfill an order or provide a service specifically requested by an individual, including associated routine administrative activities such as billing, shipping, financial reporting, and accounting.
  • To perform internal system maintenance, diagnostics, product or service management, inventory management, and network management.
  • To prevent, detect, or respond to a security incident or trespassing, provide a secure environment, or maintain the safety and security of a product, service, or individual.
  • To protect against malicious, deceptive, fraudulent, or illegal activity.
  • To comply with a legal obligation or the establishment, exercise, analysis, or defense of legal claims or rights, or as required or specifically authorized by law.
  • To comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by an Executive agency.
  • To cooperate with an Executive agency or a law enforcement official acting under the authority of an Executive or State agency concerning conduct or activity that the Executive agency or law enforcement official reasonably and in good faith believes may violate Federal, State, or local law, or pose a threat to public safety or national security.
  • To address risks to the safety of an individual or group of individuals, or to ensure customer safety, including by authenticating individuals in order to provide access to large venues open to the public.
  • To effectuate a product recall pursuant to Federal or State law.

People would not be able to opt out of the collection, processing, and transfer of covered data for these purposes. As mentioned earlier, U.S. residents would receive a limited right to opt out, and it is in Section 108 that one learns the things a person cannot opt out of. I suppose it should go without saying that covered entities will interpret these terms as broadly as they can in order to forestall people from opting out. The performance of “internal system maintenance, diagnostics, product or service management, inventory management, and network management” seems like a potentially elastic category that could be asserted to give cover to some covered entities.

Speaking of exceptions, small businesses would not need to heed the rights of individuals regarding their covered data, would not need to minimize their collection, processing, and transfer of covered data, and would not need to have data privacy and security officers. These are defined as entities with gross annual revenues below $50 million that have processed the covered data of fewer than 1 million people, have fewer than 500 employees, and earn less than 50% of their revenue from transferring covered data. On its face, this seems like a very generous definition of what shall be a small business.
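
The four-pronged definition reduces to a conjunctive test like this sketch; the thresholds are from the bill, and everything else is illustrative.

    def is_exempt_small_business(
        gross_annual_revenue: float,
        people_whose_data_processed: int,
        employee_count: int,
        share_of_revenue_from_data_transfers: float,  # expressed as 0.0 to 1.0
    ) -> bool:
        # All four prongs must be satisfied to qualify for the small business exemption.
        return (
            gross_annual_revenue < 50_000_000
            and people_whose_data_processed < 1_000_000
            and employee_count < 500
            and share_of_revenue_from_data_transfers < 0.50
        )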

The FTC would not be able to police processing and transferring of covered data that violates discrimination laws. Instead, the agency would need to transfer these matters to agencies of jurisdiction. The FTC would be required to use its 6(b) authority to “examin[e] the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws” and then publish a report on its findings along with guidance on how covered entities can avoid violating discrimination laws.

Moreover, the National Institute of Standards and Technology (NIST) must “develop and publish a definition of ‘digital content forgery’ and accompanying explanatory materials.” One year afterwards, the FTC must “publish a report regarding the impact of digital content forgeries on individuals and competition.”

Data brokers would need to register with the FTC, which would then publish a registry of data brokers on its website.

There would be additional duties placed on covered entities. For example, these entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of covered data.” However, financial services companies subject to and in compliance with Gramm-Leach-Bliley regulations would be deemed to be in compliance with these data security obligations. The same would be true of entities subject to and in compliance with the “Health Insurance Portability and Accountability Act” and “Health Information Technology for Economic and Clinical Health Act.” Additionally, the FTC may “issue regulations to identify processes for receiving and assessing information regarding vulnerabilities to covered data that are reported to the covered entity.”

The SAFE DATA Act has language new to federal privacy bills on “opaque algorithms.” Specifically, covered internet platforms would not be able to use opaque algorithms unless notice is provided to users and an input-transparent algorithm version is available to users. The term ‘‘covered internet platform’’ is broad and encompasses “any public-facing website, internet application, or mobile application, including a social network site, video sharing service, search engine, or content aggregation service.” An “opaque algorithm” is “an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.”
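
The distinction the bill draws can be illustrated with a toy ranking example; the data model and function names are mine, and real systems are vastly more complex than this sketch.

    def opaque_rank(posts, inferred_interest_scores):
        # "Opaque": the ordering turns on user-specific data (inferred interest
        # profiles) the user never expressly provided for ranking purposes.
        return sorted(posts, key=lambda p: inferred_interest_scores[p["id"]], reverse=True)

    def input_transparent_rank(posts):
        # "Input-transparent": the ordering turns only on data the user expressly
        # provided, here the simple recency of the posts themselves.
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)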

The bill makes it an unfair and deceptive practice for “large online operator[s]” “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.”

A covered entity must have

  • 1 or more qualified employees or contractors as data privacy officers; and
  • 1 or more qualified employees or contractors…as data security officers.

Moreover, “[a] covered entity shall maintain internal controls and reporting structures to ensure that appropriate senior management officials of the covered entity are involved in assessing risks and making decisions that implicate compliance with this Act.”

There are also provisions protecting whistleblowers inside covered entities that “voluntarily provide[] [“original information”] to the [FTC]…relating to non-compliance with, or any violation or alleged violation of, this Act or any regulation promulgated under this Act.”

Like virtually all the other bills, the FTC would be able to levy civil fines of more than $42,000 per violation, and state attorneys general would also be able to enforce the new privacy regime. However, the FTC would be able to intervene and take over the action if it chose, and if two or more state attorneys general are bringing cases regarding the same violations, then the cases would be consolidated and heard in the federal court in the District of Columbia. The FTC would also get jurisdiction over common carriers and non-profits for purposes of enforcing the SAFE DATA Act.

And then there is new language in the SAFE DATA Act that seems aimed at addressing a pair of cases before the Supreme Court on the extent of the FTC’s power to seek and obtain certain monetary damages and equitable relief. The FTC has appealed an adverse ruling from the U.S. Court of Appeals for the Seventh Circuit while the other case is coming from the U.S. Court of Appeals for the Ninth Circuit.

Like the forerunner bill released last November, the FTC would be empowered to “approve voluntary consensus standards or certification programs that covered entities may use to comply with 1 or more provisions in this Act.” These provisions came from an Obama Administration privacy bill allowing for the development and usage of voluntary consensus-based standards for covered entities to comply with in lieu of the provisions of that bill.

The SAFE DATA Act would not impinge on existing federal privacy laws but would preempt all privacy laws at the state level. Ironically, the bill would not preempt state data breach notification laws; one would think that if uniformity across the U.S. were a driving motivation, doing so would be desirable.


CPRA From Another View

Let’s see how the CPRA would work from the view of a Californian.

Of course, I analyzed California Ballot Proposition 24, the “California Privacy Rights Act,” at some length in a recent issue, but I think taking on the proposed rewrite of the “California Consumer Privacy Act” (AB 375) from a different angle may provide value in understanding what this law would and would not do. In this piece, I want to provide a sense of what the California resident would be presented with under the new privacy statute.

As noted in my article the other day, as under the CCPA, the CPRA would still not allow people to deny businesses the right to collect and process their personal information unlike some of the bills pending in Congress. Californians could stop the sale or sharing of personal information, but not the collection and processing of personal data short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed subject to certain limitations and exceptions businesses are sure to read as broadly as possible.

So, businesses subject to the CPRA will have to inform people at the point of collection of “the categories of personal information to be collected and the purposes for which the categories of personal information are collected or used and whether such information is sold or shared.” Easy enough, as far as this goes. If I live in Sacramento and log into Facebook, there should be notice about the categories of personal information collected (e.g., data such as IP address, physical address, name, geolocation data, browsing history, etc.). As a citizen of California afforded privacy rights by the CPRA, I would not be able to tell Facebook not to collect and process these sorts of data. I would be able to ask that it delete these data and stop selling or sharing them, subject to significant limitations on these rights. Therefore, there is a baseline assumption in the CPRA, as in the CCPA: either that data collection and processing are a net good for California, its people, and its businesses, or a concession that it is too late to stop such practices, for a strong law stopping some of these practices would lead these companies, some of which are headquartered in the state, to stop offering their free services and/or leave the state.

In the same notice described in the preceding paragraph, I would also be told whether Facebook sells or shares my personal information. I would also be alerted as to whether “sensitive personal information” is being collected and if these are being sold or shared.

Of course, with both categories of information collected from people in California, the use of the data must be compatible with the disclosed purpose for collection. And so, presumably, the notice provided to me would include the why of the data collection, but whatever the purpose, so long as it is disclosed to me, it would generally be legal under the CPRA. The only limitation seems to be purposes incompatible with the context in which the personal information was collected.

My personal data could not be stored by a business indefinitely, for the law limits storage for each disclosed purpose to the time necessary to undertake and complete that purpose.

It must also be stressed that Californians will all but certainly be presented with notice in the physical world when they shop in larger grocery store chains, national or large regional retailers, airlines, car rental firms, etc. In the case of hotels, car rental firms, and airlines, just to name three categories of businesses likely to be covered by the CPRA and to be collecting data on people, the notice may be appended to the boilerplate contractual language no one I know reads. It may be written in the clearest language imaginable, but a person must be advised of what data are being collected, the purpose of the collection and use, and whether the data are being sold and shared. For the privacy purist, the only way not to have one’s information collected would be to not engage with these companies. Likewise, walking into a retail establishment large enough to qualify as a business under the CPRA may entail seeing notice posted somewhere in the store, possibly alongside signs indicating customers are under camera surveillance, that personal information is being collected.

I would be able to immediately ask the business to delete my personal information, but it would be allowed to keep this on hand during the period it is completing a transaction or providing goods or services. But there is language that may be interpreted broadly by a business to keep my personal information such as an exception to conduct a product recall or to anticipate future transactions as part of our ongoing business relationship. I would expect this to be very broadly read in favor of keeping personal data. Nonetheless, if it is a service or product used frequently, say, Amazon, then I would need to go back after every use and request my personal information be deleted. But if I placed four Amazon orders a month, the platform could reasonably deny my request because it is occurring in the course of an ongoing business transaction. There are other possible grounds on which a business might not delete a person’s personal or sensitive personal information such as ensuring the security and integrity of the service and product with the caveat that my personal information would have to somehow be “reasonably necessary and proportionate for those purposes.” Would the business make this determination? Subject to guidance or regulations?

However, the exception to the right to delete that is nearly opaque is “[t]o enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the information.” It is not clear to me the sort of “internal uses” this encapsulates. Data processing so the business can better target the person? This provision is drafted so broadly the new privacy agency must explicate it so businesses and Californians understand what this entails. Also, keep in mind, if I lived in California, I would have to repeat these deletion requests for each and every business collecting information on me.

I would be able to correct my personal information with a business, but only through its “commercially reasonable efforts,” suggesting cases in which correction is difficult would allow businesses to decline my request. For anyone who has ever tried to correct one’s personal information with a company, the frustration attendant on such endeavors can be significant. A major American automaker switched two letters in my wife’s last name, and no matter how many times we asked that her name be spelled correctly, this massive corporation could not or would not make the change. This may end up as a right that is largely without effect.

I would be able to ask for and receive my personal information, after a fashion. For example, I would be able to ask for and obtain the exact personal information the business has collected itself, but only the categories of information obtained through means other than direct collection (i.e., data brokers and other businesses). To make this section even more convoluted, I would also receive the categories of personal information the business has directly collected on me. Moreover, I could learn the commercial or business purposes for collection and processing and the third parties with whom my personal information is sold or shared. However, if a business includes all this and other information on its website as part of its privacy policy, it would only have to send me the specific pieces of personal information it has collected directly from me. Whatever the case, I would generally only be able to receive information from the previous 12 months.

Separately from the aforementioned rights, I could also learn to whom a business is selling, sharing, and disclosing my information. However, if we drew a Venn diagram between this right and the previous one, the most significant right bestowed by this section of the CPRA would be that of learning “[t]he categories of personal information that the business disclosed about the consumer for a business purpose and the categories of persons to whom it was disclosed for a business purpose.”

The CPRA would provide me the right to opt out of a business selling or sharing my personal information, and businesses would need to alert people of this right. If I were between the ages of 13 and 16, I would need to opt in to the selling or sharing of my personal information. Moreover, for my children under the age of 13, I, or my wife, would need to opt in for their personal information to be sold or shared.

I would also be able to limit the use and disclosure of my sensitive personal information, to an uncertain extent. The CPRA makes clear this is not an absolute right, and businesses would be able to use a number of exceptions to continue using this class of information. For example, a business would be able to do so “to ensure security and integrity to the extent the use of the consumer’s personal information is reasonably necessary and proportionate for these purposes.” Likewise, a business could use sensitive personal information for “[s]hort-term, transient use, including but not limited to non-personalized advertising shown as part of a consumer’s current interaction with the business.” There are other exceptions, and the new California state agency established by the CPRA would be able to promulgate regulations to further define those situations in which use and disclosure may continue against my wishes.

Otherwise, a business would be unable to use or disclose my sensitive personal information once I elect to stop the practice. However, this right pertains only to the use of this type of information to infer characteristics about me, and it remains subject to the drafting of regulations.

I would not be discriminated against for exercising any of the rights the CPRA grants me, with a significant catch on which I’ll say more in a moment. This right would stop businesses from denying me goods or services, charging me a different price, or providing a different level of service or quality. And yet, a business would be able to charge me a different price or rate, or give me a lesser level of service or product, “if that difference is reasonably related to the value provided to the business by the consumer’s data.” This strikes me as a situation where the exception will eat the rule. Any business with any level of resources will claim that the value of my personal information is vital to providing me a service or product for free; if I deny it the use of this information, the value proposition has changed, and I must either be charged to keep the same level of service or accept a lesser level of service or product without payment. It is my guess that this right would be functionally null.

Moreover, this section is tied to loyalty and reward programs, which would also be exempt from this right so long as the case could be made that the value of my data justifies the difference in price or service. It is not hard to see the incentive structure here being such that businesses would likely establish new programs in order to pressure people in California not to exercise their rights under the CPRA and to continue using their personal information in the current fashion. Of course, there is this provision: “[a] business shall not use financial incentive practices that are unjust, unreasonable, coercive, or usurious in nature.” But where exactly is the line between a business offering a rewards or loyalty program purportedly tied to the value of the data it collects and processes and these sorts of practices? It may be very hard to divine and will likely require a case-by-case process to delineate the legal from the illegal.

I would generally have two ways to exercise the rights I would be given under the CPRA, unless the business only operates online, in which case it would be by email. The business would have 45 days after verifying my request to provide my personal information or to make the requested correction or deletion, and this would need to be free of charge. However, this 45-day period may be extended once so long as the business informs me; it would seem 90 days would become the de facto norm. A business may also be able to demand “authentication of the consumer that is reasonable in light of the nature of the personal information requested.” The intent is obviously for a business to be sure someone is not maliciously or mischievously trying to change someone else’s information in what may come to be an extension of doxing or other vexatious practices seen elsewhere in the online realm. However, this may also be read liberally by some businesses as a means of throwing up another barrier in the way of my exercising these rights.
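
The response timeline works out as in this small sketch; the 45-day figures are from the statute, while the function, its name, and the example date are illustrative.

    from datetime import date, timedelta

    def cpra_response_deadline(verified_on: date, extended: bool = False) -> date:
        # 45 days from verification of the request, extendable once by another
        # 45 days so long as the business informs the requester.
        return verified_on + timedelta(days=90 if extended else 45)

    # e.g., a request verified on 1 March 2023 and extended once would be
    # due by 30 May 2023.
    print(cpra_response_deadline(date(2023, 3, 1), extended=True))  # 2023-05-30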

I would be wise as a California resident to understand some of the global limitations of the rights bestowed by the CPRA. For example, all bets are off with respect to a business’ compliance “with federal, state, or local laws or…with a court order or subpoena to provide information.” A business would be within its legal rights to comply, my wishes be damned. Moreover, law enforcement agencies can direct businesses not to delete my personal information for up to 90 days while a proper court order is obtained. Likely as an incentive for businesses, deidentified personal information is not subject to the obligations placed on businesses, and the same is true of “aggregate consumer information.” Obviously, a business would ideally use the safe harbor of deidentification where possible in order to render stolen data less desirable and valuable to thieves. Of course, at least one study has shown that deidentified data can be used to identify and link to people fairly easily, and another stated “numerous supposedly anonymous datasets have recently been released and re-identified.” This may be less safe a harbor for my personal information than the drafters of the CPRA intend.

It also bears mention that some publicly available information shall not be considered personal information under the CPRA. The catch here is that not all of my personal information in the public sphere meets the terms of this exception, for new language in the CPRA modifying the CCPA definition makes clear the information has to be “lawfully obtained,” “truthful,” and “a matter of public concern.” Additionally, businesses would be barred from using personal information made widely available that is probably not being disclosed lawfully (e.g., someone plastering my home address on social media). And yet, the California Department of Motor Vehicles (DMV) has been selling the personal information of people to private investigators, bail bondsmen, and others, a legally sanctioned activity, but allowing this practice to funnel the personal information of Californians to businesses and data brokers would arguably not be a matter of public concern. Therefore, this exception may be written tightly enough to anticipate and forestall likely abuses.

Like the CCPA, the CPRA does not bear on the use of my personal information in areas already regulated, often by the federal government, such as health information or credit information. Any rights I would have with respect to these realms would remain unaffected by the CPRA.

I would receive protection in the event of specified types of data breaches: namely, if my personal information were neither encrypted nor redacted, the CPRA’s breach provisions come into play. Under the CCPA, if my personal information were not encrypted but was redacted and stolen, a breach would occur, and the same was true if it were not redacted but encrypted. So, this seems to be a weakening of the trigger that would allow me to sue if my personal information were subject to unauthorized exfiltration or access, theft, or disclosure. Additionally, if my “email address in combination with a password or security question and answer that would permit access to the account” were exposed or stolen, I could also sue. Moreover, any unauthorized stealing, accessing, disclosing, or exposure of my personal information must be due to a “business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information” before an actionable breach could occur.

Once a breach has occurred, however, I could sue for between $100 and $750 per incident plus actual damages, but only after giving a business 30 days to cure the breach if a cure is possible. If there are no tangible monetary damages, as is often the case in breaches, then I would be left to weigh suing to recover the statutory damages. For one breach or a handful of breaches, it may not be worth the time and effort it takes to litigate, meaning these are likely the circumstances in which class actions will thrive.
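
Reading the breach trigger and the statutory damages together, the mechanics sketch out roughly as follows; this is my reading of the provisions above, not statutory text, and the names are mine.

    def cpra_breach_actionable(encrypted: bool, redacted: bool,
                               reasonable_security_maintained: bool) -> bool:
        # Under the CPRA as read above, the information must be neither encrypted
        # nor redacted, and the exposure must stem from the business's failure to
        # maintain reasonable security procedures and practices.
        return not encrypted and not redacted and not reasonable_security_maintained

    def statutory_damages_range(incidents: int) -> tuple:
        # $100 to $750 per incident, plus any actual damages, per the text above.
        return (100 * incidents, 750 * incidents)

    # e.g., statutory_damages_range(2) -> (200, 1500): sums small enough that
    # individual suits rarely pay, which is why class actions are the likely vehicle.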

Alternatively, the California Privacy Protection Agency would be empowered to bring actions against businesses that violate the CPRA, but the bill is silent on whether I would be made whole if I did not sue and the agency recovered money from the business.

Finally, there are provisions that contemplate technological means for people to make their preferences under the CPRA known to many businesses at the same time or with minimal repetitive effort. I suppose this envisions someone designing an app that would do the hard work for you, and the language seems designed to seed the ground in California for developers to create and offer CPRA-compliant products. Likewise, one could designate a person to undertake this work, which also suggests a market opportunity for an entity that can make the economics of such a business model work. In any event, I would likely be charged for using a service like either of these, leading one to the uncomfortable conclusion that these provisions may drive a greater bifurcation in the world of technology between the haves and have-nots.
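
One plausible implementation of such a preference mechanism already exists in the Global Privacy Control, a browser-level signal sent as the “Sec-GPC” header; the CPRA does not name any particular mechanism, so treating that signal as a valid CPRA opt-out is an assumption on my part, sketched below.

    def global_opt_out_requested(request_headers: dict) -> bool:
        # The Global Privacy Control convention sends "Sec-GPC: 1" with each
        # request; a business honoring it would suppress the sale and sharing of
        # the visitor's personal information without a per-site request.
        return request_headers.get("Sec-GPC", "0") == "1"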


CPRA Analyzed

The CCPA follow-on bill on the ballot in California would significantly change how the state regulates privacy, setting the de facto standard for the U.S. in the absence of federal legislation.

With the “California Privacy Rights Act” (CPRA) having been successfully added to the ballot on which Californians will vote in November, it is worth taking a deeper look at the bill. It would replace the “California Consumer Privacy Act” (CCPA) (AB 375), which just came into full effect with the publishing of final regulations on 14 August. Even as the Office of the Attorney General was drafting those regulations, the organization that pushed for passage of the CCPA, Californians for Consumer Privacy (CCP), completed the drafting of a follow-on bill. CCP Chair and Founder Alastair Mactaggart explained his reasoning for a second ballot initiative: “[f]irst, some of the world’s largest companies have actively and explicitly prioritized weakening the CCPA…[and] [s]econd, technological tools have evolved in ways that exploit a consumer’s data with potentially dangerous consequences.” Moreover, if polling released earlier this month by CCP is close to accurate, then an overwhelming majority of Californians support enactment of the CPRA, meaning a significantly new privacy scheme will come into effect in the next few years in California.

Of course, it may be fair to assert this bill looks to solve a number of problems created by the rush in June 2018 to draft a bill all parties could accept in order to get the CCPA initiative removed from the ballot. Consequently, the CCPA package that was enacted was sloppily drafted in places, with inconsistent provisions that necessitated two rounds of legislation to fix or clarify the CCPA.

As under the CCPA, the CPRA would still not allow people to deny businesses the right to collect and process their personal information unlike some of the bills pending in Congress. Californians could stop the sale or sharing of personal information, but not the collection and processing of personal data short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed subject to certain limitations and exceptions businesses are sure to read as broadly as possible. Additionally, a new agency would be created to police and enforce privacy rights, but legitimate questions may be posed about its level of resources. Nonetheless, the new statute would come into effect on 1 January 2023, leaving the CCPA as the law of California in the short term, and then requiring businesses and people to adjust to the new regime.

In the findings section, CCP explicitly cites the bills introduced to weaken or roll back the CCPA as part of the reason why the CPRA should be enacted. Changes to the California Code made by ballot initiative are much harder to change or modify than statutes enacted through the legislative route. Notably, the CPRA would limit future amendments to only those in furtherance of the act, which would rule out attempts to weaken or dilute the new regime. Moreover, the bill looks at privacy rights through the prism of an imbalance in information and is founded on the notion that if people in California have more information and real choice in how and when their personal data are shared, processed, and collected, then the most egregious data practices will stop. Of course, this conceptual framework differs from the one used by others who view data collection and processing as being more like pollution or air quality, situations over which any one individual has limited influence, thus necessitating collective government action to address deleterious effects. In the view of the CCP, Californians will be on better footing to negotiate their privacy with companies like Facebook and Google. Notably, the CCP asserted:

  • In the same way that ingredient labels on foods help consumers shop more effectively, disclosure around data management practices will help consumers become more informed counterparties in the data economy, and promote competition. Additionally, if a consumer can tell a business not to sell his or her data, then that consumer will not have to scour a privacy policy to see whether the business is, in fact, selling that data, and the resulting savings in time is worth, in the aggregate, a tremendous amount of money.
  • Consumers should have the information and tools necessary to limit the use of their information to non-invasive, pro-privacy advertising, where their personal information is not sold to or shared with hundreds of businesses they’ve never heard of, if they choose to do so. Absent these tools, it will be virtually impossible for consumers to fully understand these contracts they are essentially entering into when they interact with various businesses.

The CPRA would change the notification requirements for businesses interested in collecting, processing, and sharing personal data in Section 1798.100 of the Civil Code (i.e., language added by the CCPA and some of the follow-on bills the legislature passed). This requirement would be binding on the companies that control collection and not just the entities doing the actual collecting, which suggests concern that the ultimate user of personal data could otherwise shield its identity from people. Worse still, the CCPA language may create an incentive to use front companies or third parties to collect personal data. Moreover, the CPRA makes clear that if a company is using another company to collect personal data it will ultimately control, it may meet its notice requirements by posting all the enumerated information prominently on its website. This may be a loophole large companies use to avoid informing people of who is controlling data collection.

New language tightens the information people must be provided as part of this notice, namely the purposes for which personal data are collected or used and whether the entity is proposing to sell or share this information. Moreover, the CPRA would mandate that the notice also include any additional purposes for which personal data are collected and used that are “incompatible with the disclosed purpose for which the personal information was collected.”

The changes to Section 1798.100 and the underlying CCPA language that would remain will apply to a new category of information created by the CPRA: “sensitive personal information.” This term is defined to mean:

  • personal information that reveals
    • a consumer’s social security, driver’s license, state identification card, or passport number;
    • a consumer’s account log-in, financial account, debit card, or credit card number in combination with any required security or access code, password, or credentials allowing access to an account;
    • a consumer’s precise geolocation;
    • a consumer’s racial or ethnic origin, religious or philosophical beliefs, or union membership;
    • the contents of a consumer’s mail, email and text messages, unless the business is the intended recipient of the communication;
    • a consumer’s genetic data; and
  • the processing of biometric information for the purpose of uniquely identifying a consumer;
  • personal information collected and analyzed concerning a consumer’s health; or
  • personal information collected and analyzed concerning a consumer’s sex life or sexual orientation.

However, should any of these data be “publicly available” as defined by the CPRA, then it is no longer subject to the heightened requirements normally due this new class of information. For example, the new notice people must be given will list the categories of sensitive personal information collected and the purposes for which such information is collected or used. Additionally, people must be told whether this subset of personal data will be shared or sold.

The CPRA would limit the collection, use, processing, and sharing of personal data to purposes “reasonably necessary and proportionate” to achieve the purpose of the information collection. Quite clearly, much will hang on what turns out to be “reasonable,” and this may be construed by the new data protection agency in regulation and ultimately by courts in litigation. However, this provision also allows the “collection, use, retention, and sharing of a consumer’s personal information…for another disclosed purpose that is compatible with the context in which the personal information was collected.” This will also need fleshing out, by regulation or litigation or both, for the provision seems to allow companies to specify another purpose for their data activities so long as it is compatible with the context of collection. And yet, it is not clear what would determine compatibility. If a person is agreeing to a grocery store chain’s data activities, might the company legally try to collect information regarding the person’s health?

This section also requires businesses to enter into contracts with third parties, service providers, and contractors to ensure they follow the CPRA and to specify that information sold or shared by the business is for limited and specific purposes.

Businesses are obligated to use “reasonable security procedures and practices appropriate to the nature of the personal information to protect the personal Information from unauthorized or illegal access, destruction, use, modification, or disclosure.” This is a familiar construct that contemplates a sliding scale of security measures with lesser steps being all right for less valuable information, say deidentified data, but with higher standards being needed for more sensitive personal data. The challenge in such a regime is that reasonable minds might theoretically disagree about reasonable measures, but it may be the purview of the caselaw construing the CPRA that will point the way to how businesses should secure information.

Section 1798.105 spells out a person’s right to delete personal information and expands the obligation of businesses to direct their service providers and contractors to delete information upon receipt of a valid request. Third parties would be notified of deletion requests and expected to delete as well unless doing so would be impossible or “involves disproportionate effort,” a term likely to be given as expansive a reading as possible. There are still numerous exceptions to deletion requests, many of which businesses reluctant to honor such requests will also likely read expansively, including exceptions allowing a business to:

  • Complete the transaction for which the personal information was collected, fulfill the terms of a written warranty or product recall conducted in accordance with federal law, provide a good or service requested by the consumer, or reasonably anticipated by the consumer within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract between the business and the consumer.
  • Help to ensure security and integrity to the extent the use of the consumer’s personal information is reasonably necessary and proportionate for those purposes.
  • Debug to identify and repair errors that impair existing intended functionality.
  • Exercise free speech, ensure the right of another consumer to exercise that consumer’s right of free speech, or exercise another right provided for by law.
  • To enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the information.
  • Comply with a legal obligation.

However, the CPRA eliminates the CCPA exception that allowed a business to deny a deletion request in order to “use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.”

The CPRA creates a new section, 1798.106, titled “Consumers’ Right to Correct Inaccurate Personal Information,” that requires businesses to correct inaccurate personal information in light of the type of information and why it is being processed. Businesses must disclose that people have this right if a person submits a verifiable request to correct inaccurate personal information. However, companies are only required to make commercially reasonable efforts to correct inaccurate personal information. It appears a rulemaking will be necessary to flesh out what constitutes commercially reasonable efforts.

Section 1798.110 is amended by the CPRA but more or less keeps the CCPA’s right to know about and access the personal information being collected about a person, with some significant changes. For example, there is an expansion of one of the categories businesses must provide to people who utilize this right: under the CCPA, requesters must currently be given the commercial or business purpose for the collection and selling of personal information. Under the CPRA, businesses would also need to inform people of the other entities with whom they share personal information, thus closing a significant loophole, for companies like Facebook share people’s information but do not sell it. Under the CCPA, a Facebook would not need to divulge to a person with which companies it is sharing one’s information.

Also, the CPRA would deem in compliance those companies that post on their websites the categories of personal information, the sources of this information, its business or commercial purposes, and the categories of third parties to whom personal information is disclosed. It seems likely many companies will go this route, meaning the only personal information they would need to furnish upon a request would be the specific pieces of information on the person making the request. And yet, the CPRA strikes the CCPA requirement that businesses keep personal information for one-time transactions or to reidentify or link to these data.

Section 1798.115 of the CCPA would also be changed by expanding the universe of data a person may request and receive regarding how their personal information is shared and sold. The CPRA keeps the basic structure of the CCPA in this regard and merely expands it to include shared as well as sold for the following:

  • The categories of personal information sold or shared and the categories of third parties to whom such information was sold or shared
  • The categories of personal information disclosed about a person for business purposes and the categories of persons to whom such information was disclosed

Third parties would be barred from selling or sharing personal information that has been sold to or shared with them unless they provide explicit notice and people have the opportunity to opt-out.

The CPRA similarly changes Section 1798.120 (aka the right to opt out of the sharing or selling of one’s personal information), keeping the CCPA’s right to opt out of sales or sharing at any time. Likewise, teenagers between the ages of 13 and 16 would need to affirmatively agree to the selling or sharing of their information, and for any child under 13, his or her parents must affirmatively agree.

A new Section 1798.121, a “Consumers’ Right to Limit Use and Disclosure of Sensitive Personal Information,” would allow people to stop businesses from collecting and using sensitive personal information in some cases. As a general matter, if a person limits collection or use of this class of information, then the business would be limited to “that use which is necessary to perform the services or provide the goods reasonably expected by an average consumer who requests such goods or services,” subject to some of the exceptions embodied in the definition of “business purpose.” Businesses may, however, provide notice of additional uses of sensitive personal information, which a person must further limit if these new uses are objectionable or unwanted.

The CPRA changes the provision barring retaliation against people who opt out of certain practices or use their CCPA rights. The general prohibition on punishing people who use their rights under the bill with different prices, services, or products would be maintained, and the CPRA would expand this protection to employees, contractors, and applicants for employment. However, the CPRA keeps the CCPA exemption for so-called loyalty programs to offer different prices or services, but only if the difference is reasonably related to the value the person’s data provides to the business. The CCPA requires the linkage to be directly related, so this change may be seen as a subtle weakening of the connection between the value of a person’s data and the rewards or prices offered through membership in a loyalty program. This will almost certainly result in businesses in California using current programs or establishing new ones to press people to share personal information in exchange for better prices or services. After all, all they would need to do is show the value of the person’s data is reasonably related to the advantages of membership. Like other similar provisions in the bill, regulation and litigation will define the parameters of what is reasonably related. Like the CCPA, the new bill would require people to opt into such programs, and should a person refuse, the business would need to wait 12 months before making the offer again.

Many of the previously discussed changes to the CCPA necessitate alterations to a key section of the statute, Section 1798.130, that details notice, disclosure, correction, and deletion requests. Businesses with physical locations must still offer two means for people to make such requests, but the CPRA would allow online businesses to merely make available an email address. Anyone who has ever tried to resolve disputes and problems via email knows this process can often be frustrating, but the new statute would allow companies like Facebook or Google to merely offer an email address.

The new 1798.130 also makes clear the 45-day window for businesses to deliver required information to people after receiving a verified request also includes making requested corrections and deletions. A potential hurdle is established for requests, however. In light of the type of information in question, a business may seek to authenticate a person’s identity before granting the request but may not force a person to create an account with the business if they do not have one. To be fair, this provision may be aimed at the mischief that could be created if a person decides to impersonate someone else and ask businesses to delete their personal information. There are likely even other such possible situations in which havoc could be wreaked by a malicious person.

In any event, the disclosure of information would need to cover the previous 12 months under the CPRA, and after new regulations are put in place, people would be able to ask for and receive information stretching back before the preceding 12 months. But such a request could be denied on the grounds of impossibility or disproportionate effort; presumably the new regulations would address when such denials are justified. Another limitation on this right is that businesses would not need to provide information collected before 1 January 2022.

If a person submits a request to a business’ contractor or service provider to learn what type of personal information has been collected, sold, or shared, that entity has no obligation to respond. And yet, these entities must assist a business that receives such requests.

The CPRA stipulates that businesses are required to provide the following types of information if a person asks for the data the entity has:

the categories of sources from which the consumer’s personal information was collected; the business or commercial purpose for collecting, or selling or sharing the consumer’s personal information; and the categories of third parties to whom the business discloses the consumer’s personal information.

A business is also obligated to provide the “specific pieces of personal information obtained from the consumer in a format that is easily understandable to the average consumer, and to the extent technically feasible, in a structured, commonly used, machine-readable format, which also may be transmitted to another entity at the consumer’s request without hindrance.”

Regarding the type of information a business must give to people who ask to know what, if any, information was sold or shared about them, a business must furnish two lists:

  • A list of the categories of personal information it has sold or shared about consumers in the preceding 12 months by reference to the enumerated category or categories in [the revised definition of personal information and new definition of sensitive personal information] that most closely describe the personal information sold or shared, or if the business has not sold or shared consumers’ personal information in the preceding 12 months, the business shall prominently disclose that fact in its privacy policy.
  • A list of the categories of personal information it has disclosed about consumers for a business purpose in the preceding 12 months by reference to the enumerated category in subdivision (c) that most closely describes the personal information disclosed, or if the business has not disclosed consumers’ personal information for a business purpose in the preceding 12 months, the business shall disclose that fact.

The categories of personal information a business must provide are “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following if it identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household:

(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol address, email address, account name, social security number, driver's license number, passport number, or other similar identifiers.

(B) Any personal information described in subdivision (e) of Section 1798.80.

(C) Characteristics of protected classifications under California or federal law.

(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.

(E) Biometric information.

(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet website, application, or advertisement.

(G) Geolocation data.

(H) Audio, electronic, visual, thermal, olfactory, or similar information.

(I) Professional or employment-related information.

(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).

(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer's preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.

The CPRA modifies the CCPA's standards on the links a business must place on its website to let people opt out of the sale or sharing of personal information. It also adds a requirement for a link letting people opt out of the use or disclosure of sensitive personal information. A business would now be allowed to have one link for both if it wants, and it would also be allowed to remind people of the advantages of being a member of the business' loyalty program and of any charges or fees associated with not joining. This provision would seem to allow some businesses, at least those that can show a reasonable relationship between the discounts provided and the value of the personal information, to pose a possibly uncomfortable dilemma: your privacy or your money. Put another way, the CPRA may well put a price on one's privacy, with those of means or those intensely dedicated to privacy able or willing to limit these practices while everyone else acquiesces in the face of higher prices or worse services and products. Additionally, companies would not need to have these links on their websites if they allow for opting out through their platform, technology, or app.

If a person opts out, companies would have to wait 12 months before asking again whether they would allow the business to sell or share their personal information or use or disclose their sensitive personal information. But one should bear in mind that even if a person opts out of the sale or sharing of personal information, a business may still collect and process it subject to other requirements in the CPRA; this right is limited to the dissemination of personal information through sales or sharing arrangements.

The CPRA revises some key definitions and introduces new ones, the most significant of which was discussed earlier: sensitive personal information. Another key change is to the criteria for businesses subject to the CPRA. Each of the three thresholds for becoming a regulated business is changed:

  • First, the language is changed to make clear a company must have earned more than $25 million in gross revenues in the preceding calendar year to qualify on the basis of income.
  • Second, the threshold for the number of people is changed. It is raised from 50,000 to 100,000, and instead of counting people and devices, the latter is stricken and households may now be counted. A household will likely include multiple devices, so counting by household generally allows for a higher threshold. Also, the counting is limited to the activities of businesses buying, selling, or sharing personal information; mere collection and processing is not counted. Consequently, if a business does not partake in any of the three enumerated activities, it would not qualify under this prong even if it collects and processes the personal information of, say, 1 million Californians.
  • Third, the threshold for businesses deriving 50% or more of their revenue from selling consumers' personal information is broadened to include sharing, meaning more entities might qualify on the basis of this prong. (A short sketch after this list shows how the three prongs combine.)
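
To make the interplay of the three prongs concrete, here is a minimal sketch in Python. The $25 million figure, the 100,000 consumers-or-households count, and the 50% revenue share come from the bill as described above; the data structure and its field names are hypothetical and purely illustrative.

```python
# Illustrative sketch of the CPRA's three applicability prongs.
# The dataclass and field names are hypothetical; the dollar figure,
# count, and percentage are drawn from the bill as described above.
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    prior_year_gross_revenue: float                  # in USD
    consumers_or_households_bought_sold_shared: int  # mere collection does not count
    share_of_revenue_from_selling_or_sharing: float  # 0.0 to 1.0

def is_regulated_business(b: BusinessProfile) -> bool:
    """Return True if any one of the three prongs is met."""
    revenue_prong = b.prior_year_gross_revenue > 25_000_000
    volume_prong = b.consumers_or_households_bought_sold_shared >= 100_000
    share_prong = b.share_of_revenue_from_selling_or_sharing >= 0.50
    return revenue_prong or volume_prong or share_prong

# A business that only collects and processes the personal information of
# 1 million Californians, but never buys, sells, or shares it, fails the
# second prong (and, in this example, the other two as well):
assert not is_regulated_business(BusinessProfile(10_000_000, 0, 0.0))
```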

Also of note, the definition of business purpose was altered, and new definitions are provided for consent, contractor, cross-context behavioral advertising, dark pattern, non-personalized advertising, and others.

The section on exemptions to the bars in the CCPA is rewritten and expanded by the CPRA. Businesses may disregard the obligations placed on them by this privacy statute under a number of circumstances; added circumstances include, for example, complying with a subpoena or court order or responding to direction by law enforcement agencies. Moreover, government agencies would be able to make emergency requests for personal information to a business if the agency acts in good faith, asserts a legal right to the information, and follows up with a court order within three days. There is also language adding contractors to the CCPA's provision on a business' liability for violations by its service providers, which requires actual knowledge of such violations.

The CPRA keeps the CCPA's grant of authority allowing people to sue for violations but quietly tightens the circumstances under which this may happen to those in which a person's personal information is neither encrypted nor redacted. Consequently, if a business uses either method of securing information, it cannot be sued.

As noted, the bill would establish a California Privacy Protection Agency that would take over enforcement of the revised CCPA from the Office of the Attorney General. It would consist of a five-member board, including a chair. The agency would assume rulemaking authority on the later of 1 July 2021 or six months after it informs the Attorney General it is ready to begin drafting rules. Before that date, however, the Attorney General may have the authority or opportunity to begin some of the CPRA rulemakings, an interregnum that may serve to complicate implementation. Nonetheless, among other powers, the new agency would be able to investigate and punish violations with fines of up to $2,500 per violation, except for intentional violations and those involving the personal information of minor children, which could be fined at a rate of $7,500 per violation. Like the Federal Trade Commission, the California Privacy Protection Agency would be able to bring administrative actions inside the agency or go to court to sue. However, this new entity would be provided only $5 million during its first year and $10 million a year thereafter, which raises the question of whether it will be able to police privacy in California in a muscular way.
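
As a rough illustration of that fine schedule, here is a minimal sketch assuming a hypothetical helper function. Only the $2,500 and $7,500 per-violation amounts come from the text above; what counts as a single "violation" would be left to the agency and the courts.

```python
# Hypothetical sketch of the administrative fine schedule described above.
# The $2,500 and $7,500 per-violation rates come from the bill; the function
# and its parameters are illustrative only.
def max_fine(violations: int, intentional: bool = False,
             involves_minors_data: bool = False) -> int:
    """Compute the maximum fine: $2,500 per violation, or $7,500 per
    violation if intentional or involving a minor's personal information."""
    rate = 7_500 if (intentional or involves_minors_data) else 2_500
    return violations * rate

# For example, 1,000 intentional violations could draw up to $7.5 million:
print(max_fine(1_000, intentional=True))  # 7500000
```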



National Privacy Legislation Stalled in U.S.

The chances for U.S. privacy legislation are worse now than they were before the pandemic. However, there may be some decision points approaching.

A few weeks into the traditional August recess, Congress is no closer to enacting federal privacy legislation than it was before the pandemic. In fact, such legislation may be further from being sent to the White House now that more pressing, more immediate matters, such as further COVID-19 relief legislation and appropriations for the next fiscal year set to start on 1 October, have eclipsed privacy. There is always the chance stakeholders will dispense with their entrenched positions during a post-election session and reach agreement on a bill, but this will depend on the election results, for if Democrats take the White House and Senate, they may well conclude they will get privacy legislation more to their liking next year.

The present impasse emanates from two issues: a private right of action for people and state preemption. Generally speaking, Democrats favor the former and oppose the latter, with Republicans holding the opposite positions. It is possible the two parties could agree on a limited right for people to sue companies for violating their privacy rights and some form of preemption of contrary state laws, perhaps along the lines of the preemption structure in the "Financial Services Modernization Act of 1999" (P.L. 106–102) (aka the Gramm–Leach–Bliley Act), which sets a uniform floor for privacy and data security above which states may regulate. However, industry stakeholders are likely resisting any such provisions, for they would still face litigation, likely in the form of class actions, and varied, differing privacy standards across the U.S.

Otherwise, there is broad agreement that people in the U.S. would be notified of the privacy practices of entities before those entities could start collecting, processing, and sharing personal data and would need to explicitly agree before this could happen. Most data collection, processing, and sharing, then, would likely be opt-in. However, people would likely get a more limited set of rights to opt out of certain practices, such as data transfers to third parties, though there is a great deal of variance among the leading bills on what people could choose to avoid. Likewise, people in the U.S. would generally be able to access, correct, and delete personal data in specified situations. Most, but not all, of the bills name the Federal Trade Commission (FTC) as the regulator of a new privacy regulatory structure with varying degrees of rulemaking power. A handful of other bills seek to create out of whole cloth a new privacy regulator along the lines of Europe's data protection authorities.

However, if the voters of California vote for the ballot initiative to enact the “California Privacy Rights Act” (CPRA), a tightening of the “California Consumer Privacy Act” (CCPA) (AB 375) that would prevent future amendments to weaken or dilute privacy protection in California, things may change in Washington. Deprived of a means of rolling back California’s new privacy regulatory structure, as many industry stakeholders tried to do in the last legislative session with the CCPA, these interests may set their sights on a national privacy bill that would ameliorate this situation. Consequently, they may pressure Republicans and Democrats in Congress to resolve the outstanding issues on federal privacy legislation.

Moreover, stakeholders in Washington are responding to what appears to be the more urgent fire: the deathblow dealt to Privacy Shield by the European Union's highest court. Without an agreement in place allowing multinationals to transfer personal data to and process it in the U.S., these entities will need to cease doing so or implement alternate means under the General Data Protection Regulation (GDPR), such as standard contractual clauses (SCC) or binding corporate rules (BCR), but even these means of transfer are not without risk. European Union (EU) data protection authorities (DPAs) may soon be reviewing these agreements to ensure they comport with the Court of Justice of the European Union's (CJEU) ruling that the U.S. lacks controls and remedies to ensure the privacy rights of EU citizens.

It bears note that another suit has been filed in the EU to test the legality of using SCCs generally to transfer data to the U.S. Austrian privacy activist Maximilian Schrems and the organization he works with, noyb–European Center for Digital Rights, have filed 101 complaints across all 30 EU and European Economic Area (EEA) member states, arguing that Google and Facebook are operating in violation of the CJEU's ruling. Specifically, the organization is claiming:

A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) – despite both companies clearly falling under US surveillance laws, such as [Section 702 of the Foreign Intelligence Surveillance Act (FISA)]. Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the “Privacy Shield” a month after it was invalidated, while Facebook continues to use the “SCCs”, despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.
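
noyb has not published its tooling, but a "quick analysis of HTML source code" along these lines is straightforward to reproduce. Below is a minimal sketch, with an assumed list of hostnames commonly associated with the two services, that flags pages referencing Google Analytics or Facebook Connect; it is illustrative only and not noyb's actual methodology.

```python
# Illustrative check of a page's HTML for Google Analytics or Facebook
# Connect references, in the spirit of noyb's "quick analysis." The
# hostnames below are assumptions, not noyb's published criteria.
import urllib.request

TRACKER_HOSTS = (
    "google-analytics.com",   # Google Analytics
    "googletagmanager.com",   # commonly used to load Analytics
    "connect.facebook.net",   # Facebook Connect / SDK
)

def find_trackers(url: str) -> list[str]:
    """Fetch a page and return the tracker hostnames found in its HTML."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return [host for host in TRACKER_HOSTS if host in html]

if __name__ == "__main__":
    print(find_trackers("https://example.com"))
```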

Consequently, even if SCCs are used more widely as a means of transferring personal data, the CJEU could find that such agreements do not comport with the GDPR for transfers to the U.S., eliminating another means used by U.S. multinationals. This could lead to more companies like Facebook and Google segregating EU data and processing it in the EU or another jurisdiction for which the European Commission has issued an adequacy decision. Or it could create pressure in Washington to reform U.S. surveillance laws and practices so that a future general data transfer agreement passes muster with the CJEU.

Still, it may serve some purpose to list the salient privacy bills and link to analysis. As mentioned, a trio of COVID-19 privacy bills were introduced a few months ago to address mainly the use of smartphones for exposure and contact tracing:

Otherwise, the major privacy bills introduced this Congress include:

