A Washington State Privacy Bill…Rises From The Dead

One of the sponsors of a privacy bill that died earlier this year has reintroduced a modified version with new language in the hopes of passing the bill next year.

Washington State Senator Reuven Carlyle (D-Seattle) has floated a new draft of privacy legislation in the hopes it will pass after forerunner bills died in the last two legislative sessions. Carlyle has made a number of changes in the “Washington Privacy Act 2021,” documented in this chart showing the differences between the new bill, the last version of the bill passed by the Washington State Senate last year, the “California Consumer Privacy Act” (CCPA) (AB 375), and the “California Privacy Rights Act” (CPRA) (aka Proposition 24) on this year’s ballot. But in the main, the bill tracks closely with the two bills produced by the Washington Senate and House last year that lawmakers could not ultimately reconcile. However, there are no provisions on facial recognition technology, which was largely responsible for sinking a privacy bill in Washington State two years ago. Carlyle has taken the unusual step of appending language covering the collection and processing of personal data to combat infectious diseases like COVID-19.

Big picture, the bill still uses the concepts of data controllers and processors most famously enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). Like other privacy bills, generally, people in Washington State would not need to consent before an entity could collect and process their information. People would be able to opt out of some activities, but most data collection and processing could still occur as it presently does.

Washingtonians would be able to access, correct, delete, and port their personal data. Moreover, people would be able to opt out of certain data processing: “for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal effects concerning a consumer or similarly significant effects concerning a consumer.” Controllers must provide at least two secure and reliable means by which people could exercise these rights and may not require the creation of a new account. Rather, a controller can require a person to use an existing account to exercise her rights.

Controllers must act on the request within 45 days and are allowed one 45-day extension “where reasonably necessary, taking into account the complexity and number of the requests.” It is not clear what would justify a 45-day extension except for numerous, complex requests, but in any event, the requester must be informed of an extension. Moreover, if a controller decides not to comply with the request, it must let the person know within 45 days of the reasons for noncompliance and how an appeal may be filed. People would be permitted two free requests a year (although nothing stops a controller from meeting additional requests for free), and controllers may charge thereafter to cover reasonable costs and to deal with repetitive requests. Controllers may also simply deny repetitive requests, and they may deny requests they cannot authenticate. In the event of the latter, a controller may ask for more information so the person can prove his identity but is not required to.
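
These deadlines and fee rules amount to a small decision procedure. Below is a minimal sketch of that procedure in Python; the function names are mine, and the assumption that the free-request allowance runs over a rolling twelve months is my simplification, not anything the bill specifies.

```python
from datetime import date, timedelta

BASE_WINDOW = timedelta(days=45)   # initial deadline to act on a request
EXTENSION = timedelta(days=45)     # one extension "where reasonably necessary"
FREE_REQUESTS = 2                  # free requests per year under the bill

def response_deadline(received: date, extended: bool = False) -> date:
    """Latest date by which a controller must act on a consumer request.

    If the extension is invoked, the requester must be informed.
    """
    deadline = received + BASE_WINDOW
    if extended:
        deadline += EXTENSION
    return deadline

def may_charge(requests_in_period: int) -> bool:
    """Controllers may charge reasonable costs after the two free requests."""
    return requests_in_period >= FREE_REQUESTS

# A request received on 1 February 2021 with the extension invoked
print(response_deadline(date(2021, 2, 1), extended=True))  # 2021-05-02
print(may_charge(3))  # True
```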

Each controller would need to establish an internal appeals process for people to use if their request to exercise a right is denied. There is a specified timeline, and, at the end of this process, if a person is unhappy with the decision, the controller must offer to turn the matter over to the Office of the Attorney General of Washington for adjudication.

Like last year’s bills, this draft makes clear the differentiated roles of controllers and processors in the data ecosystem regulated by Washington State. Processors must follow a controller’s instructions and have an obligation to help the controller comply with the act. These obligations must be set out in a contract between each controller and processor “that sets out the processing instructions to which the processor is bound, including the nature and purpose of the processing, the type of personal data subject to the processing, the duration of the processing, and the obligations and rights of both parties.” Additionally, who is a controller and who is a processor will necessarily be a fact-driven analysis, and it is possible for one entity to be both depending on the circumstances.

Notably, processors must help controllers respond to requests from people exercising their rights, secure personal data, and assist in complying with Washington State’s data breach protocol if a breach occurs. Processors must implement and use security commensurate to the personal data they are holding and processing.

Controllers are obligated to furnish privacy policies to people that must include the categories of personal data processed, the purposes for any processing, the categories of personal data shared with third parties, and the categories of third parties with whom sharing occurs. Moreover, if a controller sells personal data for targeted advertising, it has a special obligation to make people aware of this on a continuing basis, including their right to opt out if they choose. Data collection is limited to what is reasonably necessary for the disclosed purposes of the data processing. And yet, a controller may ask for and obtain consent to process for purposes beyond those reasonably necessary to effectuate the original purposes disclosed to the person. Controllers would also need to minimize the personal data they have on hand.

Controllers must “establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data…[that] shall be appropriate to the volume and nature of the personal data at issue.” Controllers would not be allowed to process personal data in a way that would violate discrimination laws. And so, controllers may not “process personal data on the basis of a consumer’s or a class of consumers’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, lawful source of income, or disability, in a manner that unlawfully discriminates against the consumer or class of consumers with respect to the offering or provision of (a) housing, (b) employment, (c) credit, (d) education, or (e) the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.” Controllers could not retaliate against people who exercise any of their rights to access, correct, delete, or port their personal data by offering differently priced or lower-quality products or services. And yet, controllers may offer different prices and services as part of a loyalty program that is voluntary for people to join and may share personal data with third parties for reasons limited to the loyalty program.

Regarding another subset of personal data, consent will be needed before processing can occur. This subset is “sensitive data,” which is defined as “(a) personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sexual orientation, or citizenship or immigration status; (b) the processing of genetic or biometric data for the purpose of uniquely identifying a natural person; (c) the personal data from a known child; or (d) specific geolocation data.”

The bill also bars a person from waiving his or her rights under any type of agreement; any such waiver would be null and void for reasons of public policy.

Controllers would not need to reidentify deidentified personal data to respond to a request from a person. However, the way this section is written gives rise to questions about the drafters’ intentions. The section would not require controllers to respond to requests from people to access, correct, delete, or port personal data if the “controller is not reasonably capable of associating the request with the personal data, or…it would be unreasonably burdensome for the controller to associate the request with the personal data” if other conditions are true as well. Given that this provision comes right after the language on reidentifying deidentified data, it seems like the aforementioned language would apply to other personal data. And so, some controllers could respond to a request by arguing they cannot associate the request with the personal data or that doing so would be unduly burdensome. Perhaps this is not what the drafters intend, but this could become a route whereby controllers deny legitimate requests.

This section of the bill also makes clear that people will not be able to exercise their rights of access, correction, deletion, or porting if the personal data are “pseudonymous data.” This term is defined as “personal data that cannot be attributed to a specific natural person without the use of additional information, provided that such additional information is kept separately and is subject to appropriate technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.” This concept would seem to encourage controllers and processors to strip identifiers from stored personal data so they need not incur the cost and time of responding to requests. It bears noting that the concept and definition appear heavily influenced by the GDPR, which provides:

‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person
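
To make the definition concrete, here is a minimal sketch of pseudonymization in Python. The keyed-hash approach and the names are my own illustration, not anything prescribed by the bill or the GDPR; the point is that the key plays the role of the “additional information” and must be kept apart from the records.

```python
import hmac
import hashlib

# The key is the "additional information": it must be stored separately
# from the pseudonymized records (e.g., in a secrets manager) so the
# records alone cannot be attributed to a specific person.
SEPARATELY_STORED_KEY = b"example-key-kept-apart-from-the-data"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash of itself."""
    return hmac.new(SEPARATELY_STORED_KEY, identifier.encode(),
                    hashlib.sha256).hexdigest()

# The stored record carries no direct identifier; reattribution
# requires the separately held key.
record = {"user": pseudonymize("jane.doe@example.com"), "zip": "98101"}
print(record)
```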

Data protection assessments will be necessary for a subset of processing activities: targeted advertising, selling personal data, processing sensitive data, any processing of personal data that presents “a heightened risk of harm to consumers” and another case that requires explanation. This last category is for those controllers who are profiling such that a reasonably foreseeable risk is presented of:

  • “Unfair or deceptive treatment of, or disparate impact on, consumers;
  • financial, physical, or reputational injury to consumers;
  • a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person; or
  • other substantial injury to consumers.”

These “data protection assessments must take into account the type of personal data to be processed by the controller, including the extent to which the personal data are sensitive data, and the context in which the personal data are to be processed.” Moreover, data protection assessments “must identify and weigh the benefits that may flow directly and indirectly from the processing to the controller, consumer, other stakeholders, and the public against the potential risks to the rights of the consumer associated with such processing, as mitigated by safeguards that can be employed by the controller to reduce such risks.” Additionally, the bill stipulates “[t]he use of deidentified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed, must be factored into this assessment by the controller.” And, crucially, controllers must provide data protection assessments to the Washington Attorney General upon request, meaning they could inform an enforcement action or investigation.

Section 110 of the “Washington Privacy Act 2021” lays out the exemptions one usually finds in privacy bills, the circumstances under which controllers and processors are not bound by the act, including but not limited to:

  • Comply with federal, state, or local laws, rules, or regulations;
  • Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities;
  • Cooperate with law enforcement agencies concerning conduct or activity that the controller or processor reasonably and in good faith believes may violate federal, state, or local laws, rules, or regulations;
  • Provide a product or service specifically requested by a consumer, perform a contract to which the consumer is a party, or take steps at the request of the consumer prior to entering into a contract;
  • Take immediate steps to protect an interest that is essential for the life of the consumer or of another natural person, and where the processing cannot be manifestly based on another legal basis;
  • Prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any illegal activity; preserve the integrity or security of systems; or investigate, report, or prosecute those responsible for any such action;

Moreover, the act does “not restrict a controller’s or processor’s ability to collect, use, or retain data to:

  • Conduct internal research solely to improve or repair products, services, or technology;
  • Identify and repair technical errors that impair existing or intended functionality; or
  • Perform solely internal operations that are reasonably aligned with the expectations of the consumer based on the consumer’s existing relationship with the controller, or are otherwise compatible with processing in furtherance of the provision of a product or service specifically requested by a consumer or the performance of a contract to which the consumer is a party.

It seems reasonable to expect controllers and processors to try to read these provisions as liberally as they can in order to escape or circumvent the obligations of the act. I do not level this claim as a criticism; rather, it is what will undoubtedly occur if a regulated entity has halfway decent legal counsel.

The bill also carries forward the legal liability structure for controllers found in last year’s bill. The act makes clear that controllers cannot be liable for a processor’s violation if “at the time of disclosing the personal data, the disclosing controller or processor did not have actual knowledge that the recipient intended to commit a violation.” Consequently, even if a reasonable person could foresee that a processor would likely violate the act, unless the controller actually knows a violation is imminent, the controller cannot be held liable. This structuring of the legal liability will likely result in controllers claiming they did not know of processors’ violations and create a disincentive for controllers to press processors to comply with the statutory and contractual requirements binding both.

The bill reiterates:

Personal data that are processed by a controller pursuant to [any of the aforementioned carveouts in Section 110] must not be processed for any purpose other than those expressly listed in this section. Personal data that are processed by a controller pursuant to this section may be processed solely to the extent that such processing is:

(i) Necessary, reasonable, and proportionate to the purposes listed in this section; and

(ii) adequate, relevant, and limited to what is necessary in relation to the specific purpose or purposes listed in this section.

Finally, controllers bear the burden of making the case that the exception being used complies with this section. This would serve to check a regulated entity’s inclination to read terms and requirements as generously as possible for them and their conduct.

The bill would not create a new right for people to sue, but if a person sues and wins on existing grounds (e.g., product liability, tort, contract law, etc.), the liability would be apportioned between the controller and processor according to their respective responsibility.

In terms of enforcement by the Attorney General, violations of this act are treated as violations of the Washington State Consumer Protection Act, and violations are considered violations of the ban on unfair and deceptive practices with civil liability as high as $7,500 per violation. However, the Attorney General must first “provide a controller thirty days’ written notice identifying the specific provisions of this title the Attorney General, on behalf of a consumer, alleges have been or are being violated.” If a cure is effected, then the Attorney General may not seek monetary damages. But if it is not, the Attorney General may take the matter to court.

The act preempts all county, city, and local data processing laws.

There is new language in the bill pertaining to public health emergencies, privacy, and contact tracing. However, the provisions are divided between two different titles with one controlling private sector entities and the other public sector entities. Incidentally, at the federal level, privacy bills have not tended to include provisions to address public health emergencies and instead standalone bills have been drafted and introduced.

In regard to private sector entities, controllers and processors would not be able to process “covered data” for a “covered purpose,” which relates to the symptoms of infectious diseases and tracking their spread, unless certain conditions are met. For example, these entities would need to make available a privacy policy, and people must consent to such processing. Additionally, controllers and processors would not be able to disclose “any covered data processed for a covered purpose” to any law enforcement agency in the U.S., sell “any covered data processed for a covered purpose,” or “[s]hare any covered data processed for a covered purpose with another controller, processor, or third party unless such sharing is governed by contract” according to the terms laid out in this section and described in the required privacy policy. However, private sector entities could disclose covered data processed for a covered purpose to federal, state, and local agencies pursuant to laws permitting such disclosure. So, this would likely be under public health or emergency laws.

This section of the bill defines “covered purpose” as

processing of covered data concerning a consumer for the purposes of detecting symptoms of an infectious disease, enabling the tracking of a consumer’s contacts with other consumers, or with specific locations to identify in an automated fashion whom consumers have come into contact with, or digitally notifying, in an automated manner, a consumer who may have become exposed to an infectious disease, or other similar purposes directly related to a state of emergency declared by the governor pursuant to RCW 43.06.010 and any restrictions imposed under the state of emergency declared by the governor pursuant to RCW 43.06.200 through 43.06.270.

There is a section that seems redundant. This provision establishes the right of a person to opt out of processing her covered data for a covered purpose, but the previous section makes clear a person’s covered data may not be processed without her consent. Nonetheless, a person may determine whether his covered data is being processed, request a correction of inaccurate information, and request the deletion of “covered data.” The provisions on how controllers are required to respond to and process such requests are virtually identical to those established for the exercise of the rights to access, correct, delete, and port in the bill.

The relationship and responsibilities between controllers and processors track very closely to those imposed for normal data processing.

Controllers would need to make available privacy policies specific to processing covered data. The bill provides:

Controllers that process covered data for a covered purpose must provide consumers with a clear and conspicuous privacy notice that includes, at a minimum:

  • How a consumer may exercise the rights contained in section 203 of this act, including how a consumer may appeal a controller’s action with regard to the consumer’s request;
  • The categories of covered data processed by the controller;
  • The purposes for which the categories of covered data are processed;
  • The categories of covered data that the controller shares with third parties, if any; and
  • The categories of third parties, if any, with whom the controller shares covered data.

Controllers would also need to limit collection of covered data to what is reasonably necessary for processing and minimize collection. Moreover, controllers may not process covered data in ways that exceed what is reasonably necessary for covered purposes unless consent is obtained from each person. But then the bill further limits what processing of covered data is permissible by stating that controllers cannot “process covered data or deidentified data that was processed for a covered purpose for purposes of marketing, developing new products or services, or engaging in commercial product or market research.” Consequently, other processing purposes would be permissible provided consent has been obtained. And so, a covered entity might process covered data to improve the current means of collecting covered data for the covered purpose.

There is no right to sue entities for violating this section, but it appears controllers may bear more legal responsibility for the violations of their processors regarding covered data. Moreover, the enforcement language is virtually identical to the earlier provisions in the bill as to how the Attorney General may punish violators.

The bill’s third section would regulate the collection and processing of covered data for covered purposes by public sector entities, and for purposes of this section controllers are defined as “local government, state agency, or institutions of higher education which, alone or jointly with others, determines the purposes and means of the processing of covered data.” This section is virtually identical to the second section with the caveat that people would not be given the rights to determine if their covered data has been collected and processed for a covered purpose, to request a correction of covered data, or to ask that such data be deleted. Also, a person could not ask to opt out of collection.

Finally, two of the Congressional stakeholders on privacy and data security hail from Washington state, and consideration and possible passage of a state law may limit their latitude on a federal bill they could support. Senator Maria Cantwell (D-WA) and Representative Cathy McMorris Rodgers (R-WA), who are the ranking members of the Senate Commerce, Science, and Transportation Committee and House Energy and Commerce’s Consumer Protection and Commerce Subcommittee respectively, are involved in drafting their committees’ privacy bills, and a Washington state statute may affect their positions in much the same way the “California Consumer Privacy Act” (CCPA) (AB 375) has informed a number of California Members’ positions on privacy legislation, especially with respect to bills being seen as weaker than the CCPA.

CPRA From Another View

Let’s see how the CPRA would work from the view of a Californian.

Of course, I analyzed California Ballot Proposition 24, the “California Privacy Rights Act,” at some length in a recent issue, but I think taking on the proposed rewrite of the “California Consumer Privacy Act” (AB 375) from a different angle may provide value in understanding what this law would and would not do. In this piece, I want to provide a sense of what the California resident would be presented with under the new privacy statute.

As noted in my article the other day, as under the CCPA, the CPRA would still not allow people to deny businesses the right to collect and process their personal information unlike some of the bills pending in Congress. Californians could stop the sale or sharing of personal information, but not the collection and processing of personal data short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed subject to certain limitations and exceptions businesses are sure to read as broadly as possible.

So, businesses subject to the CPRA will have to inform people at the point of collection of “the categories of personal information to be collected and the purposes for which the categories of personal Information are collected or used and whether such Information is sold or shared.” Easy enough, as far as this goes. If I live in Sacramento and log into Facebook, there should be notice about the categories of personal information being collected (e.g., IP address, physical address, name, geolocation data, browsing history, etc.). As a citizen of California afforded privacy rights by the CPRA, I would not be able to tell Facebook not to collect and process these sorts of data. I would be able to ask that they delete these data and to stop their selling or sharing of these data, subject to significant limitations on these rights. Therefore, there is a baseline assumption in the CPRA, as in the CCPA, either that data collection and processing are a net good for California, its people, and its businesses, or a concession that it is too late to stop such practices, for a strong law stopping some of these practices would result in these companies, some of which are headquartered in the state, ceasing to offer their free services and/or leaving the state.

In the same notice described in the preceding paragraph, I would also be told whether Facebook sells or shares my personal information. I would also be alerted as to whether “sensitive personal information” is being collected and if these are being sold or shared.

Of course, with both categories of information collected from people in California, the use of the data must be compatible with the disclosed purpose for collection. And so, presumably, the notice provided to me would include the why of the data collection, but whatever the purpose, so long as it is disclosed to me, it would be legal, generally speaking, under the CPRA. The only limitation seems to be purposes incompatible with the context in which the personal information was collected.

My personal data could not be stored by a business indefinitely, for the law limits storage for each disclosed purpose to the time necessary to undertake and complete that purpose.

It must also be stressed that Californians will all but certainly be presented with notice in the physical world when they shop in larger grocery store chains, national or large regional retailers, airlines, car rental firms, etc. In the case of hotels, car rental firms, and airlines, just to name three categories of businesses likely to be covered by the CPRA and to be collecting data on people, the notice may be appended to the boilerplate contractual language no one I know reads. It may be written in the clearest language imaginable, but a person must be advised of what data are being collected, the purpose of the collection and use, and whether the data are being sold and shared. For the privacy purist, the only way not to have one’s information collected would be to not engage with these companies. Likewise, walking into a retail establishment large enough to qualify as a business under the CPRA may entail seeing notice posted somewhere in the store that personal information is being collected, possibly alongside information indicating customers are under camera surveillance.

I would be able to immediately ask the business to delete my personal information, but it would be allowed to keep this on hand during the period it is completing a transaction or providing goods or services. But there is language that may be interpreted broadly by a business to keep my personal information, such as an exception to conduct a product recall or to anticipate future transactions as part of our ongoing business relationship. I would expect this to be very broadly read in favor of keeping personal data. Nonetheless, if it is a service or product used frequently, say, Amazon, then I would need to go back after every use and request my personal information be deleted. But if I placed four Amazon orders a month, the platform could reasonably deny my request because it is occurring in the course of an ongoing business relationship. There are other possible grounds on which a business might not delete a person’s personal or sensitive personal information, such as ensuring the security and integrity of the service and product, with the caveat that keeping my personal information would have to be “reasonably necessary and proportionate for those purposes.” Would the business make this determination? Subject to guidance or regulations?

However, the exception to the right to delete that is nearly opaque is “[t]o enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the information.” It is not clear to me the sort of “internal uses” this encapsulates. Data processing so the business can better target the person? This provision is drafted so broadly the new privacy agency must explicate it so businesses and Californians understand what this entails. Also, keep in mind, if I lived in California, I would have to repeat these deletion requests for each and every business collecting information on me.

I would be able to correct my personal information with a business but only with “commercially reasonable efforts,” suggesting cases in which correction is difficult would allow businesses to decline my request. For anyone who has ever tried to correct one’s personal information with a company, the frustration attendant on such endeavors can be significant. A major American automaker switched two letters in my wife’s last name, and no matter how many times we asked that her name be spelled correctly, this massive corporation could not or would not make the change. This may end up as a right that is largely without effect.

I would be able to ask for and receive my personal information, after a fashion. For example, I would be able to ask for and obtain the exact personal information the business has collected itself but only the categories of information obtained through means other than direct collection (i.e., data brokers and other businesses). To make this section even more convoluted, I would also receive the categories of personal information the business has directly collected on me. Moreover, I could learn the commercial or business purposes for collection and processing and the third parties with whom my personal information is sold or shared. However, if a business includes all this and other information on its website as part of its privacy policy, it would only have to send me the specific pieces of personal information it has collected directly from me. Whatever the case, I would generally only be able to receive information from the previous 12 months.

Separately from the aforementioned rights, I could also learn to whom a business is selling, sharing, and disclosing my information. However, if we drew a Venn Diagram between this right and the previous one, the most significant right bestowed by this section of the CPRA would be that of learning “[t]he categories of personal information that the business disclosed about the consumer for a business purpose and the categories of persons to whom It was disclosed for a business purpose.”

The CPRA would provide me the right to opt out of a business selling or sharing my personal information, and businesses would need to alert people of this right. If I were between the ages of 13 and 16, I would need to opt in to the selling or sharing of my personal information. Moreover, for my children under the age of 13, I, or my wife, would need to opt in for their personal information to be sold or shared.

I would also be able to limit the use and disclosure of my sensitive personal information to an uncertain extent. The CPRA makes clear this is not an absolute right, and businesses would be able to use a number of exceptions to continue using this class of information. For example, a business would be able to do so “to ensure security and Integrity to the extent the use of the consumer’s personal Information is reasonably necessary and proportionate for these purposes.” Likewise, a business could use sensitive personal information for “[s]hort-term, transient use, including but not limited to non-personalized advertising shown as part of a consumer’s current Interaction with the business.” There are other exceptions, and the new California state agency established by the CPRA would be able to promulgate regulations to further define those situations in which use and disclosure may continue against my wishes.

Otherwise, a business would be unable to use or disclose my sensitive personal information once I elect to stop this practice. However, this right pertains only to the use of this type of information to infer my characteristics, subject to the drafting of regulations.

I would not be discriminated against for exercising any of the rights the CPRA grants me, with a significant catch on which I’ll say more in a moment. This right would stop businesses from denying me goods or services, charging me a different price, or providing a different level of service or quality. And yet, a business would be able to charge me a different price or rate or give me a lesser level of service or product “if that difference is reasonably related to the value provided to the business by the consumer’s data.” This strikes me as a situation where the exception will eat the rule. Any business with any level of resources will claim that the value of my personal information is vital to providing me a service or product for free, and that if I deny it the use of this information, the value proposition has changed: either I must be charged to have the same level of service, or, absent payment, the business could only provide me with a lesser level of service or product. It is my guess that this right would be functionally null.

Moreover, this section is tied to loyalty and reward programs, which would also be exempt from this right so long as the case could be made that the value of my data justifies the difference in price or service. It is not hard to see the incentive structure here being such that businesses would likely establish new programs in order to pressure people in California not to exercise rights in the CPRA and to continue using their personal information in the current fashion. Of course, there is this provision, “[a] business shall not use financial incentive practices that are unjust, unreasonable, coercive, or usurious in nature,” but where exactly is the line between a business offering a rewards or loyalty program purportedly tied to the value of the data it collects and processes and these sorts of practices? It may be very hard to divine and will likely require a case-by-case process to delineate the legal from the illegal.

I would generally have two ways to exercise the rights I would be given under the CPRA unless the business only operates online, in which case it would be by email. The business would have 45 days after verifying my request for my personal information or to correct or delete it to comply, and this would need to be free of charge. However, this 45-day period may be extended once so long as the business informs me. It would seem 90 days would become the de facto norm. A business may also be able to demand “authentication of the consumer that is reasonable in light of the nature of the personal information requested.” The intent is obviously for a business to be sure someone is not maliciously or mischievously trying to change someone else’s information in what may come to be an extension of doxing or other vexatious practices seen elsewhere in the online realm. However, this may also be read liberally by some businesses as a means of throwing up another barrier in the way of my exercise of these rights.

I would be wise as a California resident to understand some of the global limitations of the rights bestowed by the CPRA. For example, all bets are off with respect to a business’ compliance “with federal, state, or local laws or…with a court order or subpoena to provide Information.” A business would be within its legal rights to comply, my wishes be damned. Moreover, law enforcement agencies can direct businesses not to delete my personal information for up to 90 days while a proper court order is obtained. Moreover, likely as an incentive for businesses, deidentified personal information is not subject to the obligations placed on businesses, and the same is true of “aggregate consumer information.” Obviously, a business would ideally use the safe harbor of deidentification where possible in order to render stolen data less desirable and valuable to thieves. Of course, at least one study has shown that deidentified data can be used to identify and link to people fairly easily, and another stated “numerous supposedly anonymous datasets have recently been released and re-identified.” This may be less safe a harbor for my personal information than the drafters of the CPRA intend.

It also bears mention that some publicly available information shall not be considered personal information under the CPRA. The catch here is that not all of my personal information in the public sphere meets the terms of this exception, for new language in the CPRA to modify the CCPA definition makes clear the information has to be “lawfully obtained,” “truthful” and “a matter of public concern.” Additionally, businesses would be barred from using personal information made widely available that is probably not being disclosed lawfully (e.g. someone plastering my home address on social media.) And yet, the California Department of Motor Vehicles (DMV) has been selling the personal information of people to private investigators, bail bondsmen, and others, a legally sanctioned activity, but allowing this practice to funnel the personal information of Californians to businesses and data brokers would arguably not be a matter of public concern. Therefore, this exception may be written tightly enough to anticipate and forestall likely abuses.

Like the CCPA, the CPRA does not bear on the use of my personal information in areas of life already regulated, often by the federal government, such as health information or credit information. Any rights I would have with respect to these realms would remain unaffected by the CPRA.

I would receive protection in the event of specified types of data breaches; namely, if my personal information were neither encrypted nor redacted, the CPRA’s breach provisions would come into play. Under the CCPA, if my personal information were not encrypted but was redacted and stolen, a breach would occur, and the same was true if it were not redacted but encrypted. So, this seems to be a weakening of the trigger that would allow me to sue if my personal information were subject to unauthorized exfiltration or access, theft, or disclosure. Additionally, if my “email address in combination with a password or security question and answer that would permit access to the account” were exposed or stolen, I could also sue. Moreover, any unauthorized stealing, accessing, disclosing, or exposure of my personal information must be due to a “business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information” before a breach could occur.
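
The difference between the two triggers is essentially boolean. Here is a minimal sketch in Python of the comparison as I have described it above; it encodes my reading, not any official test, and the function names are mine.

```python
def ccpa_trigger(encrypted: bool, redacted: bool, unreasonable_security: bool) -> bool:
    # Reading of the CCPA trigger described above: lacking EITHER
    # protection could support a suit.
    return ((not encrypted) or (not redacted)) and unreasonable_security

def cpra_trigger(encrypted: bool, redacted: bool, unreasonable_security: bool) -> bool:
    # CPRA trigger as described above: the data must be NEITHER encrypted
    # NOR redacted, and the exposure must stem from a failure to maintain
    # reasonable security.
    return (not encrypted) and (not redacted) and unreasonable_security

# Data that were redacted but not encrypted: actionable under the CCPA
# reading, but not under the CPRA.
print(ccpa_trigger(encrypted=False, redacted=True, unreasonable_security=True))  # True
print(cpra_trigger(encrypted=False, redacted=True, unreasonable_security=True))  # False
```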

Once a breach has occurred, however, I can sue for between $100 and $750 per incident plus actual damages, but only after giving a business 30 days to cure the breach if possible. If there are no tangible monetary damages, as is often the case in breaches, then I would be left to weigh suing to recover the statutory damages. But if it is one breach or a handful of breaches, it may not be worth the time and effort it takes to litigate, meaning these are likely the circumstances in which class actions will thrive.
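
The arithmetic behind that prediction is simple enough to show. This sketch assumes one statutory award per consumer per incident, which is my simplification; the $100 and $750 figures come from the bill.

```python
STATUTORY_MIN, STATUTORY_MAX = 100, 750  # per-incident range under the CPRA

def statutory_exposure(consumers: int, incidents_each: int = 1):
    """Aggregate statutory exposure for a class, assuming one award per
    consumer per incident (a simplifying assumption)."""
    n = consumers * incidents_each
    return n * STATUTORY_MIN, n * STATUTORY_MAX

# One incident for one person is rarely worth litigating alone...
print(statutory_exposure(1))        # (100, 750)
# ...but a class of 100,000 changes the calculus entirely.
print(statutory_exposure(100_000))  # (10000000, 75000000)
```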

Alternatively, the California Privacy Protection Agency will be empowered to bring actions against businesses that violate the CPRA, but the bill is silent on whether I would be made whole if I did not sue and the agency recovers money from the business.

Finally, there are provisions that contemplate technological means for people to make their preferences under the CPRA known to many businesses at the same time or with minimal repetitive effort. I suppose this envisions someone designing an app one could use to do the hard work for you. This seems like language designed to seed the ground in California for developers to create and offer CPRA-compliant products. Likewise, one could designate a person to undertake this work for you, which also suggests a market opportunity for an entity that can make the economics of such a business model work. In any event, I would likely be charged for using a service like either of these, leading one to the uncomfortable conclusion that these provisions may drive a greater bifurcation in the world of technology between the haves and have-nots.

CPRA Analyzed

The CCPA follow-on bill on the ballot in California would significantly change how the state regulates privacy, setting the de facto standard for the U.S. in the absence of federal legislation.

With the “California Privacy Rights Act” (CPRA) having been successfully added to the ballot on which Californians will vote in November, it is worth taking a deeper look at the bill. This bill would replace the “California Consumer Privacy Act” (CCPA) (AB 375), which just came into full effect with the publishing of final regulations on 14 August. Nonetheless, as the Office of the Attorney General was drafting regulations, the organization that pushed for passage of the CCPA, Californians for Consumer Privacy (CCP), completed the drafting of a follow-on bill. CCP Chair and Founder Alistair Mactaggart explained his reasoning for a second ballot initiative: “[f]irst, some of the world’s largest companies have actively and explicitly prioritized weakening the CCPA…[and] [s]econd, technological tools have evolved in ways that exploit a consumer’s data with potentially dangerous consequences.” Moreover, if polling released earlier this month by CCP is close to being accurate, then an overwhelming majority of Californians support enactment of the CPRA, meaning a significantly new privacy scheme will come into effect in the next few years in California.

Of course, it may be fair to assert this bill looks to solve a number of problems created by the rush in June 2018 to draft a bill all parties could accept in order to get the CCPA removed from the ballot. Consequently, the CCPA package that was enacted was sloppily drafted in some places with inconsistent provisions that necessitated two rounds of legislation to fix or clarify the CCPA.

As under the CCPA, the CPRA would still not allow people to deny businesses the right to collect and process their personal information unlike some of the bills pending in Congress. Californians could stop the sale or sharing of personal information, but not the collection and processing of personal data short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed subject to certain limitations and exceptions businesses are sure to read as broadly as possible. Additionally, a new agency would be created to police and enforce privacy rights, but legitimate questions may be posed about its level of resources. Nonetheless, the new statute would come into effect on 1 January 2023, leaving the CCPA as the law of California in the short term, and then requiring businesses and people to adjust to the new regime.

In the findings section, CCP explicitly notes the bills introduced to weaken or roll back the CCPA as part of the reason the CPRA should be enacted. Changes to the California Code made by ballot initiative are much harder to change or modify than statutes enacted through the legislative route. Notably, the CPRA would limit future amendments to only those in furtherance of the act, which would rule out any attempts to weaken or dilute the new regime. Moreover, the bill looks at privacy rights through the prism of an imbalance in information and is founded on the notion that if people in California have more information and real choice in how and when their personal data are shared, processed, and collected, then the most egregious data practices would stop. Of course, this conceptual framework differs from the one used by others that views data collection and processing as being more like pollution or air quality, situations any one individual is going to have limited impact over, thus necessitating collective government action to address deleterious effects. In the view of the CCP, Californians will be on better footing to negotiate their privacy with companies like Facebook and Google. Notably, the CCP asserted:

  • In the same way that Ingredient labels on foods help consumers shop more effectively, disclosure around data management practices will help consumers become more informed counterparties In the data economy, and promote competition, Additionally, if a consumer can tell a business not to sell his or her data, then that consumer will not have to scour a privacy policy to see whether the business Is, In fact, selling that data, and the resulting savings in time Is worth, in the aggregate, a tremendous amount of money.
  • Consumers should have the information and tools necessary to limit the use of their information to non-invasive, pro-privacy advertising, where their personal information is not sold to or shared with hundreds of businesses they’ve never heard of, If they choose to do so. Absent these tools, it will be virtually Impossible for consumers to fully understand these contracts they are essentially entering into when they interact with various businesses.

The CPRA would change the notification requirements for businesses interested in collecting, processing, and sharing personal data in Section 1798.100 of the Civil Code (i.e., language added by the CCPA and some of the follow-on bills the legislature passed). This requirement would be binding on the companies that control collection and not just the entities doing the actual collecting, which suggests concern that the ultimate user of personal data would otherwise be shielded from revealing its identity to people. Worse still, the CCPA language may create an incentive to use front companies or third parties to collect personal data. Moreover, the CPRA makes clear that if a company is using another company to collect personal data it will ultimately control, it may meet its notice requirements by posting prominently on its website all the enumerated information. This may be a loophole large companies use to avoid informing people who is controlling data collection.

New language tightens the information people must be provided as part of this notice, namely the purposes for which personal data are collected or used and whether the entity is proposing to sell or share this information. Moreover, the CPRA would mandate that the notice also include any additional purposes for which personal data are collected and used “that are incompatible with the disclosed purpose for which the personal information was collected.”

The changes to Section 1798.100 and the underlying CCPA language that would remain will apply to a new category of information created by the CPRA: “sensitive personal information.” This term is defined to mean:

  • personal Information that reveals
    • a consumer’s social security, driver’s license, state Identification card, or passport number;
    • a consumer’s account log-In, financial account, debit card, or credit card number in combination with any required security or access code, password, or credentials allowing access to an account;
    • a consumer’s precise geolocation;
    • a consumer’s racial or ethnic origin, religious or philosophical beliefs, or union membership;
    • the contents of a consumer’s mail, email and text messages, unless the business Is the Intended recipient of the communication;
    • a consumer’s genetic data; and
  • the processing of biometric Information for the purpose of uniquely identifying a consumer;
  • personal Information collected and analyzed concerning a consumer’s health; or
  • personal Information collected and analyzed concerning a consumer’s sex life or sexual orientation.

However, should any of these data be “publicly available” as defined by the CPRA, then they are no longer subject to the heightened requirements normally due this new class of information. For example, the new notice people must be given will list the categories of sensitive personal information collected and the purposes for which such information is collected or used. Additionally, people must be told whether this subset of personal data will be shared or sold.

The CPRA would limit collection, use, processing, and sharing of personal data to purposes “reasonably necessary and proportionate” to achieve the purpose of the information collection. Quite clearly, much will hang on what turns out to be “reasonable,” and this may be construed by the new data protection agency in regulation and ultimately by courts in litigation. However, this provision also allows the “collection, use, retention, and sharing of a consumer’s personal information…for another disclosed purpose that Is compatible with the context in which the personal information was collected.” This will also need fleshing out either by regulation or litigation, or both. This seems to allow companies to specify another purpose for their data activities so long as it is compatible with the context of collection. And yet, it is not clear what would determine compatibility. If a person is agreeing to a grocery store chain’s data activities, might the company legally try to collect information regarding a person’s health?

This section also requires businesses to enter into contracts with third parties, service providers, and contractors to ensure they follow the CPRA and to specify that information sold or shared by the business is for limited and specific purposes.

Businesses are obligated to use “reasonable security procedures and practices appropriate to the nature of the personal information to protect the personal Information from unauthorized or illegal access, destruction, use, modification, or disclosure.” This is a familiar construct that contemplates a sliding scale of security measures, with lesser steps being all right for less valuable information, say deidentified data, but with higher standards being needed for more sensitive personal data. The challenge in such a regime is that reasonable minds might disagree about reasonable measures, and it may fall to the caselaw construing the CPRA to point the way to how businesses should secure information.

Section 1798.105 spells out a person’s right to delete personal information and expands the obligation of businesses to direct their service providers and contractors to delete information upon receipt of a valid request. Third parties would be notified of deletion requests and expected to also delete unless doing so would be impossible or “involves disproportionate effort,” a term likely to be given as expansive a reading as possible by many businesses. There still are numerous exceptions for deletion requests, many of which will also likely be read expansively by businesses reluctant to honor deletion requests, including:

  • Complete the transaction for which the personal information was collected, fulfill the terms of a written warranty or product recall conducted In accordance with federal law, provide a good or service requested by the consumer, or reasonably anticipated by the consumer within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract between the business and the consumer.
  • Help to ensure security and integrity to the extent the use of the consumer’s personal information Is reasonably necessary and proportionate for those purposes.
  • Debug to identify and repair errors that Impair existing Intended functionality.
  • Exercise free speech, ensure the right of another consumer to exercise that consumer’s right of free speech, or exercise another right provided for by law.
  • To enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the Information.
  • Comply with a legal obligation.

However, the CPRA eliminated the exception that could be used to deny deletion requests in the CCPA that allowed a business to “use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.”

The CPRA creates a new section, 1798.106, titled “Consumers’ Right to Correct Inaccurate Personal Information,” that requires businesses to correct inaccurate personal information in light of the type of information and why it is being processed. Businesses must disclose that people have this right if a person submits a verifiable request to correct inaccurate personal information. However, companies are only required to make commercially reasonable efforts in correcting inaccurate personal information. It appears that a rulemaking will be necessary to flesh out what would constitute commercially reasonable efforts.

Section 1798.110 is amended by the CPRA but more or less keeps the CCPA’s right for people to know about and access the information being collected about them, with some significant changes. For example, there is an expansion of one of the categories businesses must provide to people who utilize this right: currently under the CCPA, requesters must be given the commercial or business purpose for which personal information is collected or sold. Under the CPRA, businesses would also need to inform people of the other entities with whom they share personal information, thus closing a significant loophole, for companies like Facebook share people’s information but do not sell it. Under the CCPA, a Facebook would not need to divulge to a person with which companies it is sharing that person’s information.

Also, the CPRA would deem in compliance those companies that post on their websites the categories of personal information, the sources of this information, its business or commercial purposes, and the categories of third parties to whom personal information is disclosed. It seems likely many companies will go this route, meaning the only personal information they would need to furnish upon a request would be the specific pieces of information on the person making the request. And yet, the CPRA strikes the CCPA language making clear that businesses need not retain personal information collected in one-time transactions or reidentify or link to these data.

Section 1798.115 of the CCPA would also be changed by expanding the universe of data a person may request and receive regarding how their personal information is shared and sold. The CPRA keeps the basic structure of the CCPA in this regard and merely expands it to include shared as well as sold for the following:

  • The categories of personal information sold or shared and the categories of third parties to whom such information was sold or shared
  • The categories of personal information disclosed about a person for business purposes and the categories of persons to whom such information was disclosed

Third parties would be barred from selling or sharing personal information that has been sold to or shared with them unless they provide explicit notice and people have the opportunity to opt-out.

The CPRA similarly changes Section 1798.120 (aka the right to opt out of the sharing or selling of one’s personal information) while keeping the CCPA’s right to opt out of sales or sharing at any time. Likewise, teenagers between 13 and 16 would need to affirmatively agree to selling or sharing, and for any child under 13, his or her parents must affirmatively agree.

A new Section 1798.121, a “Consumers’ Right to Limit Use and Disclosure of Sensitive Personal Information,” would allow people to stop businesses from collecting and using sensitive personal information in some cases. As a general matter, if a person limits collection or use of this class of information, then the business would be limited to “that use which is necessary to perform the services or provide the goods reasonably expected by an average consumer who requests such goods or services” subject to some of the exceptions embodied in the definition of “business purpose.” Businesses may, however, provide notice of additional uses of sensitive personal information, which a person must then further limit if the new uses are objectionable or unwanted.

The CPRA changes the provision barring retaliation against people who opt out of certain practices or use their CCPA rights. The general prohibition on punishing people who use their rights under the bill with different prices, services, or products would be maintained. The CPRA would expand this protection to employees, contractors, and applicants for employment. However, the CPRA keeps the CCPA exemption allowing so-called loyalty programs to offer different prices or services, but only if the difference is reasonably related to the value the person’s data provides to the business. The CCPA requires this linkage to be directly related, so the change may be seen as a subtle weakening of the connection between the value of a person’s data and the rewards or prices offered through membership in a loyalty program. This will almost certainly result in businesses in California using current programs or establishing new programs to press people to share personal information in exchange for better prices or services. After all, all they would need to do is show the value of the person’s data is reasonably related to the advantages of membership. Like other similar provisions in the bill, regulation and litigation will define the parameters of what is reasonably related. Like the CCPA, the new bill would require people to opt into such programs, and should a person refuse, the business would need to wait 12 months before making the offer again.

Many of the previously discussed changes to the CCPA necessitate alterations to a key section of the statute, Section 1798.130, that details notice, disclosure, correction, and deletion requests. Businesses with physical locations must still offer two means for people to make such requests, but the CPRA would allow online businesses to merely make available an email address. Anyone who has ever tried to resolve disputes and problems via email knows this process can often be frustrating, yet companies like Facebook or Google could offer nothing more.

The new 1798.130 also makes clear the 45-day window for businesses to deliver required information to people after receiving a verified request also includes making requested corrections and deletions. A potential hurdle is established for requests, however. In light of the type of information in question, a business may seek to authenticate a person’s identity before granting the request but may not force a person to create an account with the business if they do not have one. To be fair, this provision may be aimed at the mischief that could occur if a person impersonates someone else and asks businesses to delete that person’s personal information. There are likely other such situations in which a malicious person could wreak havoc.

In any event, the disclosure of information would need to cover the previous 12 months under the CPRA, and after new regulations are put in place, people would be able to ask for and receive information stretching back before the preceding 12 months. But such a request could be denied on the grounds of impossibility or disproportionate effort. Presumably the new regulations would address when such denials are justified. Another limitation on this right is that businesses would not need to provide information collected before 1 January 2022.

If a person submits a request to learn what type of personal information has been collected, sold, or shared to a business’ contractor or service provider, those entities have no obligation to respond. And yet, they must assist a business that receives such requests.

The CPRA stipulates that businesses are required to provide the following types of information if a person asks for the data the entity has:

the categories of sources from which the consumer’s personal information was collected; the business or commercial purpose for collecting, or selling or sharing the consumer’s personal information; and the categories of third parties to whom the business discloses the consumer’s personal information.

A business is also obligated to provide the “specific pieces of personal information obtained from the consumer in a format that is easily understandable to the average consumer, and to the extent technically feasible, in a structured, commonly used, machine-readable format, which also may be transmitted to another entity at the consumer’s request without hindrance.”
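The CPRA does not prescribe a particular format for this portable copy. As a purely illustrative sketch, assuming JSON as the “structured, commonly used, machine-readable format” and with every field name and category invented for the example, such an export might look like this:

```python
import json

# Hypothetical export of "specific pieces of personal information."
# All field names and categories below are assumptions for illustration;
# the CPRA itself names no schema.
export = {
    "request_id": "2020-000123",
    "consumer": {"email": "consumer@example.com"},
    "personal_information": [
        {
            "category": "identifiers",
            "items": {"name": "Jane Doe", "ip_address": "203.0.113.7"},
        },
        {
            "category": "commercial_information",
            "items": {"purchases": ["order-1001", "order-1002"]},
        },
    ],
}

# JSON is only one qualifying choice; CSV or XML could satisfy the text too.
print(json.dumps(export, indent=2))
```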

Regarding the type of information a business must give to people who ask to know what, if any, information was sold or shared about them, a business must furnish two lists:

  • A list of the categories of personal information it has sold or shared about consumers in the preceding 12 months by reference to the enumerated category or categories in [the revised definition of personal information and new definition of sensitive personal information] that most closely describe the personal information sold or shared, or if the business has not sold or shared consumers’ personal information in the preceding 12 months, the business shall prominently disclose that fact in its privacy policy.
  • A list of the categories of personal information it has disclosed about consumers for a business purpose in the preceding 12 months by reference to the enumerated category in subdivision (c) that most closely describes the personal information disclosed, or if the business has not disclosed consumers’ personal information for a business purpose in the preceding 12 months, the business shall disclose that fact.

The categories of personal information a business must provide are “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following if it identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household:

(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.

(B) Any personal information described in subdivision (e) of Section 1798.80.

(C) Characteristics of protected classifications under California or federal law.

(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.

(E) Biometric information.

(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet website, application, or advertisement.

(G) Geolocation data.

(H) Audio, electronic, visual, thermal, olfactory, or similar information.

(I) Professional or employment-related information.

(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).

(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

The CPRA modifies the CCPA standards on links on a business’ website allowing people to opt out of the sale or sharing of personal information. It also adds a requirement that such a link be placed on a website to allow people to opt out of the use or disclosure of sensitive personal information. A business would now be allowed to have one link for both if it wants, and it would also be allowed to remind people of the advantages of being a member of the business’ loyalty program and any charges or fees associated with not joining. This provision would seem to allow some businesses, at least those who can make the case of a reasonable relation between the discounts provided and the value of personal information, to pose a possibly uncomfortable dilemma to people: your privacy or your money. Put another way, the CPRA may well result in a price being put on one’s privacy, with those of means or those intensely dedicated to privacy being able or willing to limit these practices while everyone else acquiesces in the face of higher prices or worse services or products. Additionally, companies would not need to have links on their website if they allow for opting out through their platform, technology, or app.

If a person opts out, companies would have to wait 12 months before asking again for permission to sell or share their personal information or to use or disclose their sensitive personal information. But one should bear in mind that even if a person opts out of the sale or sharing of personal information, a business may still collect or process it subject to other requirements in the CPRA. This right is limited to the dissemination of personal information through sales or a sharing arrangement.

The CPRA revises some key definitions and introduces new definitions, the most significant of which was discussed earlier: sensitive personal information. Another key change is to the criteria for businesses subject to the CPRA. Each of the three thresholds for becoming a regulated business is changed:

  • First, language is changed to make clear a company must have earned $25 million in gross revenues in the preceding year to qualify on the basis of income.
  • Second, the threshold for the number of people is changed. It is raised from 50,000 to 100,000, and instead of counting people and devices, devices are stricken and households may now be counted. A household will likely include multiple devices, so counting by household generally allows for a higher threshold. Also, the counting is limited to the activities of buying, selling, or sharing personal information, so mere collection and processing is not counted; if a business does not partake in any of the three enumerated activities, it would not qualify under this prong even if it collects and processes the personal information of, say, 1 million Californians.
  • Third, the threshold for businesses deriving 50% or more of their income from selling consumers’ personal information is broadened to include sharing, meaning more entities might qualify under this prong.

Also of note, the definition of business purpose was altered, and new definitions are provided for consent, contractor, cross-context behavioral advertising, dark pattern, non-personalized advertising, and others.

The section on exemptions to the bars in the CCPA is rewritten and expanded by the CPRA. Businesses may disregard the obligations placed on them by this privacy statute under a number of circumstances. For example, added circumstances include complying with a subpoena or court order or responding to direction by law enforcement agencies. Moreover, government agencies would be able to make emergency requests for personal information to businesses if the agency acts in good faith, asserts a legal right to the information, and follows up with a court order within three days. There is also language adding contractors to the CCPA’s provisions on the liability of a business for violations by its service providers, which requires actual knowledge of such violations.

The CPRA keeps the CCPA’s grant of authority allowing people to sue for violations but quietly tightens the circumstances under which this may happen to those in which one’s personal information is neither encrypted nor redacted. Consequently, if a business uses either method of securing information, it cannot be sued.

As noted, the bill would establish a California Privacy Protection Agency that would take over enforcement of the revised CCPA from the Office of the Attorney General. It would consist of a five-member board including a chair. At the earlier of 1 July 2021 or six months after the new agency informs the Attorney General it is ready to begin drafting rules, it shall have rulemaking authority. However, before this date, the Attorney General may have the authority or opportunity to begin some of the CPRA rulemakings, an interregnum that may serve to complicate implementation. Nonetheless, among other powers, the new agency would be able to investigate and punish violations with fines of up to $2,500 per violation, except for intentional violations and those involving the personal information of minor children, which could be fined at a rate of $7,500 per violation. Like the Federal Trade Commission, the California Privacy Protection Agency would be able to bring administrative actions inside the agency or go to court to sue. However, this new entity would be provided only $5 million during its first year and $10 million a year thereafter, which raises the question of whether the new agency will be able to police privacy in California in a muscular way.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.


Dueling COVID-19 Privacy Bills Released

Democratic stakeholders answer a Republican proposal on how to regulate privacy issues raised by COVID-19 contact tracing. The proposals have little chance of enactment and are mostly about positioning.  

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 and 2020 here.

Late last week, a group of Democratic stakeholders on privacy and data security issues released the “Public Health Emergency Privacy Act” (S.3749), a bill that serves as a counterpoint to the “COVID-19 Consumer Data Protection Act” (S.3663), legislation introduced a few weeks ago as a solution to the privacy issues raised by COVID-19 contact tracing. However, the Democratic bill contains a number of provisions that many Republicans consider non-starters, such as a private right of action for individuals and no preemption of state laws. It is not likely this bill will advance in the Senate even though it may possibly be moved in the House. S. 3749 was introduced by Senators Richard Blumenthal (D-CT) and Mark Warner (D-VA) and Representatives Anna Eshoo (D-CA), Jan Schakowsky (D-IL), and Suzan DelBene (D-WA).

In a way, the “Public Health Emergency Privacy Act” makes the case that “Health Insurance Portability and Accountability Act” (HIPAA)/“Health Information Technology for Economic and Clinical Health Act” (HITECH Act) regulations are inadequate to protect the privacy and data of people in the United States because these regulations apply only to healthcare providers, their business associates, and other named entities in the healthcare system. Entities collecting healthcare information, even very sensitive information, are not subject to HIPAA/HITECH regulations and would likely be regulated, to the extent they are at all, by the Federal Trade Commission (FTC) or state agencies and attorneys general.

The “Public Health Emergency Privacy Act” would cover virtually all entities, including government agencies, except for public health authorities (e.g. a state department of health or the Centers for Disease Control and Prevention), healthcare providers, service providers, and people acting in a household capacity. This remarkable breadth likely reflects plans announced by governments around the world to eschew Google and Apple’s contact tracing app and develop their own. It would also touch some efforts aside and apart from contact tracing apps. Moreover, this is the first bill introduced recently that proposes to treat public and private entities the same way with respect to how and when they may collect personal data.

The types of data protected under the act are “emergency health data,” defined as “data linked or reasonably linkable to an individual or device, including data inferred or derived about the individual or device from other collected data provided such data is still linked or reasonably linkable to the individual or device, that concerns the public COVID–19 health emergency.” The bill then lists a sweeping and comprehensive set of examples of emergency health data. This term includes “information that reveals the past, present, or future physical or behavioral health or condition of, or provision of healthcare to, an individual” related to testing for COVID-19 and related genetic and biometric information. Geolocation and proximity information would also be covered by this definition. Finally, emergency health data encompasses “any other data collected from a personal device,” which seems to cover all data collection from a person’s phone, the primary means of contact tracing proposed thus far. However, it would appear that data on people collected at the household or higher level may not be subject to this definition and hence much of the bill’s protections.

The authority provided to covered entities is limited. First, collection, use, and disclosure of emergency health data are restricted to good faith public health purposes. And yet, this term is not defined in the act, and the FTC may need to define it during the expedited rulemaking the bill requires. However, it seems a fairly safe assumption the agency and courts would not construe using emergency health data for advertising as a good faith public health purpose. Next, covered entities must allow people to correct inaccurate information, and the entity itself has the duty to take reasonable efforts on its own to correct this information. Covered entities must implement reasonable safeguards to prevent discrimination on the basis of these data, which is a new obligation placed on covered entities in a privacy or data security bill. This provision may have been included to bar government agencies from collecting and using covered data for discriminatory purposes. However, critics of more expansive bills may see this as the camel’s nose under the tent for future privacy and data security bills. Finally, the only government agencies that can legally be provided emergency health data are public health authorities and then only “for good faith public health purposes and in direct response to exigent circumstances.” Again, the limits of good faith public health purposes remain unclear, and such disclosure can only occur in exigent circumstances, which likely means when a person has contracted or been exposed to COVID-19.

Covered entities and service providers must implement appropriate measures to protect the security and confidentiality of emergency health data, but the third member of the usual triumvirate, availability, is not identified as requiring protection.

The “Public Health Emergency Privacy Act” outright bars a number of uses for emergency health data:

  • Commercial advertising, recommendations for e-commerce, or machine learning for these purposes
  • Any discriminatory practices related to employment, finance, credit, insurance, housing, or education opportunities; and
  • Discriminating with respect to goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation

It bears noting that covered data cannot be collected or disclosed for these purposes either.

The act contains very strong consent language. Covered entities may not collect, use, or disclose emergency health data unless a person has provided express affirmative consent, which requires a knowing, informed choice that cannot be obtained through deceptive practices or inferred from a person’s inaction. However, there are significant exceptions to this consent requirement that would allow covered entities to collect, use, or disclose these data, including:

  • protecting against malicious, deceptive, fraudulent, or illegal activity;
  • detecting, responding to, or preventing information security incidents or threats; or
  • when the covered organization is compelled to do so by a legal obligation.

But these purposes are valid only when necessary and solely for the fulfillment of the named purpose.

In a related vein, covered entities must allow people to revoke their consent, and revocation must take effect as soon as practicable, but no later than 15 days after the request. Moreover, the person’s emergency health data must be destroyed or rendered not linkable within 30 days of the revocation of consent.

Covered entities must provide clear and conspicuous notice before or at the point of data collection that explains how and why the emergency health data is collected, used, and disclosed. Moreover, it must also disclose the categories of recipients to whom a covered entity discloses covered data. This notice must also disclose the covered entity’s data retention policies and data security policies and practices, but only with respect to emergency health data. The notice must also inform consumers of how they may exercise rights under the act and how to file a complaint with the FTC.

There would also be a public reporting requirement for those covered entities collecting, using, or disclosing the emergency health data of 100,000 or more people. Every three months, these entities would need to report aggregate figures on the number of people whose data was collected, used, and disclosed. These reports must also detail the categories of emergency health data collected, used, and disclosed and the categories of third parties with whom such data are shared.

The bill requires covered entities to destroy or make not linkable emergency health data 60 days after the Secretary of Health and Human Services declares the end of the COVID-19 public health emergency, or a state does so, or 60 days after collection.
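One plausible reading of this schedule is that the destruction deadline is the earliest of the applicable 60-day triggers. A minimal sketch of that reading, with the function name and all dates assumed for illustration:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=60)

def destruction_deadline(collected, federal_end=None, state_end=None):
    """Earliest date by which emergency health data must be destroyed or
    rendered not linkable, under one reading of the bill's 60-day triggers."""
    candidates = [collected + RETENTION]
    for emergency_end in (federal_end, state_end):
        if emergency_end is not None:
            candidates.append(emergency_end + RETENTION)
    return min(candidates)

# Example: data collected 1 June 2020; federal emergency declared over 1 September.
print(destruction_deadline(date(2020, 6, 1), federal_end=date(2020, 9, 1)))
# -> 2020-07-31, because 60 days after collection comes first
```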

The FTC would be given the daunting task of beginning a rulemaking 7 days after enactment “to ensure a covered organization that has collected, used, or disclosed emergency health data before the date of enactment of this Act is in compliance with this Act, to the degree practicable” that must be completed within 45 days. There is also a provision requiring the Department of Health and Human Services (HHS) to issue guidance so that HIPAA/HITECH Act regulated entities do not need to meet duplicative requirements, and, moreover, the bill exempts these entities when operating under the HIPAA/HITECH Act regulations.

The FTC, state attorneys general, and individuals would be able to enforce this new act through a variety of means. The FTC would treat violations as if they were violations of a regulation barring a deceptive or unfair practice, meaning the agency could levy fines in the first instance of more than $43,000 per violation. The FTC could seek a range of other relief, including injunctions, restitution, disgorgement, and remediation. However, the FTC could go to federal court without having to consult with the Department of Justice, which is a departure from current law and almost all the privacy bills introduced in this Congress. The FTC’s jurisdiction would be broadened to include common carriers and non-profits for purposes of this bill, too. The FTC would receive normal notice and comment rulemaking authority that is very broad, as the agency would be free to promulgate any regulations it deems necessary to effectuate this act.

State attorneys general would be able to seek all the same relief the FTC can so long as the FTC is not already bringing an action. Moreover, any other state officials empowered by state statute to bring similar actions may do so for violations of this act.

People would be allowed to sue in either federal or state court to vindicate violations. The bill states that any violation is to be considered an injury in fact to forestall any court from finding that a violation does not injure the person, meaning her suit could not proceed. The act would allow people to recover between $100 and $1,000 for any negligent violation and between $500 and $5,000 for any reckless, willful, or intentional violation, with no cap on total damages. People may also ask for and receive reasonable attorney’s fees and costs and any equitable or declaratory relief a court sees fit to grant. Moreover, the bill would disallow all pre-lawsuit arbitration agreements or waivers of rights. Much of the right to sue granted to people by the “Public Health Emergency Privacy Act” will be opposed by many Republicans and industry stakeholders.
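Because per-person damages are capped but total damages are not, exposure scales linearly with the number of people affected. A back-of-the-envelope illustration, with the number of affected people assumed:

```python
# Statutory damages per person under the bill, by tier.
TIERS = {"negligent": (100, 1_000), "reckless or willful": (500, 5_000)}

affected = 1_000_000  # assumed number of affected people, for illustration

for tier, (low, high) in TIERS.items():
    print(f"{tier}: ${low * affected:,} to ${high * affected:,}")
# negligent: $100,000,000 to $1,000,000,000
# reckless or willful: $500,000,000 to $5,000,000,000
```

Even at the negligence tier, a violation touching a large user base implies nine- or ten-figure exposure, which helps explain the intensity of the expected opposition.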

The enforcement section raises a few questions given that entities covered by the bill include government agencies. Presumably, the FTC cannot enforce this act against government agencies for the very good reason that it does not have jurisdiction over them. However, does the private right of action waive the federal and state governments’ sovereign immunity? This is not clear and may need clarification if the bill is acted upon in either chamber of Congress.

This bill would not preempt state laws, which, if enacted, could subject covered entities to meeting more than one regime for collecting, using, and disclosing emergency health data.

Apart from contact tracing, the “Public Health Emergency Privacy Act” also bars the use of emergency health data to abridge a person’s right to vote and it requires HHS, the United States Commission on Civil Rights, and the FTC to “prepare and submit to Congress reports that examines the civil rights impact of the collection, use, and disclosure of health information in response to the COVID–19 public health emergency.”

Given the continued impasse over privacy legislation, it is little wonder that the bill unveiled a few weeks ago by four key Senate Republicans takes a broadly similar approach that differs in key aspects. Of course, there is no private right of action, and it expressly preempts state laws to the contrary.

Generally speaking, the structure of the “COVID–19 Consumer Data Protection Act of 2020” (S.3663) tracks with the bills that have been released thus far by the four sponsors: Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS) (See here for analysis of the “Consumer Data Privacy Act of 2019”) and Senators John Thune (R-SD), Jerry Moran (R-KS) (See here for analysis of “Consumer Data Privacy and Security Act of 2020” (S.3456)), and Marsha Blackburn (R-TN) (See here for analysis of the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116)). In short, people would be provided with notice about what information the app collects, how it is processed, and with whom and under what circumstances this information will be shared. Then a person would be free to make an informed choice about whether or not she wants to consent and allow the app or technology to operate on her smartphone. The FTC and state attorneys general would enforce the new protections.

The scope of the information and entities covered is narrower than the Democratic bill. “Covered data” is “precise geolocation data, proximity data, a persistent identifier, and personal health information” but not aggregated data, business contact information, de-identified data, employee screening data, and publicly available information. Those entities covered by the bill are those already subject to FTC jurisdiction along with common carriers and non-profits, but government agencies are not, again unlike the bill put forth by Democrats. Entities must abide by this bill to the extent they “collect[], process[], or transfer[] such covered data, or determine[] the means and purposes for the collection, processing, or transfer of covered data.”

Another key definition is how an “individual” is determined, for it excludes any person acting in her role as an employee and almost all work-related capacities, making clear employers will not need to comply with respect to those working for them.

“Personal health information” is “information relating to an individual that is

  • genetic information of the individual; or
  • information relating to the diagnosis or treatment of past, present, or future physical, mental health, or disability of the individual; and
  • identifies, or is reasonably linkable to, the individual.

And yet, this term excludes educational information subject to the “Family Educational Rights and Privacy Act of 1974” and information subject to regulations promulgated pursuant to the HIPAA/HITECH Acts.

The “COVID–19 Consumer Data Protection Act of 2020” bars the collection, processing, or transferring of covered data for a covered purpose unless prior notice is provided, a person has provided affirmative consent, and the covered entity has agreed not to collect, process, or transfer such covered data for any other purpose than those detailed in the notice. However, leaving aside the bill’s enumerated allowable purposes for which covered data may be collected with consent, the bill provides a number of exemptions from this general bar. For example, collection, processing, or transfers necessary for a covered entity to comply with another law are permissible, apparently in the absence of a person’s consent. Moreover, a covered entity need not obtain consent for operational or administrative tasks not disclosed in the notice provided to people.

The act does spell out “covered purposes” for which covered entities may collect, process, or transfer covered data with consent after notice has been given:

  • Collecting, processing, or transferring the covered data of an individual to track the spread, signs, or symptoms of COVID–19.
  • Collecting, processing, or transferring the covered data of an individual to measure compliance with social distancing guidelines or other requirements related to COVID–19 that are imposed on individuals under a Federal, State, or local government order.
  • Collecting, processing, or transferring the covered data of an individual to conduct contact tracing for COVID–19 cases.

Covered entities would be required to publish a privacy policy detailing for which of the above covered purposes a person’s covered data would be collected, processed, or transferred. This policy must also detail the categories of entities that receive covered data as well as the covered entity’s data retention and data security policies.

There would be reporting requirements that would affect more covered entities than the Democratic bill’s. Accordingly, any covered entity collecting, processing, or transferring covered data for one of the enumerated covered purposes must issue a public report 30 days after enactment and then every 60 days thereafter.

Among other provisions in the bill, people would be able to revoke consent, a request that covered entities must honor within 14 days. Covered entities must also ensure the covered data are accurate, but this requirement falls a bit short of people being granted the right to correct inaccurate data as they would, instead, be merely able to report inaccuracies. There is no recourse if a covered entity chooses not to heed these reports. Covered entities would need to “delete or de-identify all covered data collected, processed, or transferred for a [covered purpose] when it is no longer being used for such purpose and is no longer necessary to comply with a Federal, State, or local legal obligation, or the establishment, exercise, or defense of a legal claim.” Even though there is the commandment to delete or de-identify, the timing seems somewhat open-ended, as some covered entities could find legal obligations to meet in order to keep the data on hand.

Covered entities must also minimize covered data by limiting collection, processing, and transferring to “what is reasonably necessary, proportionate, and limited to carry out [a covered purpose.]” The FTC must draft and release guidelines “recommending best practices for covered entities to minimize the collection, processing, and transfer of covered data.” However, these guidelines would most likely be advisory in nature and would not carry the force of law or regulation, leaving covered entities free to disregard some of the document if they choose. Covered entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of such data,” a mandate broader than the duty imposed by the Democratic bill.

The FTC and state attorneys general would enforce the new regime. The FTC would be able to seek civil fines for first violations in the same way as the Democrats’ privacy bill. However, unlike the other bill, the Republican bill would nullify any Federal Communications Commission (FCC) jurisdiction to the extent it conflicted with the FTC’s enforcement of the new act. Presumably, this would address jurisdictional issues raised by placing common carriers under FTC jurisdiction when they are usually overseen by the FCC. Even though state laws are preempted, state attorneys general would be able to bring actions to enforce this act at the state level. And, as noted earlier, there is no private right of action.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

CCPA 2.0 Backers Submit Ballot Initiative for November Election

A new California ballot initiative has been submitted for approval that would revise the CCPA and impose new requirements starting in 2023, if enacted. Under California ballot initiative law, this new statute could not be amended in ways that weaken it.

The organization that forced action on the “California Consumer Privacy Act” (CCPA) (AB 375) by getting its proposed measure approved for California’s November 2018 ballot announced that it has a sufficient number of signatures to get its preferred revision of the CCPA on the ballot for this fall’s election. If this effort succeeds, and Californians vote for this measure, it would throw the state’s efforts to establish and enforce the new CCPA into doubt, as the new regime would commence in 2023 and there would likely again be a rulemaking process to implement the new statute. It is possible that should this initiative be placed on the November ballot, new life could be breathed into Congressional efforts to pass a national privacy and data protection bill.

The Californians for Consumer Privacy claimed in its press release that “it is submitting well over 900,000 signatures to qualify the “California Privacy Rights Act” (CPRA) for the November 2020 ballot.” The Californians for Consumer Privacy have been negotiating extensively with stakeholders on the CCPA’s follow-on bill and actually released a draft bill last fall. Nonetheless, even though some stakeholders were able to secure desired changes in the base text, others were not. This fact, along with the reality that it is next to impossible to weaken or dilute statutes added to the California Code through ballot initiative, suggests a serious campaign to defeat this ballot initiative.

In a summary, the Californians for Consumer Privacy claimed the CPRA would:

1) Make it almost impossible to weaken privacy in California in the future, absent a new initiative allowing such weakening. CPRA would give the California Legislature the power to amend the law via a simple majority, but any amendment would have to be “in furtherance of the purpose and intent” of CPRA, which is to enhance consumer privacy. This would protect privacy in California from a business onslaught to weaken it in Sacramento.

2) Establish a new category of sensitive personal information (SPI), and give consumers the power to restrict the use of it. SPI includes: SSN, DL, Passport, financial account info, precise geolocation, race, ethnicity, religion, union membership, personal communications, genetic data, biometric or health information, information about sex life or sexual orientation.

3) Allow consumers to prohibit businesses from tracking their precise geolocation for most purposes, including advertising, to a location within roughly 250 acres.

a. This would mean no more tracking consumers in rehab, a cancer clinic, at the gym (for how long), at a fast food restaurant (how often), sleeping in a separate part of the house from their partner (how recently), etc., all with the intention of monetizing that most intimate data that makes up people’s lives.

4) Add email +password to the list of items covered by the ‘negligent data breach’ section to help curb ID theft. Your sensitive information (i.e. your health or financial data) would now include your email and password; and if mishandled, you would be able to sue the business for damages, without having to prove an actual financial loss (and let’s face it—who can ever link the data breach from one company, to the ID theft six months later. It’s impossible, and this would change that).

5) Establish the California Privacy Protection Agency to protect privacy for Californians, funded with $10M from the State’s General Fund

a. This funding would equate to roughly the same number of privacy enforcement staff as the FTC has to police the entire country (the FTC has 40 privacy professionals).
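For a sense of scale, 250 acres is roughly one square kilometer, so item 3 amounts to letting consumers force location coarsening to about a one-kilometer grid. A minimal sketch of one way such coarsening could work, with the grid approach and all names being assumptions rather than anything the CPRA prescribes:

```python
import math

ACRE_M2 = 4046.8564224             # square meters per acre
CELL_M = math.sqrt(250 * ACRE_M2)  # side of a ~250-acre square cell, ~1,006 m

def coarsen(lat, lon, cell_m=CELL_M):
    """Snap a precise coordinate to the center of a ~250-acre grid cell."""
    deg_lat = cell_m / 111_320                                  # meters per degree of latitude
    deg_lon = deg_lat / max(math.cos(math.radians(lat)), 1e-9)  # degrees widen away from equator
    snapped_lat = (math.floor(lat / deg_lat) + 0.5) * deg_lat
    snapped_lon = (math.floor(lon / deg_lon) + 0.5) * deg_lon
    return snapped_lat, snapped_lon

# A precise San Francisco coordinate becomes the center of a ~1 km cell.
print(coarsen(37.774929, -122.419416))
```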

A predecessor bill, “The California Privacy Rights and Enforcement Act of 2020” (CPREA), was released last fall (See 3 October 2019 Technology Update for write up.) At the time, Californians for Consumer Privacy Chair and Founder Alistair Mactaggart explained his reasoning for a second ballot initiative: “First, some of the world’s largest companies have actively and explicitly prioritized weakening the CCPA…[and] [s]econd, technological tools have evolved in ways that exploit a consumer’s data with potentially dangerous consequences.”

As noted, changes to the California Code made by ballot initiative are much harder to change or modify than the legislative route for enacting statutes. Notably, the CPRA would limit future amendments to only those in furtherance of the act, which would rule out any attempts to weaken or dilute the new regime. Consequently, industry and allied stakeholders can be expected to fight this ballot initiative.

As mentioned, stakeholders in Congress may be motivated by this new effort to resolve differences and reach agreement on a bill to govern privacy and protect data at the federal level, sweeping aside state laws like the CPRA. However, a new, stronger law in California may cause key Democrats to dig in and insist on the policy changes Republicans have been reluctant to give way on, such as a federal private right of action. In such a scenario, it is conceivable Democrats would use their leverage to extract even more changes from Republicans. As it stands, Republicans have moved a fair distance from their original positions on privacy and data protection and may be willing to cede more policy ground.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Senate Commerce Republicans Vow To Introduce Privacy Bill To Govern COVID-19 Apps and Tech

Key Republican stakeholders on privacy legislation float a bill on COVID-19 relating to privacy that seems unlikely to garner the necessary Democratic buy-in to advance.  

Late last week, key Republicans on the Senate Commerce, Science, and Transportation Committee announced they would introduce the “COVID-19 Consumer Data Protection Act,” which would provide new privacy and data security protections for the use of a COVID-19 contact tracing app and similar technologies. To date, text of the legislation has not been released, and so any analysis of the bill is derived from a short summary issued by the committee and reports from media outlets that have apparently been provided a copy of the bill.

Based on this information, to no great surprise, the basic structure of the bill tracks privacy and data protection legislation previously introduced by the co-sponsors of the new bill: Chair Roger Wicker (R-MS) (See here for analysis of the “Consumer Data Privacy Act of 2019”) and Senators John Thune (R-SD), Jerry Moran (R-KS) (See here for analysis of “Consumer Data Privacy and Security Act of 2020” (S.3456)), and Marsha Blackburn (R-TN) (See here for analysis of the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116)). In short, people would be provided with notice about what information the app collects, how it is processed, and with whom and under what circumstances this information will be shared. Then a person would be free to make an informed choice about whether or not she wants to consent and allow the app or technology to operate on her smartphone. The Federal Trade Commission (FTC) and state attorneys general would enforce the new protections. As there was no mention of a private right of action, and given these Members’ opposition to such provisions, it is likely the bill does not provide such redress. Moreover, according to media reports, the bill would preempt state laws contrary to its provisions, which would be another likely non-starter among Democrats.

Wicker, Thune, Moran, and Blackburn claimed in their press release that their bill “would provide all Americans with more transparency, choice, and control over the collection and use of their personal health, geolocation, and proximity data…[and] would also hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.”

Wicker, Thune, Moran, and Blackburn provided this summary of the “COVID-19 Consumer Data Protection Act:”

  • Require companies under the jurisdiction of the Federal Trade Commission to obtain affirmative express consent from individuals to collect, process, or transfer their personal health, geolocation, or proximity information for the purposes of tracking the spread of COVID-19.
  • Direct companies to disclose to consumers at the point of collection how their data will be handled, to whom it will be transferred, and how long it will be retained.
  • Establish clear definitions about what constitutes aggregate and de-identified data to ensure companies adopt certain technical and legal safeguards to protect consumer data from being re-identified.
  • Require companies to allow individuals to opt out of the collection, processing, or transfer of their personal health, geolocation, or proximity information.
  • Direct companies to provide transparency reports to the public describing their data collection activities related to COVID-19.
  • Establish data minimization and data security requirements for any personally identifiable information collected by a covered entity.
  • Require companies to delete or de-identify all personally identifiable information when it is no longer being used for the COVID-19 public health emergency.
  • Authorize state attorneys general to enforce the Act.
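The third bullet’s “technical and legal safeguards” against re-identification are left undefined, but one common technical ingredient is replacing direct identifiers with keyed hashes and generalizing quasi-identifiers. A minimal sketch under those assumptions (nothing below comes from the bill itself):

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # assumed secret; in practice stored outside source code

def pseudonymize(identifier):
    """Replace a direct identifier with a keyed hash so records can be
    linked internally without exposing the raw value."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def generalize_age(age):
    """Generalize an exact age into a five-year band (a quasi-identifier)."""
    low = (age // 5) * 5
    return f"{low}-{low + 4}"

record = {"email": "consumer@example.com", "age": 34, "zip": "66044"}
deidentified = {
    "subject": pseudonymize(record["email"]),
    "age_band": generalize_age(record["age"]),
    "zip3": record["zip"][:3],  # ZIP truncation, another common generalization
}
print(deidentified)
```

Whether safeguards of this sort would suffice is exactly what the bill’s “clear definitions” of aggregate and de-identified data would have to settle.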

If such legislation were to pass, it would add to the patchwork of privacy and data security laws already enacted that are geared to addressing certain sectors or populations (e.g. the “Health Insurance Portability and Accountability Act” (HIPAA) protects some healthcare information and the “Children’s Online Privacy Protection Act” (COPPA) broadly protects children online).

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

The BROWSER Act (S. 1116)

First things first, if you would like to receive my Technology Policy Update, email me. You can find some of these Updates from 2019 here.

My apologies. I thought I had posted this write up and others on the various privacy and data protection bills. In any event, I’ll be doing some remedial work of a sort in putting these materials up, which is not to say I see any great movement on Congress passing a U.S. privacy and data protection bill.

In this post, we will examine one of the Senate bills sponsored by Senators Marsha Blackburn (R-TN), Tammy Duckworth (D-IL), and Martha McSally (R-AZ): the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116). S. 1116 would set up an enhanced notice and consent regime for consumers policed by the Federal Trade Commission (FTC), but only for certain classes of private sector entities collecting, sharing, selling, and using consumer information, mainly broadband providers and so-called “edge providers,” that is, entities like Google and Facebook that provide services online. This bill is much closer to the current FTC means of regulating privacy and data security, even though the scope of the agency’s jurisdiction to police privacy practices for some types of consumer information would be expanded.

As noted, this bill would cover only “broadband internet access service[s]” and “edge service[s],” which as these terms are defined in the bill would mostly be technology and communications companies. Therefore, this bill would sweep much more narrowly than many of the other privacy bills introduced thus far. Accordingly, S. 1116 defines “broadband internet access service” as “a mass-market retail service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service, but excluding dial-up internet access service.” The bill also provides a definition of “edge service:” “a service provided over the internet—

for which the provider requires the user to subscribe or establish an account in order to use the service;

that the user purchases from the provider of the service without a subscription or account;

by which a program searches for and identifies items in a database that correspond to keywords or characters specified by the user, used especially for finding particular sites on the world wide web; or

by which the user divulges sensitive user information; and

includes a service described in subparagraph (A) that is provided through a software program, including a mobile application.

Clearly, big technology companies like Facebook, Google, Instagram, Amazon, etc. would be classified as “edge providers.” Moreover, the definition of broadband internet access service would clearly include all of the internet service providers like Comcast or AT&T but would also seem to include cell phone service providers like Verizon and T-Mobile.

All covered service providers must “provide a user of the service with clear and conspicuous notice of the privacy policies of the provider with respect to the service.” Additionally, covered service providers must also give users “clear and conspicuous advance notice of any material change to the privacy policies of the provider with respect to the service.”

Whether consumers need to opt-in or opt-out on data use will turn on whether the information is “sensitive” or not. Under S. 1116, “sensitive user information” includes any of the following:

  • Financial information.
  • Health information.
  • Information pertaining to children under the age of 13.
  • Social Security number.
  • Precise geolocation information.
  • Content of communications.
  • Web browsing history, history of usage of a software program (including a mobile application), and the functional equivalents of either.

Among the information that would be deemed non-sensitive under the bill are metadata (aka call detail records) from usage of a phone, such as the addressee and time of a communication, one’s order history from a site like Amazon, matters relating to employment, and other categories of information not enumerated above. Additionally, the bill deems “precise geolocation information” sensitive, suggesting “geolocation information” that is less than precise might be non-sensitive. So, perhaps a trip to a mall would not be considered “precise” but the stores a customer visits might be?

Covered service providers would need to “obtain opt-in approval from a user to use, disclose, or permit access to the sensitive user information of the user.” However, what constitutes the “approval” necessary to satisfy this requirement is not spelled out in the bill. Conversely, the provider of covered services must only offer consumers the option to opt out of the use, disclosure, and accessing of their non-sensitive personal information. Again, “approval” is a key word, as covered service providers need only obtain a consumer’s “opt-out approval” for non-sensitive information.
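The two-tier structure can be pictured as a simple gate: sensitive categories require recorded opt-in approval, while non-sensitive categories are permitted unless the user has refused. A minimal sketch of that logic, with the category names and consent records assumed for illustration:

```python
# Categories the BROWSER Act enumerates as sensitive (paraphrased).
SENSITIVE = {
    "financial", "health", "children_under_13", "ssn",
    "precise_geolocation", "communications_content", "browsing_history",
}

def may_use(category, opted_in, opted_out):
    """Opt-in gate for sensitive data; opt-out gate for everything else."""
    if category in SENSITIVE:
        return category in opted_in   # silence is not approval
    return category not in opted_out  # permitted unless the user refused

# A user who opted in only to "health" and opted out of "order_history":
print(may_use("health", {"health"}, {"order_history"}))         # True
print(may_use("browsing_history", {"health"}, set()))           # False
print(may_use("order_history", {"health"}, {"order_history"}))  # False
```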

As is usually the case, there are some exceptions to this seemingly general rule against using, collecting, sharing, or selling sensitive user information. Notably, in the following situations, covered service providers need not obtain opt-in approval from consumers:

(1) In providing the covered service from which the information is derived, or in providing services necessary to, or used in, the provision of the service.

(2) To initiate, render, bill for, and collect for the covered service.

(3) To protect the rights or property of the provider, or to protect users of the covered service and other service providers from fraudulent, abusive, or unlawful use of the service.

(4) To provide location information or non-sensitive user information—

(A) to a public safety answering point, emergency medical service provider or emergency dispatch provider, public safety, fire service, or law enforcement official, or hospital emergency or trauma care facility, in order to respond to the request of the user for emergency services;

(B) to inform the legal guardian of the user, or members of the immediate family of the user, of the location of the user in an emergency situation that involves the risk of death or serious physical harm; or

(C) to providers of information or database management services solely for purposes of assisting in the delivery of emergency services in response to an emergency.

(5) As otherwise required or authorized by law.

Covered service providers would not be able to require consumers to waive their privacy rights in exchange for use of a service. The bill stipulates that “[a] provider of a covered service may not—

(1) condition, or effectively condition, provision of the service on agreement by a user to waive privacy rights guaranteed by law or regulation, including this Act; or

(2) terminate the service or otherwise refuse to provide the service as a direct or indirect consequence of the refusal of a user to waive any privacy rights described in paragraph (1).”

The FTC would enforce this new privacy scheme under its existing Section 5 powers to police unfair and deceptive practices and, crucially, not as if violations were violations of an existing FTC regulation against unfair and deceptive practices. If the FTC is seeking to punish a violation of such a regulation, it may seek civil fines in the first instance. This is in contrast to the FTC’s general powers to punish unfair and deceptive practices with respect to data security and privacy violations, which are limited to monetary remedies in the form of equitable relief such as disgorgement and restitution. The BROWSER Act would be at odds with most other privacy bills, which contain language such as “[a] violation of this Act or a regulation promulgated under this Act shall be treated as a violation of a rule under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)) regarding unfair or deceptive acts or practices.”

Again unlike other bills, the BROWSER Act does not provide the FTC with the authority to promulgate regulations under the Administrative Procedure Act (APA) process, and to the extent the agency would be able to write regulations to implement the bill, it would be under the much more lengthy and involved Moss-Magnuson procedures that have effectively halted the FTC’s regulatory activity (see “It’s Time to Remove the ‘Mossified’ Procedures for FTC Rulemaking” for a summary of these procedures). Therefore, the FTC would essentially extend to privacy regulation its current practice of penalizing companies for not maintaining “reasonable” data security standards on a case-by-case basis without providing any bright lines to assure companies of which practices pass muster.

The FTC’s jurisdiction would be expanded, however, to police the privacy practices under the bill of broadband providers that would otherwise be subject to the jurisdiction and enforcement powers of the Federal Communications Commission (FCC).

The bill would preempt state privacy laws. To wit, “[n]o State or political subdivision of a State shall, with respect to a provider of a covered service subject to this Act, adopt, maintain, enforce, or impose or continue in effect any law, rule, regulation, duty, requirement, standard, or other provision having the force and effect of law relating to or with respect to the privacy of user information.” Of course, preemption of state laws is a non-starter for many Democrats but a sine qua non for many Republicans, leaving this as an area of ongoing dispute.

Regarding another issue that has split Democrats and Republicans in the past regarding data security legislation, the BROWSER Act would not provide a role for state attorneys general to enforce the new regulatory regime. However, Republicans may be willing to give on this issue provided consumers have no private right of action, and the BROWSER Act would not allow consumers to sue those providing covered services for violating the bill.

Moran Releases Long-Awaited Privacy Bill Without Blumenthal

Senator Jerry Moran (R-KS) has released his long-awaited privacy and data security bill, the “Consumer Data Privacy and Security Act of 2020” (S. 3456), which is not cosponsored by Senator Richard Blumenthal (D-CT) even though the two Senators, along with others, have been in talks since late 2018 to draft a bipartisan bill. Of course, Moran chairs the Senate Commerce, Science, and Transportation Committee’s Manufacturing, Trade, and Consumer Protection Subcommittee and so is a key stakeholder with input on any privacy and data security legislation coming from that committee. However, Moran’s bill is likely a nonstarter with Senate and House Democrats because it does not provide people with a private right of action and it would preempt state laws like the “California Consumer Privacy Act” (CCPA) (AB 375). Moreover, the Federal Trade Commission’s (FTC) ability to obtain civil fines would be limited to situations where the entity in question had actual knowledge of the violations, as opposed to the standard many agencies use to enforce: constructive knowledge (i.e., knew or should have known). This, too, is contrary to not only the Democratic privacy bills but also some of the Republican bills, which would allow the FTC to levy fines on the basis of constructive knowledge.

However, like almost all the other bills, the “Consumer Data Privacy and Security Act of 2020” would require covered entities to obtain express affirmative consent to collect and process the personal data of people after providing extensive disclosure and notice about how and with whom their personal information would be shared. Likewise, this bill would give people certain rights, such as a right to access, correct, delete, and port their personal data. People would also be granted the right of erasure, under which a covered entity must delete or de-identify the personal data of any person who submits a verified request. However, small businesses would be exempted from granting requests to access and correct. There are, again like many other privacy bills, circumstances under which a covered entity may decline to grant a request to exercise these rights. For example, if doing so would violate a law or legal process, then the covered entity could say no to a person. Likewise, if a person’s life is in imminent danger, then a request could also be denied. There are other such circumstances, some of which privacy and civil liberties advocates will assert are loopholes so wide that the rights cease to be meaningful, as they have argued of some of the other bills.

In terms of who would be subject to the Act, entities covered by the bill would be those currently subject to FTC jurisdiction as well as non-profits and common carriers. Moreover, the bill has fairly expansive definitions of “personal data” and “sensitive personal data,” like many of the other bills.

Like some of the privacy bills, large covered entities would have additional privacy obligations and responsibilities. Entities that collect and process the personal data of 20 million or more people per year, or the sensitive personal data of 1 million or more people per year, must have a privacy officer to advise the entity on compliance and monitoring. These large entities must also take extra steps when making material changes to their privacy policies, including conducting privacy impact assessments and developing and implementing a comprehensive privacy policy.

The “Consumer Data Privacy and Security Act of 2020” tracks with other privacy bills in requiring that covered entities implement data security safeguards to protect the integrity, confidentiality, and security of personal data. There would be a sliding scale of sorts: less sensitive data would require less rigorous protection, and conversely, the more sensitive the data, the more stringent the safeguards that must be used. Covered entities must also conduct periodic, regular risk assessments and then remediate any risks turned up. Covered entities must also ensure their service providers and any third parties with whom they share personal data institute data security standards, albeit at a lower defined standard than the covered entity itself. For example, the latter entities must only protect the security and confidentiality of the information they hold, collect, or process for a covered entity and are not responsible for the integrity of the information.

When a covered entity uses a service provider to collect or process personal data, it must use a binding contract and perform due diligence to ensure the service provider has the appropriate procedures and controls to protect the privacy and security of personal data. The covered entity also has the responsibility to investigate the service provider’s compliance with the act if a reasonable person would determine there is a high probability of future non-compliance.

As noted, the FTC would be the federal enforcer of the Act under the rubric of its current Section 5 powers to seek a range of injunctive and equitable remedies to punish unfair and deceptive practices. The FTC would also be able to seek civil fines of up to $43,530 per violation, but only for knowing violations, and there is no language for adjusting the per-violation fine amount for inflation, a power the FTC otherwise has. State attorneys general could enforce the Act just as the FTC could.

The bill expressly preempts state laws on privacy and data security and makes clear that state laws may not interfere with HIPAA, Gramm-Leach-Bliley, FERPA, and others. Moreover, the “Consumer Data Privacy and Security Act of 2020” would not affect federal privacy laws like Gramm-Leach-Bliley, COPPA, FCRA, and others, and entities currently subject to those federal laws that comply with their privacy and data security requirements will be deemed in compliance with the Act.

Democrat Proposes Creating Data Protection Authority To Address Privacy

Another Senate Democrat has introduced a privacy and data security bill. Senator Kirsten Gillibrand’s (D-NY) “Data Protection Act of 2020” (S. 3300) would create a federal data protection authority along the lines of the agencies each European Union member nation has. This new agency would be the primary federal regulator of privacy laws, including a number of existing laws that govern the privacy practices of the financial services industry, the healthcare industry, and others. It would displace the Federal Trade Commission (FTC) on privacy matters, receiving similar enforcement authority but with the added ability to levy fines in the first instance. However, state laws would be preempted only if they are contrary to the new regime, and state attorneys general could enforce the new law. A private right of action would not, however, be created under this law.

The bill would establish the Data Protection Agency (DPA), an independent agency headed by a presidentially nominated and Senate-confirmed Director who would serve a five-year term, or longer until a successor is nominated and confirmed. Hence, Directors would not serve at the pleasure of the President and would be independent of the political pressure Cabinet members may feel from the White House. However, the Director may be removed for “inefficiency, neglect of duty, or malfeasance in office.” Generally, the DPA “shall seek to protect individuals’ privacy and limit the collection, disclosure, processing and misuse of individuals’ personal data by covered entities, and is authorized to exercise its authorities under this Act for such purposes.”

Personal data is defined widely as “any information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular individual or device,” including a number of different enumerated types of data such as medical information, biometric information, browsing history, geolocation data, political information, photographs and videos not password protected, and others. The bill also creates the term “high-risk data practice” to cover the collection or processing of personal data that is sensitive, novel, or may have adverse, discriminatory real-world effects, which would be subject to heightened scrutiny and regulation. For example, new high-risk data practices “or related profiling techniques” may not be used before the DPA conducts “a formal public rulemaking process,” which under administrative law is usually understood to mean a lengthy process including a public hearing.

Those entities covered by the bill are “any person that collects, processes, or otherwise obtains personal data with the exception of an individual processing personal data in the course of personal or household activity,” an incredibly broad definition that sweeps in virtually any commercial entity collecting or processing personal data. There is no carve out for businesses below a certain revenue level or number of persons whose data they collect and process. Large covered entities would be subject to extra scrutiny from the DPA and extra responsibility. Entities falling into this category are those with “gross revenues that exceed $25,000,000;” that buy, receive for the covered entity’s commercial purposes, sell, or disclose for commercial purposes the personal information of 50,000 or more individuals, households, or devices; or that derive “50 percent or more of its annual revenues from the sale of personal data.” The DPA “may require reports and conduct examinations on a periodic basis” from large covered entities to ensure compliance with federal privacy laws, examine their practices, compliance processes, and procedures, “detecting and assessing associated risks to individuals and groups of individuals;” and “requiring and overseeing ex-ante impact assessments and ex-post outcome audits of high-risk data practices to advance fair and just data practices.”
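To make those thresholds concrete, here is a minimal illustrative sketch encoding the three prongs described above. The bill itself, of course, contains no such test; the function and parameter names are invented for illustration only.

```python
# Illustrative sketch only: the three "large covered entity" prongs of the
# "Data Protection Act of 2020" as described above. All names are invented.

def is_large_covered_entity(gross_revenue: float,
                            records_handled: int,
                            revenue_from_data_sales: float) -> bool:
    """Return True if any one of the three prongs is met."""
    # Prong 1: gross revenues that exceed $25,000,000
    if gross_revenue > 25_000_000:
        return True
    # Prong 2: buys, receives, sells, or discloses the personal information
    # of 50,000 or more individuals, households, or devices
    if records_handled >= 50_000:
        return True
    # Prong 3: derives 50 percent or more of annual revenues from the sale
    # of personal data
    if gross_revenue > 0 and revenue_from_data_sales / gross_revenue >= 0.5:
        return True
    return False

# Example: a small data broker with most of its revenue from data sales
# meets the third prong despite falling under the other two thresholds.
print(is_large_covered_entity(2_000_000, 10_000, 1_500_000))  # True
```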

Most notably, it appears that the enforcement and rulemaking authority of current privacy statutes would be transferred to the agency, including Title V of the “Financial Services Modernization Act of 1999” (aka Gramm-Leach-Bliley), Subtitle D of the Health Information Technology for Economic and Clinical Health Act (i.e., HIPAA’s privacy provisions), the “Children’s Online Privacy Protection Act,” and the “Fair Credit Reporting Act.” Specifically, the bill provides “[t]he Agency is authorized to exercise its authorities under this Act and Federal privacy law to administer, enforce, and otherwise implement the provisions of this Act and Federal privacy law,” and it defines “Federal privacy law” to include all the aforementioned statutes. Consequently, the agencies currently enforcing the privacy provisions of those statutes and related regulations would turn over enforcement authority to the DPA. This, of course, is not without precedent; Dodd-Frank required the FTC to relinquish some of its jurisdiction to the Consumer Financial Protection Bureau (CFPB), to cite but one recent example. In any event, this approach sets the “Data Protection Act of 2020” apart from a number of the privacy bills, and aside from the policy elegance of housing privacy statutes and regulations at one agency, it would likely cause the current regulators and the committees that oversee them to oppose this provision of the bill.

The DPA would receive authority to punish unfair and deceptive practices (UDAP) regarding the collection, processing, and use of personal data and, unlike the FTC, notice-and-comment rulemaking authority to effectuate this authority as needed. However, like the FTC, before the agency may use its UDAP powers regarding unfairness, it must establish that the practice causes or is likely to cause substantial injury, is not reasonably avoidable by the consumer, and is not outweighed by countervailing benefits.

The DPA would receive many of the same authorities the FTC currently has to punish UDAP violations, including injunctions, restitution, disgorgement, damages, and other monetary relief, along with the ability to levy civil fines. However, the fine structure is tiered, with reckless and knowing violations subject to much higher liability. The first tier would expose entities to fines of $5,000 per day that a violation is occurring or that the entity fails to heed a DPA order; the language could use clarification as to whether this means per violation per day or just a per-day fine regardless of the number of separate violations. The second tier, for reckless violations, allows fines as high as $25,000, and the third tier, for knowing violations, as high as $1,000,000. Before levying a fine through its administrative procedures, the DPA must give entities liable to fines notice and an opportunity for a hearing, or else go to federal court to seek a judgment. The DPA could also enforce the other federal privacy laws under their own terms without bringing the aforementioned authority to bear.
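Laid out schematically, the tiers might look like the following sketch. It assumes the flat per-day reading of the first tier flagged above and treats the second and third tiers as per-violation caps; those semantics, and all the names, are assumptions for illustration, not the bill’s text.

```python
# Illustrative sketch only: the tiered civil fine structure described above.
# Assumes the first tier is a flat per-day fine regardless of how many
# separate violations occur -- the reading the bill leaves ambiguous -- and
# treats the second and third tiers as per-violation caps.

FIRST_TIER_PER_DAY = 5_000      # ongoing violation or unheeded DPA order
RECKLESS_CAP = 25_000           # second tier: reckless violations
KNOWING_CAP = 1_000_000         # third tier: knowing violations

def max_exposure(days_in_violation: int, reckless: bool, knowing: bool) -> int:
    """Rough upper bound on fine exposure under the tiers as described."""
    if knowing:
        return KNOWING_CAP
    if reckless:
        return RECKLESS_CAP
    return FIRST_TIER_PER_DAY * days_in_violation

# Example: 30 days of an ongoing, non-reckless, non-knowing violation
print(max_exposure(30, reckless=False, knowing=False))  # 150000
```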

There would be no preemption of state laws to the extent such privacy laws are not inconsistent with the “Data Protection Act of 2020,” and states may maintain or institute stronger privacy laws so long as they do not run counter to this statute. This is the structure used under Gramm-Leach-Bliley, and so there is precedent. Hence, it is possible there would be a federal privacy floor above which some states like California could regulate. However, the bill would not change the preemption status quo of the federal privacy laws the DPA will be able to enforce, and those federal statutes that preempt state laws would continue to do so. State attorneys general could bring actions in federal court to enforce this law, but no federal private right of action would be created.

Of course, the only other major privacy and data security bill that would create a new agency to regulate these matters instead of putting the FTC in charge is Representatives Anna Eshoo (D-CA) and Zoe Lofgren’s (D-CA) bill, the “Online Privacy Act of 2019” (H.R. 4978), which would create a U.S. Digital Privacy Agency (DPA) to supersede the FTC on many privacy and data security issues. For many sponsors of privacy bills, creating a new agency may be seen as a few bridges too far, and so they have opted to house new privacy regulation at the FTC.

Finally, as can be seen in her press release, Gillibrand’s bill has garnered quite a bit of support from privacy and civil liberties advocates, some of whom generally endorse the idea of a U.S. data protection authority rather than this bill per se. Nonetheless, this is another bill that is on the field, and it remains to be seen how much Gillibrand will engage on the issue. It also bears noting that she serves on none of the committees of jurisdiction in the Senate.

Revised Data Care Act Released

Senator Brian Schatz (D-HI) and his cosponsors have reintroduced a slightly changed version of the “Data Care Act” (S. 2961), a privacy bill that would impose a fiduciary duty of care upon many entities that collect and use the personal data of people. In December 2018, Schatz and his cosponsors introduced the “Data Care Act” (S. 3744) at a time when the Senate Commerce, Science, and Transportation Committee and other committees of jurisdiction had just begun examining the issues related to privacy in light of the recent passage of the “California Consumer Privacy Act” (CCPA) (A.B. 375). Fourteen other Democratic Senators joined Schatz, including presidential candidates Senators Michael Bennet (D-CO), Amy Klobuchar (D-MN), and Cory Booker (D-NJ). This bill took a novel approach to the issues presented by the mass collection and processing of personal data by extending the concept of fiduciary responsibility currently binding on health care professionals and attorneys with respect to their patients’ and clients’ information to “online service providers.” Most of the original cosponsors are again sponsoring this bill; however, no Republicans cosponsored the first or current iteration of the bill, suggesting the fiduciary framework is not appealing to Senate Republicans.

Of course, Schatz and Klobuchar are also sponsoring the “Consumer Online Privacy Rights Act” (COPRA) (S. 2968) (see here for more analysis) along with Senate Commerce, Science, and Transportation Committee Ranking Member Maria Cantwell (D-WA). COPRA would empower the Federal Trade Commission (FTC) to police privacy and data security violations through augmented authority, not preempt state laws to the extent they provide greater protection, largely leave in place existing federal privacy statutes such as the “Financial Services Modernization Act of 1999” (aka Gramm-Leach-Bliley) and the “Health Insurance Portability and Accountability Act of 1996” (HIPAA), and allow individuals to sue.

Incidentally, Senator Ed Markey (D-MA) is also sponsoring both bills, and he has his own bill, the “Privacy Bill of Rights Act” (S. 1214), which was one of the only bills to get an A in the Electronic Privacy Information Center’s report on privacy bills. (See here for more analysis.) Finally, Klobuchar had also released a narrower bill with a Republican cosponsor, the “Social Media Privacy Protection and Consumer Rights Act of 2019” (S. 189), that would require major tech companies to give consumers an opportunity to opt in or opt out of the companies’ data usage practices after offering enhanced notice of the practices for which the personal data may be used. (See here for more analysis.)

And, Schatz has been in negotiations with other members of the Senate Commerce, Science, and Transportation Committee with the goal of developing a bipartisan bill to regulate privacy at the federal level. As discussed in past issues of the Technology Policy Update, stakeholders in both the House and Senate continue to negotiate privacy bills, but significant disagreements have been reported regarding whether such a bill would include a private right of action, whether it would preempt the CCPA and other state laws, and whether a new regime would consist primarily of enhanced notice and consent or would instead bar certain conduct outright, among other issues.

Turning to the Data Care Act, this legislation was built on a concept fleshed out by law professor Jack Balkin in his article “Information Fiduciaries and the First Amendment,” which would place duties on companies collecting and using consumer data similar to those that lawyers and doctors must meet in how they handle client and patient information. Balkin explained that these so-called “information fiduciaries” should “have special duties to act in ways that do not harm the interests of the people whose information they collect, analyze, use, sell, and distribute.”

In short, under the “Data Care Act,” “online service providers” would be severely limited in how they collect, share, and sell personally identifiable information (PII) (known as “individual identifying data” in the bill), for these companies would need to treat their customers’ PII as privileged and deserving of a greater level of protection, much like the HIPAA regulations impose this standard on health care providers or bar associations’ rules do on attorneys. What’s more, the scope of who is an online service provider would seem to encompass most consumer-facing companies doing business on the internet.

An “online service provider” is defined as an entity “engaged in interstate commerce over the internet or any other digital network; and in the course of business, collects individual identifying data about end users, including in a manner that is incidental to the business conducted.” This very sweeping definition would cover almost any business or entity doing business in the U.S., even one not operating across state lines, given how broadly the Supreme Court has often construed the Commerce Clause. However, unlike other bills, the FTC would have the discretionary authority to exclude categories of online service providers from the fiduciary duties the bill would otherwise impose; the agency is directed to consider the privacy risks posed by each category of online service provider. Normally, the other privacy bills create a threshold below which limited obligations attach for smaller and mid-sized businesses, except for data brokers.

The bill requires that “[a]n online service provider shall fulfill the duties of care, loyalty, and confidentiality” towards consumers’ personal information, which is also broadly defined in the bill.  The duty of care requires online service providers to “reasonably” safeguard “individual identifying data” from unauthorized access and notify consumers of any breach of this duty, subject to FTC regulations that would be promulgated. The duty of loyalty would require online service providers to not use the information in a way that benefits them to the detriment of consumers, including uses that would result in reasonably foreseeable material physical or financial harm to the consumer. Finally, the duty of confidentiality limits the disclosure or sale of consumers’ information to instances where the duties of care and loyalty are observed (i.e. when the information must be safeguarded and not used to the detriment of consumers).

Moreover, the bill would require that should an online service provider wish to share or sell consumers’ information with a third party, it would need to enter into a contract requiring the other party to meet the same duties of care, loyalty, and confidentiality. The revised bill further tightens this requirement by stipulating that “[i]f an online service provider transfers or otherwise provides access to individual identifying data to another person, the requirements of [the duties of loyalty, care, and confidentiality] shall apply to such person with respect to such data in the same manner that such requirements apply to the online service provider.” Note that this additional requirement pertains to the transfer of PII to any person and not just other online service providers, meaning virtually any transfer would be captured by this standard, thus closing a potential loophole in the bill.

The FTC would enforce the act and would have the authority to levy fines in the first instance for violations, but state attorneys general would also be able to bring actions for violations in the event the FTC does not act or after FTC action. This latter power has long been a Democratic priority in the realm of data security and may be a non-starter with Republicans. Moreover, the bill does not preempt state laws, meaning the FTC could investigate a violation under this act while states investigate under their own laws. The FTC would be given authority under the Administrative Procedure Act (APA) to promulgate regulations instead of under the much more onerous Magnuson-Moss rulemaking procedures the FTC must otherwise use. These regulations would include rules on breach notification and possible exemptions from the duties that would otherwise apply to online service providers (e.g., small companies), and the rulemaking grant appears to extend more broadly as well. The bill also expands the FTC’s jurisdiction to cover non-profit entities and common carriers that may also be online service providers.

Unlike many of the Democratic bills, there is no private right of action, which would disappoint many stakeholders on the left but would conversely please many industry and Republican stakeholders. Nor would people have the explicit right to access, correct, delete, or port their information as they would under other bills; and yet, the fiduciary concept would necessarily entail some of these rights. There are no provisions on obtaining a person’s consent, for the onus is entirely on how the covered entity handles the information. In short, this seems to be a framework that would sidestep issues related to notice and consent regimes. Additionally, unlike almost all the other bills, there are no detailed exceptions under which a person’s consent would not be needed to collect and process information (e.g., for security processes, to protect against fraud, or to develop new products).