Last week, we took a look at two bills that approach privacy issues from the vantage of data ownership: Senator John Kennedy’s (R-LA) “Own Your Own Data Act” (S. 806), and Senators Mark Warner (D-VA) and Josh Hawley’s (R-MO) “Designing Accounting Safeguards To Help Broaden Oversight and Regulations on Data” (S. 1951). This week, we are going to be time-traveling, in a way, as we will look at the last bill put forth by a White House on privacy, the discussion draft of the “Consumer Privacy Bill of Rights Act of 2015” released by the Obama Administration. This bill was released in conjunction with a report on privacy issues and then proceeded to go nowhere as there was scant appetite on Capitol Hill to legislate on privacy. Nonetheless, this bill is fairly comprehensive and contains a number of concepts that are present in the most recent bills.
The bill has a very broad definition of “personal data,” at least as broad as some of the more consumer-friendly bills, but it has a safe harbor for de-identified information, which would not be considered “personal data” for purposes of the act. To wit, “personal data” are “any data that are under the control of a covered entity, not otherwise generally available to the public through lawful means, and are linked, or as a practical matter linkable by the covered entity, to a specific individual, or linked to a device that is associated with or routinely used by an individual.” The bill provides examples of what might be personal data but makes clear that the enumerated examples are not the only possible information that will be covered under the bill. Aside from the usual types of data named in privacy bills, a few bear mention. First, “biometric identifiers,” such as fingerprints or voice prints, are considered “personal data,” but biometric data more generally are not. Additionally, genetic data are not identified either. However, the base definition of personal data is so broad that it would be a hard argument to make that biometric and genetic data, much of which is easily linkable to individuals or devices and not generally made available to the public, do not qualify. Besides, the definition itself does state that the examples “include” but are “not limited to” those specifically spelled out.
However, like a number of the other bills, the “Consumer Privacy Bill of Rights Act of 2015” provides detailed exceptions to what might otherwise be “personal data.” Consequently, de-identified, deleted, and cybersecurity data are not personal data subject to the requirements of the bill. Regarding de-identified data, a covered entity would need to render the data such that it could not reasonably be expected to be linked to a specific person or device. Presumably encryption would suffice so long as the encryption keys are not compromised, and other processes such as anonymization would qualify. However, such covered entities must commit publicly to not try to re-identify such data and put in place processes to execute this promise. Moreover, any third party with whom the covered entity shares the de-identified data must also make the same public commitment not to re-identify such data. The definition of “deleted data” is pretty straightforward and squares with the common understanding of what deleting is.
That the bill singles out “cybersecurity data” dates the bill. As many might recall, this was the point during the Obama Administration when there was a push to enact cybersecurity information sharing legislation that ultimately culminated in Title I of the “Cybersecurity Act of 2015” (Division N of P.L. 114-113). Consequently, this definition is tailor-made for the new procedures that bill set up: “cyber threat indicators collected, processed, created, used, retained, or disclosed in order to investigate, mitigate, or otherwise respond to a cybersecurity threat or incident, when processed for those purposes.” Not surprisingly, “cyber threat indicator” is also defined, and the salient part of the legislation is that for any such information, there must be “reasonable efforts…to remove information that can be used to identify specific persons reasonably believed to be unrelated to the cyber threat.” Employee data are also excepted, but this definition is tightly written and would pertain only to the types of uses for which employment information has usually been put, and any uses beyond these would possibly render the information “personal data” subject to enforcement. Also of note, this definition does not include the personal data of job applicants, but those data are excluded for small businesses in the definition of a covered entity.
Those entities covered by the new privacy regime are any person “that collects, creates, processes, retains, uses, or discloses personal data in or affecting interstate commerce” subject to exceptions, namely:
- federal, state, and local governments, including any contractors and agents working on their behalf;
- any “natural person” acting in a “de minimis” capacity in collecting and processing personal data;
- any entity with 25 or fewer employees, with respect to its processing of job applicants’ personal data “in the ordinary course”;
- any other entity, or class of entities, the FTC identifies through a rulemaking; and
- those entities using personal data “to conduct research relating directly to security threats to or vulnerabilities in devices or networks, or to address threats or vulnerabilities identified by that research,” subject to additional security and disclosure requirements.
The “covered entity” exception list includes a more detailed, complicated exception: an entity with 5 or fewer employees that “collects, creates, processes, uses, retains, or discloses” the personal data of fewer than 10,000 people during any 12-month period might also be excepted. However, such an entity may not “knowingly collect, use, retain, or disclose any information that is linked with personal data and includes, or relates directly to:
- that individual’s medical history;
- national origin;
- sexual orientation;
- gender identity;
- religious beliefs or affiliation;
- income, assets, or liabilities;
- precise geolocation information;
- unique biometric data; or
- Social Security number.
However, any entities that “knowingly collect, use, retain, or disclose” this information would be covered, but only with respect to this information, and would need to comply with the bill. And yet, excluded from that list of activities is processing, suggesting that the knowing processing of those personal data would not be excepted and would also be covered by the requirements of the bill regardless of the size of the entity.
The bill provides that “[e]ach covered entity shall provide individuals in concise and easily understandable language, accurate, clear, timely, and conspicuous notice about the covered entity’s privacy and security practices.” However, notice must be “reasonable in light of context,” and the bill defines “context” as the “circumstances surrounding a covered entity’s processing of personal data,” including a number of enumerated considerations such as the “extent and frequency” of interaction, the history between consumers and the covered entity, and a reasonable person’s understanding of how the covered entity processes and uses data in providing services and products.
Among other things, covered entities would need to provide:
- the personal data processed, including data acquired from other sources;
- the purposes of its data processing;
- the people, or categories of people, to whom data are disclosed and the purposes for which such data may be used;
- when personal data may be deleted, destroyed, or de-identified;
- how consumers may “access their personal data and grant, refuse, or revoke consent for the processing of personal data”;
- how personal data are secured; and
- to whom a consumer may complain or direct inquiries regarding the covered entity’s data processing practices.
In terms of the protections people would get under the “Consumer Privacy Bill of Rights Act of 2015,” “[e]ach covered entity shall provide individuals with reasonable means to control the processing of personal data about them in proportion to the privacy risk to the individual and consistent with context.” This sounds like strong language that would shift the balance in the user-company data relationship, but there appears to be an inherent balancing of interests in this right such that a person’s control of data processing would be proportional to privacy risks and appropriate for the context.
Nonetheless, covered entities must provide controls that are easy for people to find, access, use, and understand. Additionally, people must be able to withdraw their consent in a similarly easy fashion, so covered entities would also need to offer this option to users. What’s more, covered entities must delete the personal data of any user who withdraws consent no later than 45 days after such a withdrawal is made. Of course, should a user indicate she wants to withdraw consent, a covered entity may offer her the option that the company still hold the data but de-identify it. The bill is silent on whether the covered entity would ever be able to re-identify the data without the consent of the user, however. Likewise, it is also not clear whether “alternative means of compliance” can wholly replace the requirement that a person’s data be deleted in the event he withdraws consent. If a covered entity can offer only de-identification instead of deletion in the event a consumer withdraws consent, it’s my guess that most entities would do so in the hopes of one day reobtaining the person’s consent.
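As a purely illustrative aside (the bill does not specify how the 45-day window is counted; calendar days are assumed here, and the function name is my own), the deletion deadline described above could be computed as:

```python
from datetime import date, timedelta

# Hypothetical sketch: the latest date by which a covered entity must
# delete a user's personal data after a withdrawal of consent, assuming
# the bill's 45-day window runs in calendar days.
DELETION_WINDOW_DAYS = 45

def deletion_deadline(withdrawal_date: date) -> date:
    """Latest permissible deletion date after a withdrawal of consent."""
    return withdrawal_date + timedelta(days=DELETION_WINDOW_DAYS)

# A withdrawal made March 1, 2015 would require deletion by April 15, 2015.
print(deletion_deadline(date(2015, 3, 1)))  # 2015-04-15
```

Nothing turns on the exact counting convention here; the point is simply that the deadline is fixed from the date of withdrawal, not from any later acknowledgment by the covered entity.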
There is also a curiously constructed limitation of the covered entity’s obligations regarding the right to withdraw consent. This right pertains only to the data the covered entity has in its control, and hence does not cover personal data the entity may have collected and then shared with other entities. Does this mean consumers intent on ensuring their withdrawal is effective across entities would need to somehow determine which entities are holding their personal data? It appears so. Putting that issue aside, the withdrawal and subsequent deletion or de-identification does not apply to “enumerated exceptions,” a term defined in the bill as including:
- Preventing or detecting fraud, child exploitation, or serious violent crime;
- Protecting device, network, or facility security;
- Protecting a covered entity’s rights or property, or those of an entity’s consumer if consent has been given;
- Monitoring and enforcing agreements, including terms of service;
- “Processing customary business records”; and
- “Complying with a legal requirement or an authorized government request.”
So, any such information would not need to be deleted or de-identified in response to a withdrawal of consent.
The latter two exceptions are quoted directly since they seem to present the most latitude. “Customary business records” is defined in the bill as “data, including personal data, typically collected in the ordinary course of conducting business and that is retained for generally accepted purposes for that business, including accounting, auditing, tax, fraud prevention, warranty fulfillment, billing, or other customary business purposes.” Therefore, it would be hard going to try to shoehorn into this definition the collection and processing of personal data by a data broker or company conducting similar operations. The last exception seems a bit more elastic, however. Complying with a legal requirement would seem to cover all federal, state, and local legal requirements that are not otherwise contrary to the preemption language in the bill. However, an “authorized government request” would seem to run the gamut from an administrative subpoena to a warrant. Of course, under the Electronic Communications Privacy Act (ECPA), the type of provider determines the threshold a law enforcement agency needs to clear to access stored communications. Furthermore, this definition of “enumerated exceptions” is used throughout the bill to carve out these activities from those that may be regulated by the FTC.
Returning to the withdrawal of consent, covered entities may also offer an alternative to deletion such that they would instead offer to de-identify personal data. And yet, it is not clear whether the covered entity is complying by presenting only the option to de-identify instead of delete. Of course, the definition of de-identified data entails a public commitment not to re-identify, and this obligation travels with the de-identified data such that any third parties to whom such data are disclosed would then need to honor it. Presumably, violations of this commitment are grounds for FTC action. However, it may be contrary to the larger goals of the bill and the public interest to allow companies to sit on troves of de-identified data that may well prove easy enough to re-identify after being exfiltrated or accessed. Finally, users must be given advance notice of material changes to the collection, use, or sharing practices of a covered entity and also a mechanism to control the resulting privacy risk.
Turning to the section titled “Respect for Context,” it is established that any covered entity processing personal data “in a manner that is reasonable in light of context” is not subject to the extensive requirements in this section of the bill. Two definitions bear scrutiny if this exception is to make sense. First, “process[ing] personal data” is “any action regarding data that is linked to an individual or a specific device, including but not limited to collecting, retaining, disclosing, using, merging, linking, and combining data.” I would wonder if the processing of personal data that is merely linkable to a person or device would qualify, and if not, this would seem to be a significant loophole. The other definition worthy of a look is “context,” which is detailed and lengthy, but most succinctly, it “means the circumstances surrounding a covered entity’s processing of personal data,” which may include the history and frequency of direct actions between an individual and a covered entity.
However, the enumerated circumstance that can constitute “context” that is among the more flexible is “the level of understanding that reasonable users of the covered entity’s goods or services would have of how the covered entity processes the personal data that it collects, including through any notice provided by the covered entity.” This employs the reasonable person standard from tort law to set a baseline. More significantly, if a covered entity provides easily accessible notice of its data processing that a reasonable person can understand, then it would appear there would not be much off-limits.
However, any data processing that is not reasonable in the context would trigger additional responsibilities for covered entities to conduct a privacy risk analysis to examine possible privacy risks and steps to mitigate these risks. Additionally, a covered entity must provide notice of any data processing that is unreasonable in light of context and provide a mechanism that allows for a reduction in risk exposure. This section would allow an exception for “data analysis” supervised by a “Privacy Review Board,” a type of entity that would be permitted under FTC regulations, based on a range of factors. It bears note that “data analysis” is a new concept in this legislation and appears to be a subset of data processing; however, it is not entirely clear what is encompassed by data analysis. Nonetheless, any personal data analysis that is unreasonable in light of the context and results in adverse action against multiple individuals triggers a requirement that a covered entity conduct a disparate impact analysis according to accepted standards that it must keep on file.
The “Consumer Privacy Bill of Rights” establishes a process for a new class of entities, Privacy Review Boards, which would need to apply to and be certified by the FTC before they could operate to supervise the data analysis of covered entities.
Covered entities may only collect, retain, and use personal data that is reasonable in light of the context and must consider ways and means to minimize privacy risks. However, it is unclear if any such identified means of reducing privacy risks must actually be implemented. Additionally, any such personal data must be destroyed, deleted, or de-identified after a reasonable period of time once the purposes for which the data were collected have been achieved. But, there are exceptions to these two general requirements, including the “enumerated exceptions” discussed before, data analysis performed under Privacy Review Board supervision, and data processing under the heightened notice and control procedures discussed earlier for processing unreasonable in light of the context.
Covered entities would need to establish and maintain security and privacy programs to guard against unauthorized access, disclosure, misuse, alteration, destruction, or compromise of personal data. Such programs would start with risk assessments to suss out weaknesses and vulnerabilities that the subsequent security and privacy programs would ideally remedy, with an eye towards addressing foreseeable risks as well. This section of the bill spells out the sort of considerations covered entities should be heeding and quite likely the approach the FTC would take in policing security and privacy violations:
- The privacy risks posed by the personal data being held, for not all data are equally valuable;
- The foreseeability of threats;
- Widely accepted and used administrative, technical, and physical safeguards; and
- The costs associated with implementing security and privacy safeguards.
This approach to spurring entities to implement security and privacy programs is familiar and has been the general approach since at least the safeguards rules promulgated per the “Financial Services Modernization Act of 1999” (Gramm-Leach-Bliley).
Covered entities would also need to provide each individual access to her personal data in a reasonable timeframe if such a request is made, subject to verifying the requester’s identity, relevant laws and regulations, the degree to which the request is vexatious or frivolous, and whether a fraud investigation or national security, intelligence, or law enforcement purpose presents a compelling reason to deny access. What’s more, there is a duty to ensure that such information is accurate, and individuals will have a means to dispute or amend inaccurate personal data held by a covered entity. And yet, if the personal data subject to a request to correct or amend would not likely result in adverse action against an individual, the covered entity may decline the request. However, an individual may further request that these data be deleted and/or destroyed, and covered entities would need to comply within 45 days, with personal data from government records being excepted.
Each covered entity must take appropriate measures consistent with the privacy risks connected to its personal data processing practices, including:
- Training staff who handle these data;
- Executing both internal and independent audits and evaluations for privacy and security;
- Building privacy and security into systems; and
- Binding third parties with whom personal data are shared to the same responsibilities incumbent on covered entities.
The “Consumer Privacy Bill of Rights” makes any violation a violation of Section 5 of the FTC Act, which bars unfair and deceptive practices. The FTC would receive authority to levy civil fines in some circumstances, and its jurisdiction would be widened to include non-profits under the bill. The FTC could seek fines for first offenses committed knowingly or with constructive knowledge of up to $35,000 per day of violations and, unlike other bills, not on a per-victim basis. However, if the FTC puts a covered entity on notice with particularity as to the ways it is violating the bill, then the FTC could seek per-victim fines of $5,000 per person. In any event, civil penalties are capped at $25 million. State attorneys general would also be allowed to enforce the act, in part, and in acting without the FTC may only seek injunctive relief and not civil fines. And yet, there is no private right of action for individuals.
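To make the interaction of these figures concrete (a rough sketch using only the numbers above; the bill leaves the actual assessment to the FTC and the courts, and the function names are my own), the two penalty regimes and the overall cap can be modeled as:

```python
# Hypothetical sketch of the penalty structure described above:
# $35,000 per day of violation for knowing/constructive-knowledge first
# offenses, $5,000 per victim after particularized FTC notice, and an
# overall cap of $25 million on civil penalties in either case.
CAP = 25_000_000

def per_day_penalty(days_of_violation: int) -> int:
    """Maximum penalty when assessed per day of violation."""
    return min(35_000 * days_of_violation, CAP)

def per_victim_penalty(victims: int) -> int:
    """Maximum penalty after the FTC gives particularized notice."""
    return min(5_000 * victims, CAP)

# A year-long violation versus the same violation assessed per victim
# for 100,000 affected users:
print(per_day_penalty(365))         # $12,775,000
print(per_victim_penalty(100_000))  # capped at $25,000,000
```

Note how quickly the per-victim regime hits the $25 million ceiling for any large user base, which suggests the cap, not the per-person figure, would do most of the work in big cases.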
The bill would allow for the development of codes of conduct for processing personal data that covered entities could abide by in exchange for liability protection. These codes would need to provide an equal or greater level of protection for processing than the underlying statute. Within six months of enactment, the FTC would need to establish the regulations spelling out the process by which entities may craft and submit such codes of conduct. The FTC would examine whether the code provides an equal or greater level of protection for processing than the statute itself and resulting regulations. Any codes developed through a transparent, multi-lateral process led by the Department of Commerce must be approved or denied within 90 days, any transparent and multi-lateral process to develop a code led by another entity within 120 days, and all others within 180 days. However, these codes must be published for public comment before the FTC rules on them. If approved, a code must be reviewed every 3-5 years to determine how it has worked and whether it is still viable given technological and societal changes and possibly extended.
Additionally, entities may apply to the FTC to administer a code of conduct for the processing of personal data once it’s been approved, and such certification may be granted if the entity can prove it can expeditiously and efficiently adjudicate violations. All such certifications will be reviewed by the FTC within 3-4 years of being granted and possibly renewed.
Covered entities that publicly commit to a code of conduct and adhere to it may assert the code as a complete defense to an enforcement action brought by the FTC or a state attorney general, and any claims regarding the data processing the code covers might be null and void. And, it is wise to revisit the definition of processing personal data, which “means taking any action regarding data that is linked to an individual or a specific device, including but not limited to collecting, retaining, disclosing, using, merging, linking, and combining data.” Consequently, a code could provide quite a bit of liability protection, depending on how it is drafted and what it covers, of course.
Regarding preemption, the bill would preempt all conflicting state or local laws “to the extent” one “imposes requirements on covered entities with respect to personal data processing,” but then also stipulates that “[n]o State or local government may enforce any personal data processing law against a covered entity to the extent that that entity is entitled to safe harbor protection” under a code of conduct. But if such laws are already preempted, how could states or localities enforce one? Perhaps this passage is intended to ward off attempts by states or local governments to use consumer protection statutes, which are expressly not preempted, to try to get around the preemption of their data processing laws. Moreover, other state causes of action remain untouched, such as those under contract, tort, trespass, fraud, and others, meaning that covered entities would still possibly face such actions. The “Consumer Privacy Bill of Rights Act” also would not impinge on First Amendment rights, and any activities under Section 230 would also be exempted. Finally, the bill does not modify, alter, or supersede the operation of any federal privacy or security statute (e.g. HIPAA) in governing the conduct of an otherwise covered entity. But, this goes only as far as the four corners of the other statute, and all conduct outside that statute regarding data processing would seem to fall into the FTC’s jurisdiction to enforce this Act unless the agency lacks jurisdiction over that class of entities (e.g. banks and credit unions).