A Privacy Bill A Week: the Obama Administration’s “Consumer Privacy Bill of Rights Act of 2015”

Last week, we took a look at two bills that approach privacy issues from the vantage of data ownership: Senator John Kennedy’s (R-LA) “Own Your Own Data Act” (S. 806), and Senators Mark Warner (D-VA) and Josh Hawley’s (R-MO) “Designing Accounting Safeguards To Help Broaden Oversight and Regulations on Data” (S. 1951). This week, we are going to be time-traveling, in a way, as we will look at the last bill put forth by a White House on privacy, the discussion draft of the “Consumer Privacy Bill of Rights Act of 2015” released by the Obama Administration. This bill was released in conjunction with a report on privacy issues and then proceeded to go nowhere as there was scant appetite on Capitol Hill to legislate on privacy. Nonetheless, this bill is fairly comprehensive and contains a number of concepts that are present in the most recent bills.

The bill has a very broad definition of “personal data,” at least as broad as some of the more consumer-friendly bills, but it has a safe harbor for de-identified information, which would not be considered “personal data” for purposes of the act. To wit, “personal data” are “any data that are under the control of a covered entity, not otherwise generally available to the public through lawful means, and are linked, or as a practical matter linkable by the covered entity, to a specific individual, or linked to a device that is associated with or routinely used by an individual.” The bill provides examples of what might be personal data but makes clear that the enumerated examples are not the only possible information that will be covered under the bill. Aside from the usual types of data named in privacy bills, a few bear mention. First, “biometric identifiers” such as fingerprints or voice prints are considered “personal data,” but biometric data more generally are not. Additionally, genetic data are not specifically identified either. However, the base definition of personal data is so broad that it would be a hard argument to make that biometric and genetic data, much of which is easily linkable to individuals or devices and not generally made available to the public, do not qualify. Besides, the definition itself does state that the examples “include” but are “not limited to” those specifically spelled out.

However, like a number of the other bills, the “Consumer Privacy Bill of Rights Act of 2015” provides detailed exceptions to what might otherwise be “personal data.” Consequently, de-identified, deleted, and cybersecurity data are not personal data subject to the requirements of the bill. Regarding de-identified data, a covered entity would need to render the data such that it could not reasonably be linked to a specific person or device. Presumably encryption would suffice so long as the encryption keys are not compromised, and other processes such as anonymization would qualify. However, such covered entities must commit publicly to not try to re-identify such data and put in place processes to execute this promise. Moreover, any third party with whom the covered entity shares the de-identified data must also make the same public commitment not to re-identify such data. The definition of “deleted data” is pretty straightforward and squares with the common understanding of what deleting is.
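To make the de-identification concept a bit more concrete, below is a minimal, hypothetical sketch (in Python) of one common technique, keyed pseudonymization of direct identifiers. The field names, the HMAC-SHA-256 approach, and the key handling are illustrative assumptions on my part, not anything the discussion draft prescribes; and the bill’s definition would also require the organizational commitments (the public promise not to re-identify, flowing down to third parties) that no technical measure alone can satisfy.

```python
import hmac
import hashlib

# Hypothetical illustration only: one way a covered entity might render records
# not reasonably linkable to a specific person or device. The field names and
# the keyed-hash approach are assumptions for the example, not the bill's terms.

SECRET_KEY = b"store-and-rotate-this-key-separately"  # if the key leaks, re-identification becomes possible

DIRECT_IDENTIFIERS = {"name", "email", "device_id"}

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized."""
    return {
        field: pseudonymize(value) if field in DIRECT_IDENTIFIERS else value
        for field, value in record.items()
    }

record = {"name": "Jane Doe", "email": "jane@example.com", "device_id": "A1B2", "zip": "20002"}
print(de_identify(record))
```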

That the bill singles out “cybersecurity data” dates the bill. As many might recall, this was the point during the Obama Administration when there was a push to enact cybersecurity information sharing legislation that ultimately culminated in Title I of the “Cybersecurity Act of 2015” (Division N of P.L. 114-113). Consequently, this definition is tailor-made for the new procedures that bill set up: “cyber threat indicators collected, processed, created, used, retained, or disclosed in order to investigate, mitigate, or otherwise respond to a cybersecurity threat or incident, when processed for those purposes.” Not surprisingly, “cyber threat indicator” is also defined, and the salient part of the legislation is that for any such information, there must be “reasonable efforts…to remove information that can be used to identify specific persons reasonably believed to be unrelated to the cyber threat.” And, employee data is also excepted, but this definition is tightly written and would pertain to only the types of and uses for which employment information has usually been used, and any information uses beyond these would possibly then become “personal data” subject to enforcement. Also of note, this definition does not include the personal data of job applicants, but those data are excluded for small businesses in the definition of a covered entity.

Those entities covered by the new privacy regime are any person “that collects, creates, processes, retains, uses, or discloses personal data in or affecting interstate commerce” subject to exceptions, namely:

  • federal, state, and local governments, including any contractors and agents working on their behalf;
  • any “natural person” acting in a “de minimis” capacity in collecting and processing personal data;
  • any entity with 25 or fewer employees, to the extent of the data processing it does of applicants’ personal data “in the ordinary course”;
  • any other entity, or class of entities, the FTC identifies through a rulemaking; and
  • those entities using personal data “to conduct research relating directly to security threats to or vulnerabilities in devices or networks, or to address threats or vulnerabilities identified by that research,” subject to additional security and disclosure requirements.

The “covered entity” exception list includes a more detailed, complicated exception: an entity with 5 or fewer employees that “collects, creates, processes, uses, retains, or discloses” the personal data of fewer than 10,000 people during any 12-month period might also be excepted. However, such an entity may not “knowingly collect, use, retain, or disclose any information that is linked with personal data and includes, or relates directly to

  • that individual’s medical history;
  • national origin;
  • sexual orientation;
  • gender identity;
  • religious beliefs or affiliation;
  • income, assets, or liabilities;
  • precise geolocation information;
  • unique biometric data; or
  • Social Security number.

However, any entities that “knowingly collect, use, retain, or disclose” this information would be covered only regarding this information and would need to comply with the bill. And yet, excluded from that list of activities is processing, suggesting that the knowing processing of those personal data would not be excepted and would also be covered by the requirements of the bill regardless of the size of the entity.

The bill provides that “[e]ach covered entity shall provide individuals in concise and easily understandable language, accurate, clear, timely, and conspicuous notice about the covered entity’s privacy and security practices.” However, notice must be “reasonable in light of context,” and the bill defines “context” as the “circumstances surrounding a covered entity’s processing of personal data,” including a number of enumerated considerations such as the “extent and frequency” of interaction, the history between consumers and the covered entity, and a reasonable person’s understanding of how the covered entity processes and uses data in providing services and products.

Other information covered entities would need to provide includes:

  • The personal data processed, including data acquired from other sources
  • The purposes of its data processing
  • The people, or categories of people, to whom data is disclosed and the purposes for which such data may be used
  • When personal data may be deleted, destroyed, or de-identified
  • How consumers may “access their personal data and grant, refuse, or revoke consent for the processing of personal data”
  • How personal data is secured; and
  • To whom a consumer may complain or inquire regarding the covered entity’s data processing practices

In terms of the protections people would get under the “Consumer Privacy Bill of Rights Act of 2015,” “[e]ach covered entity shall provide individuals with reasonable means to control the processing of personal data about them in proportion to the privacy risk to the individual and consistent with context.” This sounds like strong language that would shift the balance in the user-company data relationship, but there appears to be an inherent balancing of interests in this right such that a person’s control of data processing would be proportional to privacy risks and appropriate for the context.

Nonetheless, covered entities must provide controls that are easy for people to find, access, use, and understand. Additionally, people must be able to withdraw their consent in similarly easy fashion, so covered entities would also need to offer this option to users. What’s more, covered entities must delete the personal data of any user that withdraws consent no later than 45 days after such a withdrawal is made. Of course, should a user indicate she wants to withdraw consent, a covered entity may offer her the option that the company still hold the data but de-identify it. The bill is silent on whether the covered entity would ever be able to re-identify the data without the consent of the user, however. Likewise, it is also not clear whether “alternative means of compliance” can wholly replace the requirement that a person’s data be deleted in the event he withdraws consent. If a covered entity may offer only de-identification instead of deletion in the event a consumer withdraws consent, it’s my guess that most entities would do so in the hopes of one day reobtaining the person’s consent.

There is also a curiously constructed limitation of the covered entity’s obligations regarding the right to withdraw consent. This right pertains only to the data the covered entity has in its control, and hence does not cover personal data the entity may have collected and then shared with other entities. Does this mean consumers intent on ensuring their withdrawal is effective across entities would need to somehow determine which entities are holding their personal data? It appears so. Putting that issue aside, the withdrawal and subsequent deletion or de-identification does not apply to “enumerated exceptions,” a term defined in the bill as including:

  • Preventing or detecting fraud, child exploitation, or serious violent crime
  • Protecting device, network or facility security
  • Protecting a covered entity’s rights or property or those of an entity’s consumer if consent has been given
  • Monitoring and enforcing agreements, including terms of service
  • “Processing customary business records”
  • “Complying with a legal requirement or an authorized government request”

So, any such information would not need to be deleted or de-identified in response to a withdrawal of consent.

The latter two exceptions are quoted directly since they seem to present the most latitude. “Customary business records” is defined in the bill as “data, including personal data, typically collected in the ordinary course of conducting business and that is retained for generally accepted purposes for that business, including accounting, auditing, tax, fraud prevention, warranty fulfillment, billing, or other customary business purposes.” Therefore, it would be hard going to try to shoehorn into this definition the collection and processing of personal data by a data broker or company conducting similar operations. The last exception seems a bit more elastic, however. Complying with a legal requirement would seem to cover all federal, state, and local legal requirements that are not otherwise contrary to the preemption language in the bill. However, an “authorized government request” would seem to run the gamut from an administrative subpoena to a warrant. Of course, under the Electronic Communications Privacy Act (ECPA), the type of provider determines the threshold a law enforcement agency needs to clear to access stored communications. Furthermore, this definition of “enumerated exceptions” is used throughout the bill to carve out these activities from those that may be regulated by the FTC.

Returning to the withdrawal of consent, covered entities may also offer an alternative to deletion such that they would instead offer to de-identify personal data. And yet, it is not clear whether the covered entity is complying by only presenting the option to de-identify instead of delete. Of course, the definition of de-identified data entails a public commitment not to re-identify, and this obligation travels with the de-identified data such that any third parties to whom such data is disclosed would then need to honor it. Presumably, violations of this commitment are grounds for FTC action. However, it may be contrary to the larger goals of the bill and the public interest to allow companies to sit on troves of de-identified data that may well prove easy enough to re-identify after being exfiltrated or accessed. Finally, users must be given advance notice of material changes to the collection, use, or sharing practices of a covered entity and also a mechanism to control the resulting privacy risk.

Turning to the section titled “Respect for Context,” it is established that any covered entity processing personal data “in a manner that is reasonable in light of context” is not subject to the extensive requirements in this section of the bill. Two definitions bear scrutiny if this exception is to make sense. First, “process[ing] personal data” is “any action regarding data that is linked to an individual or a specific device, including but not limited to collecting, retaining, disclosing, using, merging, linking, and combining data.” I would wonder if the processing of personal data that is merely linkable to a person or device would qualify, and if not, this would seem to be a significant loophole. The other definition worthy of a look is “context,” which is detailed and lengthy, but most succinctly, it “means the circumstances surrounding a covered entity’s processing of personal data,” which may include the history and frequency of direct actions between an individual and a covered entity.

However, the enumerated circumstance that can constitute “context” that is among the more flexible is “the level of understanding that reasonable users of the covered entity’s goods or services would have of how the covered entity processes the personal data that it collects, including through any notice provided by the covered entity.” This standard employs the reasonable person construct from tort law to set a baseline. More significantly, if a covered entity provides easily accessible notice that a reasonable person can understand regarding the covered entity’s data processing, then it would appear there would not be much off-limits.

However, any data processing that is not reasonable in the context would trigger additional responsibilities for covered entities to conduct a privacy risk analysis to examine possible privacy risks and steps to mitigate these risks. Additionally, a covered entity must provide notice of any data processing that is unreasonable in light of context and provide a mechanism that allows for a reduction in risk exposure. This section would allow an exception for “data analysis” supervised by a Privacy Review Board, a type of entity that would be permitted under FTC regulations, based on a range of factors. It bears note that “data analysis” is a new concept in this legislation and appears to be a subset of data processing; however, it is not entirely clear what is encompassed by data analysis. Nonetheless, any personal data analysis that is unreasonable in light of the context that results in adverse action against multiple individuals triggers a requirement that a covered entity conduct a disparate impact analysis according to accepted standards that it must keep on file.

The “Consumer Privacy Bill of Rights” established a process for a new class of entities, Privacy Review Boards, that would need to apply to and be certified by the FTC before they could operate to supervise the data analysis of covered entities.

Covered entities may only collect, retain, and use personal data that is reasonable in light of the context and must consider ways and means to minimize privacy risks. However, it is unclear if any such identified means of reducing privacy risks must actually be implemented. Additionally, any such personal data must be destroyed, deleted, or de-identified after a reasonable period of time following the achievement of the purposes for which the data was collected. But, there are exceptions to these two general requirements, including the “enumerated exceptions” discussed before, data analysis performed under Privacy Review Board supervision, and data processing conducted under the heightened notice and control procedures discussed earlier for processing that is unreasonable in light of the context.

Covered entities would need to establish and maintain security and privacy programs to guard against unauthorized access, disclosure, misuse, alteration, destruction, or compromise of personal data. Such programs would start with risk assessments to suss out weaknesses and vulnerabilities that the subsequent security and privacy programs would ideally remedy with an eye towards addressing foreseeable risks as well. This section of the bill spells out the sort of considerations covered entities should be heeding and quite likely the approach the FTC would take in policing security and privacy violations:

  • The privacy risks posed by the personal data being held, for not all data are equally valuable
  • The foreseeability of threats
  • Widely accepted and used administrative, technical, and physical safeguards; and
  • The costs associated with implementing security and privacy safeguards.

This approach to spurring entities to implement security and privacy programs is familiar and has been the general approach since at least the safeguards rules promulgated per the “Financial Services Modernization Act of 1999” (Gramm-Leach-Bliley).

Covered entities will also need to provide each individual access to her personal data in a reasonable timeframe if such a request is made, subject to verifying the requester’s identity, relevant laws and regulations, the degree to which the request is vexatious or frivolous, and whether a fraud investigation or national security, intelligence, or law enforcement purpose presents a compelling reason to deny access. What’s more, there is a duty to ensure that such information is accurate, and individuals will have a means to dispute or amend inaccurate personal data held by a covered entity. And yet, if the personal data subject to a request to correct or amend would not likely result in adverse action against an individual, the covered entity may decline the request. However, an individual may further request that these data be deleted and or destroyed, and covered entities would need to comply within 45 days, with personal data from government records being excepted.

Each covered entity must take appropriate measures consistent with the privacy risks connected to its personal data processing practices, including:

  • Training staff who handle these data
  • Executing both internal and independent audits and evaluations for privacy and security
  • Building privacy and security into systems
  • Binding third parties with whom personal data are shared to meet the same responsibilities incumbent on covered entities.

The “Consumer Privacy Bill of Rights” treats any violations as contrary to Section 5 of the FTC Act, which bars unfair and deceptive practices. The FTC would receive authority to levy civil fines in some circumstances, and its jurisdiction would be widened to include non-profits under the bill. The FTC could seek fines for first offenses committed knowingly or with constructive knowledge of up to $35,000 per day of violation, and not, as in other bills, on a per-victim basis. However, if the FTC puts a covered entity on notice with particularity as to the ways it is violating the bill, then the FTC could seek per-victim fines of $5,000 per person. In any event, civil penalties are capped at $25 million. State attorneys general would also be allowed to enforce the act, in part, and when acting without the FTC may seek only injunctive relief and not civil fines. And yet, there is no private right of action for individuals.
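For a rough sense of how that penalty structure scales, here is a small, illustrative calculation using the figures described above; the day counts and victim counts are hypothetical, and the bill’s actual penalty provisions would govern.

```python
# Illustrative only: the bill's civil penalty structure as described above,
# with hypothetical inputs. Amounts: up to $35,000 per day of violation for a
# first offense, or $5,000 per affected person after particularized FTC notice,
# capped at $25 million either way.

PER_DAY = 35_000
PER_PERSON = 5_000
TOTAL_CAP = 25_000_000

def max_first_offense_penalty(days_of_violation: int) -> int:
    """Maximum fine for a knowing first offense, assessed per day of violation."""
    return min(days_of_violation * PER_DAY, TOTAL_CAP)

def max_post_notice_penalty(affected_individuals: int) -> int:
    """Maximum fine after particularized FTC notice, assessed per affected person."""
    return min(affected_individuals * PER_PERSON, TOTAL_CAP)

print(max_first_offense_penalty(200))    # 7,000,000 for a 200-day violation
print(max_post_notice_penalty(10_000))   # 25,000,000; the cap binds once 5,000 people are affected
```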

The bill would allow for the development of codes of conduct for processing personal data that covered entities could abide by in exchange for liability protection. These codes would need to provide an equal or greater level of protection for processing than the underlying statute. Within six months of enactment, the FTC would need to establish the regulations spelling out the process by which entities may craft and submit such codes of conduct. The FTC would examine whether the code provides an equal or greater level of protection for processing than the statute itself and resulting regulations. Any codes developed through a transparent, multi-lateral process led by the Department of Commerce must be approved or denied within 90 days, any transparent and multi-lateral process to develop a code led by another entity within 120 days, and all others within 180 days. However, these codes must be published for public comment before the FTC rules on them. If approved, a code must be reviewed every 3-5 years to determine how it has worked and whether it is still viable given technological and societal changes and possibly extended.

Additionally, entities may apply to the FTC to administer a code of conduct for the processing of personal data once it’s been approved, and such certification may be granted if the entity can prove it can expeditiously and efficiently adjudicate violations. All such certifications will be reviewed by the FTC within 3-4 years of being granted and possibly renewed.

Covered entities that publicly commit to a code of conduct and adhere to it may assert the code as a complete defense to an enforcement action brought by the FTC or a state attorney general, and any claims regarding the data processing the code covers might be null and void. And, it is wise to revisit the definition of processing personal data, which “means taking any action regarding data that is linked to an individual or a specific device, including but not limited to collecting, retaining, disclosing, using, merging, linking, and combining data.” Consequently, a code could provide quite a bit of liability protection depending on how it is drafted and what it covers, of course.

Regarding preemption, the bill would preempt all conflicting state or local laws “to the extent” one “imposes requirements on covered entities with respect to personal data processing,” but then also stipulates that “[n]o State or local government may enforce any personal data processing law against a covered entity to the extent that that entity is entitled to safe harbor protection” under a code of conduct. But if such laws are already preempted, how could states or localities enforce one? Perhaps, this passage is intended to ward off attempts by states or local governments to use consumer protection statutes, which are expressly not preempted, to try to get around the preemption of their data processing laws. Moreover, other state causes of action remain untouched such as those under contract, tort, trespass, fraud, and others, meaning that covered entities would still possibly face such actions. The “Consumer Privacy Bill of Rights Act” also would not impinge on First Amendment rights, and any activities under Section 230 would also be exempted. Finally, the bill does not modify, alter, or supersede the operation of any federal privacy or security statute (e.g. HIPAA) in governing the conduct of an otherwise covered entity. But, this goes only as far as the four corners of the other statute, and all conduct outside that statute regarding data processing would seem to fall into the FTC’s jurisdiction to enforce this Act unless the agency lacks jurisdiction over that class of entities (e.g. banks and credit unions).

Privacy Bill A Week: “Own Your Own Data Act” (S. 806) and the “Designing Accounting Safeguards To Help Broaden Oversight and Regulations on Data” (S. 1951).

This week, we will look at a pair of bills referenced by Senate Banking, Housing, and Urban Affairs Committee Chair Mike Crapo (R-ID) at a recent hearing on data ownership that take a different approach to privacy. In short, these bills would approach the issues presented by mass collection and use of consumer data by granting ownership rights.

Senator John Kennedy (R-LA) introduced the “Own Your Own Data Act” (S. 806), and Senators Mark Warner (D-VA) and Josh Hawley (R-MO) introduced the “Designing Accounting Safeguards To Help Broaden Oversight and Regulations on Data” (S. 1951).

The “Own Your Own Data Act” provides that “[e]ach individual owns and has an exclusive property right in the data that an individual generates on the internet under section 5 of the Federal Trade Commission Act.” This provision of a new right raises many more questions than it answers. Presumably, the required rulemaking the Federal Trade Commission (FTC) must undertake to effectuate this language will fill some gaps and define the terms that this brief three-page bill does not.

Additionally, every “social media company,” a term not defined by the bill, must

  • have a prominently and conspicuously displayed icon each user may click to obtain a copy of the user’s data with any analysis of the user’s data performed by the social media company;
  • have a prominently and conspicuously displayed icon each user may click to easily export the user’s data with any analysis of the user’s data performed by the social media company.

These provisions would seem to lend themselves to greater transparency in how one’s personal data is being used and portability should someone want to use a different platform.

The key provision of the bill, however, is that every user of a social media company’s offerings must “knowingly and willfully enter into a licensing agreement” during the registration of the account. For future users this legislation would grant them the ability to license the exclusive property that is their data, but what of existing accounts such as the millions of Facebook, Twitter, and Google accounts in the U.S.? Would this be only prospective as legislation typically is? And, if so, then current users of Twitter and Facebook may not be able to license their accounts as the companies might not need to offer them the opportunity. As a practical matter, these companies might offer current users the opportunity, but within the four corners of the bill, they would be under no obligation to do so.

The FTC would be able to enforce this act. However, it is not altogether clear how the FTC would enforce this act. Would the misuse or stealing of a person’s personal data be considered a violation of the Section 5 prohibition on unfair and deceptive practices? Will the FTC’s required rulemaking deem a violation of one’s exclusive property right in their personal data a violation of the Section 5 bar against deceptive and unfair practices? Or is the FTC to wade into enforcing personal licenses and punishing violations? Would the agency husband its resources and wait until it has a sizeable number of complaints about social media company X before it investigates? This may be a likely outcome given that a number of critics of the FTC already claim the agency is stretched too thin and brings too few enforcement actions for data security and privacy violations.

Regarding the rulemaking, the FTC shall “promulgate regulations carrying out this [bill], which shall be approved by Congress.” Presumably the agency must use the more cumbersome Magnuson-Moss procedures for rulemaking instead of the Administrative Procedure Act (APA) notice and comment process? However, the bill does not speak directly to this point, and so it is likely the FTC would be stuck using the Magnuson-Moss process, which has effectively choked off the agency’s rulemaking capability.

How exactly must Congress approve these regulations? Will it be like reprogramming requests that usually require the assent of the Appropriations Committees, often through a formal process? Or will informal sign-off from the committees of jurisdiction over the FTC suffice? Or must Congress pass a resolution of approval or disapproval as it may under a number of statutes designed to police executive branch actions? The bill leaves this question unanswered.

A different privacy bill we examined, the “American Data Dissemination (ADD) Act” (S. 142) also requires the FTC to submit regulations to Congress. In the case of that bill, the agency needs to send “detailed recommendations [to the House Energy and Commerce Committee and the Senate Commerce, Science, and Transportation Committee] for privacy requirements that Congress could impose on covered providers that would be substantially similar, to the extent practicable, to the requirements applicable to agencies under the Privacy Act of 1974.” 12-15 months after the FTC submits this report, it would be required to submit to the same committees proposed regulations that would similarly make covered entities subject to requirements along the lines of how the Privacy Act of 1974 applies to federal agencies.

However, despite creating a property right, there is no right of action provided by the bill. Consumers would not be able to sue if the licensing of their “exclusive property right in the data” they generate is violated. Normally, for most property rights, consumers may go to court if they think their rights to this property have been infringed. This bill would not grant such a right to consumers, and I do not know of any other federal grounds under which consumers would be able to sue. Or would a person’s data be similar to trademarked or copyrighted information? Among the many questions raised under this scheme, would consumers be able to use existing state property statutes to sue in state courts? Could a state like California enact a right to sue for a violation of this newly created federal right?

This week’s other bill, the “Designing Accounting Safeguards To Help Broaden Oversight and Regulations on Data” (S. 1951), would force a select class of online entities to disclose how much they earn from users’ data and also provide consumers the right to delete their data subject to some exceptions. The entities would need to file additional disclosures with the Securities and Exchange Commission (SEC) to bring greater transparency to consumers, shareholders, and investors regarding the value of the data that companies collect and then share.

The bill defines which companies or entities would be “commercial data operators”: those “acting in its capacity as a consumer online services provider or data broker that—

  • generates a material amount of revenue from the use, collection, processing, sale, or sharing of the user data; and
  • has more than 100,000,000 unique monthly visitors or users in the United States for a majority of months during the previous 1-year period.”

This definition would seem to include a small class of online entities while excluding most businesses that generate a material amount of their revenue from other activities. But, how “material” is defined would determine how a company like an auto manufacturer that derives significant revenue from both auto sales and the sale or sharing of personal data would be treated. Nonetheless, those entities that act as data brokers would be swept into this definition of commercial data operators, and they would need to meet the new responsibilities imposed on them.

Generally, the bill would require every commercial data operator to “provide each user of the commercial data operator with an assessment of the economic value that the commercial data operator places on the data of that user.” The agency charged with effectuating this portion of the bill, the FTC, would likely need to spell out what constitutes an “assessment of economic value.” Would this need to be consumer friendly and easily understandable?

Additionally, commercial data operators would have to reveal to all users the following:

  • the types of data collected from users of the commercial data operator, whether by the commercial data operator or another person pursuant to an agreement with the commercial data operator; and
  • the ways that the data of a user of the commercial data operator is used if the use is not directly or exclusively related to the online service that the commercial data operator provides to the user

These disclosures seem straightforward and seem designed to better inform consumers about all the sources from which a commercial data operator is obtaining data and all the additional uses of user data beyond those immediate uses of the commercial data operator. Again, how this information is presented to consumers would be key, for if the format is barely intelligible or a sprawling spreadsheet, then one wonders how much the average user of Twitter would understand it. Additionally, would the FTC be able to aggregate these data and publish de-identified statistics on industry-wide data usage practices for commercial data operators? It would appear so. Additionally, the filings that must be made to the SEC would seem to present the FTC and the Department of Justice with a new source of data to investigate possible anti-competitive activity in the markets where commercial data operators are present.

Users must also be able to delete all the data a commercial data operator possesses, subject to certain exceptions, by the use of “a single setting” or “another clear and conspicuous mechanism by which the user may make such a deletion.” The circumstances under which deletion is not required are

  • in cases where there is a legal obligation of the commercial data operator to maintain the data;
  • for the establishment, exercise, or defense of legal claims; or
  • if the data is necessary to detect security incidents, protect against malicious, deceptive, fraudulent, or illegal activity, or assist in the prosecution of those responsible for such activity.

However, commercial data operators may not retain any more user data than is necessary to “carry out” the aforementioned exceptions to the general right of users to delete their data. This would seem to serve as a limit on an entity’s likely inclination to interpret such restrictions in ways most favorable to itself. However, whether these companies push these boundaries egregiously will hinge on FTC enforcement.
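As a rough sketch of how a commercial data operator might implement the single-setting deletion right while honoring these exceptions, consider the following; the data model, exception labels, and function names are assumptions for illustration, not anything S. 1951 specifies.

```python
from dataclasses import dataclass

# Hypothetical data model: each stored item is tagged with any retention reasons
# corresponding to the bill's exceptions (legal obligation, legal claims,
# security/fraud). Labels and structure are illustrative assumptions.
RETENTION_EXCEPTIONS = {"legal_obligation", "legal_claims", "security_or_fraud"}

@dataclass
class StoredItem:
    item_id: str
    user_id: str
    retention_reasons: frozenset  # subset of RETENTION_EXCEPTIONS; empty if none apply

def handle_deletion_request(store: list, user_id: str) -> list:
    """Delete a user's data except items covered by an enumerated exception.

    Items retained under an exception should hold no more data than the
    exception requires, per the bill's minimization language.
    """
    retained = []
    for item in store:
        if item.user_id != user_id:
            retained.append(item)            # belongs to someone else
        elif item.retention_reasons & RETENTION_EXCEPTIONS:
            retained.append(item)            # an exception applies; retain (and minimize) it
        # otherwise the item is dropped, i.e., deleted
    return retained
```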

As mentioned, the FTC would enforce this new regime. Like virtually all the other privacy bills, the FTC would be empowered to treat acts contrary to the bill “as a violation of a rule defining an unfair or deceptive act or practice prescribed under section 18(a)(1)(B) of the Federal Trade Commission Act,” meaning the ability right off the bat to ask federal courts for civil fines of more than $40,000 per violation in addition to all the other enforcement tools the FTC normally wields in data security and privacy cases. Of course, the full panoply of the FTC’s other powers would still be available for such cases.

In a twist for a privacy bill, commercial data operators would need “to file an annual or quarterly report” with the SEC that must disclose “the aggregate value, if material, of—

  • user data that the commercial data operator holds;
  • contracts with third parties for the collection of user data through the online service provided by the commercial data operator; and
  • any other item that the [SEC] determines, by rule, is necessary or useful for the protection of investors and in the public interest.

The SEC must also “develop a method or methods for calculating the value of user data required to be disclosed” and “provide quantitative and qualitative disclosures about the value of user data held” by some commercial data operators.

These data disclosure requirements would likely bring much greater transparency into the data practices of a company like Facebook or Google, presumably allowing investors to better understand and value such companies. In a section-by-section summary, Warner and Hawley asserted two additional ways the bill would address data privacy and usage:

  • making the value more transparent could increase competition by attracting competitors to the market.
  • disclosing the economic value of consumer data will also assist antitrust enforcers in identifying unfair transactions and anticompetitive transactions and practices.

While these two bills take different approaches on data privacy by trying to leverage the economics of data, it is not clear how appealing these are to Democrats, whose agreement will be needed before any privacy legislation can move forward. Possibly a modified version of the concepts in these bills could be added to a broader privacy bill such that entities collecting and sharing data would need to make additional disclosures to the SEC.

Spotlight: A Privacy Bill A Week: “Information Transparency & Personal Data Control Act” (H.R. 2013)

For this week, let’s examine a House bill, the “Information Transparency & Personal Data Control Act” (H.R. 2013) which is sponsored by Suzan DelBene (D-WA) and cosponsored by 22 other House Democrats. DelBene worked in Washington state’s technology sector before transitioning to public service, including a stint with Microsoft. At present, this is not a bipartisan bill and consequently may be viewed as one of the House Democratic bills released this Congress.

This bill’s profile was raised a bit last week when the New Democrat Coalition, “the largest ideological House caucus…more than forty percent of the Democratic Caucus” according to its website, formally endorsed H.R. 2013. The group says of itself: “[t]he New Democrat Coalition is made up of 104 forward-thinking Democrats who are committed to pro-economic growth, pro-innovation, and fiscally responsible policies.” In their press release, the New Democrat Coalition summarized H.R. 2013 thusly:

This bill will give people control over their most sensitive information and improve enforceability. This legislation requires the Federal Trade Commission (FTC) to mandate disclosure from companies on what information they are collecting and why, especially if it is being shared with another party.

The primary sponsor of the “Information Transparency & Personal Data Control Act,” Suzan DelBene, serves as the Vice Chair for Policy Coordination for the New Democrat Coalition.

And, while the New Democrat Coalition may be the largest single group among House Democrats, their endorsement does not necessarily mean H.R. 2013 will now become the party’s de facto bill. Firstly, Speaker Nancy Pelosi (D-CA) has said she will oppose any bill that would weaken strong state laws like those in California under the soon to take effect “California Consumer Privacy Act” (CCPA) (A.B. 375). This is a position shared by a number of Democrats in the House and Senate. H.R. 2013 is not nearly as stringent a bill as the CCPA even though it does not entirely preempt state laws, so in order for this bill to pass the House, the bill itself would need to change or Members like Pelosi would need to soften their position. Also, 23 of the New Democrats are from California and would likely feel pressure from some California stakeholders to oppose any bill that would weaken the CCPA, and quite possibly pressure from the Speaker herself, too. Moreover, DelBene does not sit on House Energy and Commerce, the primary committee of jurisdiction, and it is more likely that any bill the House considers will be drafted by the Democrats on the committee such as Chair Frank Pallone Jr (D-NJ) and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL).

However, let’s turn to the substance of H.R. 2013. Generally, this bill would require that all data “controllers” must secure opt-in consent from consumers to collect, use, share, or sell their “sensitive personal information” subject to significant exceptions. Controllers would need to draft and publish their data usage, security, and privacy plans, and then be audited annually by independent third parties. The FTC would implement and oversee this new regime, with state attorneys general being able to bring enforcement actions if the FTC does not act. Controllers who violate the new standards would be subject to enforcement, including fines in the first instance and injunctive and equitable remedies under the FTC Act.

In terms of who would be part of the new privacy regulation scheme, the bill sweeps fairly wide. A “controller” is defined as “a person that, on its own or jointly with other entities, determines the purposes and means of processing sensitive personal information.” The bill would explicitly pull “common carriers” (i.e. telecommunications companies) into the FTC’s jurisdiction. Common carriers are normally subject to the jurisdiction of the Federal Communications Commission in regards to privacy. However, because common carriers are explicitly named as being part of the FTC’s jurisdiction, that would suggest that other entities not usually under the agency’s jurisdiction would not be subject to this bill (e.g. non-profits). Would entities all over the world that qualify as controllers or processors be subject to the FTC’s enforcement powers the way U.S. firms are subject to the General Data Protection Regulation (GDPR)? It would seem so.

Also, the FTC would have jurisdiction over “processors” who are people “that process[] data on behalf of the controller,” meaning that data brokers may get swept into the new privacy protection regulatory regime. However, it is not immediately clear if a data broker would be considered a controller or a processor. And finally, unlike some proposed data security bills, there is no carve out for entities subject to and in compliance with existing federal data security and privacy regimes like HIPAA and Gramm-Leach-Bliley.

In terms of implementation, like many other privacy bills, the FTC would be required to promulgate regulations within one year under the Administrative Procedure Act (i.e. notice and comment rulemaking) instead of the lengthier Magnuson-Moss procedures the agency usually must use. These regulations would put in place the requirements that controllers and processors of data would need to meet, including obtaining opt-in consent from consumers before their data could be collected and shared. As a general matter, consumers would need to opt in to the use and sharing of their “sensitive personal information,” but they would need to opt out of such practices if they pertain to “non-sensitive personal information.” The dividing line between the two types of information would be crucial, and the bill provides broad categories of information that would qualify as “sensitive personal information.” The FTC will undoubtedly need to flesh out some of the categories of “sensitive personal information” such as “health information,” “genetic information,” “biometric information,” and other terms.
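To illustrate the opt-in/opt-out dividing line the FTC would have to police, here is a hypothetical sketch of how a controller might route consent based on whether a data category is sensitive; the category lists and the function are my own illustrative assumptions, and the real boundary would turn on the FTC’s eventual definitions.

```python
# Hypothetical category lists; under H.R. 2013 the FTC's regulations would
# ultimately define what counts as "sensitive personal information."
SENSITIVE_CATEGORIES = {
    "health_information", "genetic_information", "biometric_information",
    "financial_account_information", "social_security_number",
}
NON_SENSITIVE_CATEGORIES = {"information_related_to_employment", "de_identified_information"}

def required_consent(category: str) -> str:
    """Return the consent model a controller would apply to a data category."""
    if category in SENSITIVE_CATEGORIES:
        return "opt-in"    # affirmative consent before collection, use, or sharing
    if category in NON_SENSITIVE_CATEGORIES:
        return "opt-out"   # permitted unless the consumer objects
    return "opt-in"        # a conservative default pending FTC regulations

print(required_consent("biometric_information"))              # opt-in
print(required_consent("information_related_to_employment"))  # opt-out
```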

Likewise, the FTC will need to grapple with the term “information related to employment,” which is one of the categories of non-sensitive personal information controllers would not need opt-in consent to collect, share, and use. It is easy to see how this term may overlap with some categories of sensitive personal information such as health information, Social Security number, financial account information, genetic information, and/or biometric information amongst others. This discussion of non-sensitive personal information also must mention another significant exception: “de-identified information (or the process of transforming personal data so that it is not directly relatable to an identified or identifiable consumer).” This provision seems to provide an incentive to controllers to de-identify sensitive personal information to the extent possible so that it is protected in the event of unauthorized access or acquisition but also so that it may be subject to the lesser requirements due for handling and using non-sensitive personal information. Presumably encrypting sensitive personal information would result in it being de-identified, for properly encrypted data could not be traced back to an identified or identifiable consumer. The bill is not entirely clear, and the FTC may well see the need to fill this gap when it promulgates regulations to effectuate this provision if it is enacted.

The FTC would also be charged with enforcing the new regime, but state attorneys general would also be empowered to bring enforcement actions in certain situations. Notably, state attorneys general could bring actions in the event the FTC does not act regarding alleged violations. However, state attorneys general would not be able to seek the full range of remedies available to the FTC and would instead only be able “to obtain appropriate injunctive relief,” which may include temporary and permanent injunctions, disgorgement, restitution, rescission, and other such relief. But, a recent Seventh Circuit case (see article below) may cause the sponsors to broaden this term to all equitable relief to ensure that all such remedies may be sought.

In addition to controllers needing to get consumers to opt-in for some types of data collection and sharing, they would also need to “[p]rovide users with an up-to-date, transparent privacy, security, and data use policy that meets general requirements” including being “concise and intelligible,” “clear and prominent in appearance,” and “uses clear and plain language.” This policy would also need to include the following, among other information:

  • The “[i]dentity and contact information of the entity collecting the sensitive personal information.
  • [T]he purpose or use for collecting, storing, processing, selling, sharing, or otherwise using the sensitive personal information.
  • Third parties with whom the sensitive personal information will be shared and for what purposes.
  • How consent to collecting, storing, processing, selling, sharing, or otherwise using the sensitive personal information, including sharing with third parties, may be withdrawn.
  • What kind of sensitive personal information is collected and shared.
  • Whether the sensitive personal information will be used to create profiles about users and whether they will be integrated across platforms.
  • How sensitive personal information is protected from unauthorized access or acquisition.

Presumably the failure of a controller to comply with its own privacy, security, and data use policy could result in the FTC or a state attorney general bringing an action for unfair or deceptive practices under the FTC Act.

The exceptions are significant and depending on how the FTC construes these in regulation could determine how stringent or permissive the new data privacy regime would be. Despite the seemingly robust opt-in and transparency requirements, there are some significant exceptions to the general rule that consumers must opt-in before controllers may collect and share their sensitive personal information, namely:

  • Preventing or detecting fraud, identity theft, or criminal activity.
  • The use of such information to identify errors that impair functionality or otherwise enhancing or maintaining the availability of the services or information systems of the controller for authorized access and use.
  • Protecting the vital interests of the consumer or another natural person.
  • Responding in good faith to valid legal process or providing information as otherwise required or authorized by law.
  • Monitoring or enforcing agreements between the controller and an individual, including but not limited to, terms of service, terms of use, user agreements, or agreements concerning monitoring criminal activity.
  • Protecting the property, services, or information systems of the controller against unauthorized access or use.
  • Advancing a substantial public interest, including archival purposes, scientific or historical research, and public health, if such processing does not create a significant risk of harm to consumers.

Yet, the most significant exception may be in section (b)(2), which I’ll quote in full: “[t]he [FTC] regulations promulgated pursuant to subsection (a) with respect to the requirement to provide opt-in consent shall not apply to the processing, storage, and collection of sensitive personal information or behavioral data in which such processing does not deviate from purposes consistent with a controller’s relationship with users as understood by the reasonable user.” Consequently, for the consumer using their Gmail account, any of Google’s processing of sensitive personal information may not be considered a deviation “from purposes consistent with a controller’s relationship with users as understood by the reasonable user.” The same may also apply to the current practices of Apple, Yahoo!, Microsoft, Amazon, etc. Not only would this represent a huge carve out to the requirement that consumers must opt in after receiving clear and easy to understand notice of what data is being collected, shared, and processed, with whom, and for what purposes, it would seem to advantage those controllers already operating in the marketplace, for they would not need to give consumers the choice of whether to opt in.

Controllers of sensitive personal data would need a “qualified, objective, independent third-party” to conduct an annual “privacy audit,” and then the controller would need to reveal publicly whether it is in compliance. There may be issues related to the incentive structure in that these third parties will be competing for the business of data controllers and may be inclined to slant their audits towards compliance for the sake of client management. Perhaps the bill would benefit from some of the measures enacted under Sarbanes-Oxley to weaken the incentives for auditors to water down their audits. Another issue may be that these audits do not need to be submitted to the FTC or state attorneys general until one of these regulatory officials makes known to the controller “allegations that a violation of this Act or any regulation issued under this Act has been committed by the controller.” From a compliance standpoint, submitting all audits to the FTC in the same way companies must submit financial information to the Securities and Exchange Commission (SEC) would allow the FTC to have a better sense of compliance with its regulations, flag early any industry-wide trends or problems, or, yes, take enforcement action against non-compliant controllers. Of course such a system would be generally less attractive to data controllers. Finally, audits would not be necessary for small businesses, that is, controllers with the sensitive personal information of fewer than 5,000 people, and no audits would be necessary for non-sensitive information.

In terms of preempting state laws like the CCPA, this bill takes a seeming middle path. H.R. 2013 would preempt state laws “to the degree the law is focused on the reduction of privacy risk through the regulation of the collection of sensitive personal information and the collection, storage, processing, sale, sharing with third parties, or other use of such information.” However, this preemption applies only to controllers subject to this bill. In what may prove important language, any controllers outside the scope of this bill would find themselves subject to state laws on privacy. Moreover, any state laws on processors would not be preempted by H.R. 2013, meaning entities like data brokers may still be subject to the CCPA, for example.

And, yet, this bill would seem to create some sunlight for states to add privacy and data security requirements above the federal floor created by this bill. To wit, the bill provides that “[a]ny private contract based on a State law that requires a party to provide additional or greater privacy for sensitive personal information or data security protections to an individual than this Act” would not be preempted. Therefore, in statute a state could make reference to H.R. 2013 as enacted and then require controllers and processors operating in those states to provide additional privacy or data security measures above and beyond those in FTC regulations.

The FTC would be directed to hire “50 new full-time employees to focus on privacy and data security, 15 of which shall have technology expertise,” and appropriations of $35 million would be authorized for the FTC “for issues related to privacy and data security.” Of course, appropriators would then have to actually appropriate these funds before the FTC ever saw an additional dollar. And, to contextualize this funding increase, the House’s FY 2020 bill that funds the FTC would provide the agency with $349.7 million, so the “Information Transparency & Personal Data Control Act” would increase the agency’s funding by roughly 10% above the House’s preferred FY 2020 funding level and by a slightly higher percentage compared to FY 2019 funding for the FTC.


Spotlight: A Privacy Bill A Week

Last week, we took a look at Senate Finance Committee Ranking Member Ron Wyden’s (D-OR) “Consumer Data Protection Act” discussion draft, not to be confused with Senator Bob Menendez’s (D-NJ) “Consumer Data Protection Act” (S. 2188), a data security and breach notification bill. As discussed at some length, in short, Wyden’s bill would vastly expand the power of the Federal Trade Commission (FTC) to police both the security and privacy practices of many U.S. and international multinational companies. The FTC would receive the authority to levy fines in the first instance, potentially as high as the European Union’s General Data Protection Regulation cap of 4% of annual gross revenue. Moreover, the operative definition of the “personal information” that must be protected or subject to the privacy wishes of a consumer is very broad. The bill would also sweep into the FTC’s jurisdiction artificial intelligence (AI) and algorithms (i.e. so-called big data).

While the “Consumer Privacy Protection Act of 2017” (H.R. 4081) from the 115th Congress also focuses on data security, it still contains provisions that would require those entities covered by the bill to better protect consumers’ privacy. Representative David Cicilline (D-RI) sponsored the House bill and is now the chairman of the House Judiciary Committee’s Antitrust, Commercial and Administrative Law Subcommittee that is conducting an investigation into possible anti-competitive practices in the technology industry. 11 other House Democrats cosponsored this bill, which was not considered at all in the last Congress. Senator Patrick Leahy (D-VT) and some Senate Democrats introduced S. 2124, a bill that is substantially similar to the House version.

Not surprisingly, this bill would make certain conduct related to data security subject to possible criminal liability. This would differentiate this bill from a number of the other bills, save for Senator Ron Wyden’s (D-OR) discussion draft. A likely reason for this difference is that a number of the sponsors of both bills serve on the Judiciary Committees, and in order for data security and privacy bills to be referred to those committees there must be matter in the bill subject to the jurisdiction of those committees. However, this is not to suggest there is merely craven politics at work. Instead there is likely legitimate concern that the problems presented by these areas will not be solved absent stiff penalties for egregious conduct.

Generally, covered entities must design a consumer privacy and data security program tailored to the risks associated with the entity’s data activities, including conducting risk assessments, managing and controlling risks, performing vulnerability tests, and periodically assessing and upgrading hardware, software, and technology. Covered entities would include almost all entities except those in compliance with the Financial Services Modernization Act of 1999 (aka Gramm-Leach-Bliley) or Health Insurance Portability and Accountability Act of 1996 (HIPAA)/Health Information Technology for Economic and Clinical Health (HITECH) Act and “service providers” (i.e. ISPs that are solely engaged in the “transmission, routing, or temporary, intermediate, or transient storage of [electronic] communication”). An additional exception exists for those entities that would be otherwise covered, for there is a threshold of collecting, using, storing, transmitting, or disposing of the sensitive personally identifiable information of at least 10,000 people in any 12-month period before the data security requirements of the bill attach.

“Sensitive personally identifiable information” is defined as “any information or compilation of information, in electronic or digital form that includes” the usual sort of information policymakers want protected (e.g. Social Security number, driver’s license number, biometric data, etc.). However, this definition also sweeps in the types of data protected under HIPAA/HITECH Act regulations, geolocation data, financial account numbers, credit or debit card numbers, and password-protected digital photographs and digital videos not otherwise available to the public.

The FTC is directed to promulgate regulations under APA notice and comment procedures, but the phrasing suggests the FTC’s latitude in drafting regulations may be limited. The bill provides that covered entities must comply with the “following safeguards and any other administrative, technical, or physical safeguards identified by the FTC in a rulemaking process…for the protection of sensitive personally identifiable information.” Consequently, covered entities would need to understand and hew to the new consumer privacy and data security program laid out in Section 202(a) and to the subsequent “other administrative, technical, or physical safeguards identified by the FTC” in a rulemaking, possibly leading to additional, to-be-determined requirements. The choice of the word “identified” seems to be key here. A fair reading of this provision is that the FTC would merely identify the additional standards, as opposed to conducting a traditional rulemaking under which the agency would have greater discretion to determine the standards with which entities must comply. Additionally, the bill stipulates covered entities must “implement a consumer privacy and data security program pursuant to this subtitle” within one year of enactment, yet there is no timeline by which the FTC must promulgate its regulations. So, covered entities would need to read the requirements in Subtitle A of Title II (i.e. Consumer Privacy and Security of Sensitive Personally Identifiable Information), make their best effort to comply, and then wait for the FTC’s additional regulations at some point in the future.

With respect to enforcement, either the Department of Justice (DOJ) or the FTC could file civil litigation in federal court. Both agencies could seek fines of up to $16,500 per individual whose sensitive personally identifiable information has been breached, with a cap of $5 million on total fines unless the conduct is found to be willful and intentional, at which point fines would be uncapped. Like the other bills, this one would let the FTC treat alleged violations of the new security and privacy regime as an “unfair or deceptive act or practice in commerce in violation of a regulation,” allowing the agency to pursue civil fines in the first instance. State attorneys general could also bring actions under this section but usually only after alerting the DOJ and the FTC.

H.R. 4081 does not create a private right of action for consumers allegedly harmed by a breach, but it explicitly does not preempt avenues under state law by which a consumer could file suit (e.g. tort or contract actions). Likewise, the bill sets a floor for security and privacy standards, and only those state laws less stringent than the new federal regime would be preempted.

As mentioned, some failures to meet the requirements of this bill would result in criminal liability. Title I of the bill would make it a felony to conceal a security breach of sensitive personally identifiable information. However, any person accused of concealing such a breach must have knowledge of the breach, must “intentionally and willfully” act to conceal it, and the breach must result in economic harm of at least $1,000 to at least one person. This title would also require the DOJ to report on the number of prosecutions under the Computer Fraud and Abuse Act (CFAA) related to exceeding authorization on a computer system or unauthorized access to a computer system. The federal government would also receive authority to shut down bot networks.

Spotlight: A Privacy Bill A Week: “Consumer Data Protection Act”

Last week, we dived into Senator Catherine Cortez Masto’s (D-NV) “Digital Accountability and Transparency to Advance Privacy Act” (DATA Privacy Act) (S. 583). Of course, Cortez Masto served as the attorney general of Nevada for eight years prior to succeeding former Senator Harry Reid (D-NV), and this bill demonstrates her background as her state’s top prosecutor. This week, we will analyze the most stringent, most pro-consumer bill on privacy that I have seen introduced in this or the last Congress.

In November, Senate Finance Committee Ranking Member Ron Wyden (D-OR) released the “Consumer Data Protection Act” discussion draft, along with a section-by-section summary and a one-pager; the legislation is not to be confused with Senator Bob Menendez’s (D-NJ) “Consumer Data Protection Act” (S. 2188), a data security and breach notification bill. In short, Wyden’s bill would vastly expand the power of the Federal Trade Commission (FTC) to police both the security and privacy practices of many U.S. and international multinational companies. The FTC would receive the authority to levy fines in the first instance, potentially as high as those under the European Union’s General Data Protection Regulation (GDPR): 4% of annual gross revenue. Moreover, the operative definition of the “personal information” that must be protected or subject to the privacy wishes of a consumer is very broad. The bill would also sweep artificial intelligence (AI) and algorithms (i.e. so-called big data) into the FTC’s jurisdiction.

The “Consumer Data Protection Act” would dramatically expand the types of harms the FTC could use its authority to punish, explicitly including privacy violations and noneconomic injuries. Currently, the FTC must use its Section 5 powers against unfair and deceptive practices, or another statutory basis such as COPPA, to target the privacy practices it considers unacceptable. Wyden’s bill would allow the FTC to enforce the FTC Act, as amended by his bill, against “noneconomic impacts and those creating a significant risk of unjustified exposure of personal information” as among the “substantial injur[ies]” made illegal. It is worth seeing the proposed language in the context of the relevant section of the FTC’s organic statute (i.e. 15 U.S.C. 45(n)):

(n) Standard of proof; public policy considerations

The Commission shall have no authority…to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury including those involving noneconomic impacts and those creating a significant risk of unjustified exposure of personal information to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition (the bill would add the language running from “including those involving noneconomic impacts” through “unjustified exposure of personal information”).

The outer limits of the FTC’s new authority would likely be defined in court actions testing what constitutes “noneconomic impacts” and which substantial injuries create a significant risk of unjustified exposure of personal information. If this language were enacted, industry groups and conservative advocacy organizations would undoubtedly search zealously for test cases to try to circumscribe this authority as narrowly as possible. Finally, it bears noting that this sort of language harkens back to the FTC’s construction of its statutory powers in the 1960s and 1970s, which was considered so expansive that a Democratic Congress reined in the agency and limited its purview.

The FTC’s authority to levy civil fines through an administrative proceeding would be dramatically expanded along the lines of the power EU regulators have to levy massive fines under the General Data Protection Regulation. Notably, without securing a court order, the agency could impose civil fines as part of a cease and desist order in an amount up to the higher of $50,000 per violation or 4% of the annual gross revenue of the offender in the previous fiscal year. The upper limits of such a fine structure get very high, very quickly. For example, a violation affecting 100,000 people yields an upper boundary of $5 billion, assuming one violation per person. The privacy violations associated with Facebook’s conduct with Cambridge Analytica affected 87 million people worldwide, and again assuming one violation per person, the upper boundary of the fine the FTC could levy would be $4,350,000,000,000. The FTC would likely not exercise this power to the utmost but rather dial the fine back to a more reasonable yet still punitive amount. Nonetheless, the FTC would also have the ability to recover up to $50,000 per violation or 4% of gross annual revenue for any violations of cease and desist orders by filing an action in federal court.
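To make the arithmetic above concrete, here is a minimal sketch of the fine ceiling, assuming one violation per affected person and reading the draft as permitting the higher of $50,000 per violation or 4% of annual gross revenue; the revenue figures plugged in below are invented for illustration.

    PER_VIOLATION_CAP = 50_000  # dollars, per the discussion draft as described above

    def fine_ceiling(affected_people: int, annual_gross_revenue: float) -> float:
        """Upper bound on a civil fine under this reading of the draft."""
        per_violation_total = PER_VIOLATION_CAP * affected_people
        revenue_based_total = 0.04 * annual_gross_revenue
        return max(per_violation_total, revenue_based_total)

    print(f"${fine_ceiling(100_000, 1_000_000_000):,.0f}")      # $5,000,000,000
    print(f"${fine_ceiling(87_000_000, 55_000_000_000):,.0f}")  # $4,350,000,000,000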

Despite expanding the FTC’s powers dramatically, the entities subject to the agency’s new enforcement powers would not include many medium and small businesses. Covered entities are described as those with “more than $50,000,000 in average annual gross receipts for the 3-taxable-year period preceding the fiscal year” and the “personal information” of more than 1,000,000 consumers and 1,000,000 consumer devices. Additionally, a covered entity may be an affiliate or subsidiary of an entity that meets the aforementioned qualifications. Finally, the term “covered entity” covers all data brokers or commercial entities “that, as a substantial part of their business, collects, assembles, or maintains personal information concerning an individual who is not a customer or an employee of that entity in order to sell or trade the information or provide third-party access to the information.”

Additionally, a subset of these covered entities must submit annual data protection reports to the FTC: those with more than $1 billion in annual revenues that “stores, shares, or uses personal information on more than 1,000,000 consumers or consumer devices” and those “that stores, shares, or uses personal information on more than 50,000,000 consumers or consumer devices.” Those entities must report “in detail whether, during the reporting period, the covered entity complied with the regulations” the FTC would promulgate to effectuate the “Consumer Data Protection Act” and, to the extent they did not comply, which regulations were violated and the number of consumers affected.
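For readers who prefer the thresholds as logic, the sketch below restates the coverage and reporting tests just described; the data structure, field names, and helper functions are my own illustration, not anything in the bill text.

    from dataclasses import dataclass

    @dataclass
    class Entity:
        avg_annual_gross_receipts: float   # 3-taxable-year average, in dollars
        annual_revenue: float              # most recent year, in dollars
        consumers_with_personal_info: int
        devices_with_personal_info: int
        is_data_broker: bool = False

    def is_covered(e: Entity) -> bool:
        # Size-and-scale test; affiliates and subsidiaries of qualifying entities
        # would also be covered, which this simple check does not capture.
        size_test = (e.avg_annual_gross_receipts > 50_000_000
                     and e.consumers_with_personal_info > 1_000_000
                     and e.devices_with_personal_info > 1_000_000)
        return size_test or e.is_data_broker

    def must_file_annual_report(e: Entity) -> bool:
        reach = max(e.consumers_with_personal_info, e.devices_with_personal_info)
        return (e.annual_revenue > 1_000_000_000 and reach > 1_000_000) or reach > 50_000_000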

Each report must “be accompanied by a written statement by the chief executive officer, chief privacy officer (or equivalent thereof), and chief information security officer (or equivalent thereof) of the company” that certifies the report fully complies with the requirements of the new statute. Any such person who certifies an annual data protection report knowing it does not meet the requirements of this section, or who does so intentionally, faces prison time and/or a personal fine tied to compensation, depending on the state of mind behind the false certification. Any CEO, chief privacy officer, or chief information security officer who knowingly certifies a false report faces a fine of the greater of $1 million or 5% of the highest annual compensation for the previous three years and up to ten years in prison. Intentional violations expose these corporate officials to the greater of a $5 million fine or 25% of the highest annual compensation for the previous three years and 20 years in prison.

Of course, falsely certifying while knowing that a report fails to meet all the requirements exposes a person to less criminal liability than intentionally certifying a false report. However, the substantive difference between knowing certification and intentional certification is not immediately clear. Perhaps the bill intends knowledge to mean constructive knowledge (i.e. knew or should have known) while intentionality in this context means actual knowledge.
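A quick sketch of the certification-penalty math, using the floors and percentages described above; the $30 million compensation figure is a made-up example.

    def false_certification_fine(highest_annual_comp_last_3_years: float, intentional: bool) -> float:
        """Greater-of fine for falsely certifying an annual data protection report."""
        if intentional:
            return max(5_000_000, 0.25 * highest_annual_comp_last_3_years)
        return max(1_000_000, 0.05 * highest_annual_comp_last_3_years)

    # For an executive whose best year in the prior three paid $30 million:
    print(false_certification_fine(30_000_000, intentional=False))  # 1,500,000.0 (knowing)
    print(false_certification_fine(30_000_000, intentional=True))   # 7,500,000.0 (intentional)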

With respect to the information covered entities would need to safeguard, the bill defines “personal information” as “any information, regardless of how the information is collected, inferred, or obtained that is reasonably linkable to a specific consumer or consumer device,” a very broad definition. Wyden’s bill also defines “use,” “share,” and “store” in the context of personal information:

  • “share’’—
    • means the actions of a person, partnership, or corporation transferring information to another person, partnership, or corporation; and
    • includes actions to knowingly—
      • share, exchange, transfer, sell, lease, rent, provide, disclose, or otherwise permit access to information; or
      • enable or facilitate the collection of personal information by a third party.
  • ‘‘store’’—
    • means the actions of a person, partnership, or corporation to retain information; and
    • includes actions to store, collect, assemble, possess, control, or maintain information.
  • ‘‘use’’ means the actions of a person, partnership, or corporation in using information, including actions to use, process, or access information.

The FTC would be required to promulgate the detailed regulations discussed below within two years of enactment. This timeline may be more realistic than those in many of the other bills, which task the agency with detailed, extensive rulemakings within a year, a deadline the FTC may have trouble meeting. Nonetheless, the agency could take the first year or even 15 months to draft proposed regulations for comment.

The bill would task the FTC with establishing and running a ‘‘Do Not Track’’ data sharing opt-out website that would stop covered entities from sharing a consumer’s personal information, subject to certain exceptions. One exception covers personal information acquired before a consumer opts out, when a covered entity needs to share the information to achieve the primary purpose for which the information was initially acquired. Additionally, the bar on sharing would apply to personal information a covered entity acquires from non-covered entities.

The FTC would also need to determine the technological means by which a consumer’s opt-out on its website could be effectuated through web browsers or operating systems. The agency would also need to devise a method by which covered entities could determine which consumers have opted out, possibly through the development of an FTC Application Programming Interface (API). Thereafter, covered entities would have a duty to check the FTC’s opt-out database at regular intervals to ensure they are honoring consumers’ decisions to opt out. Covered entities would not need to respect a consumer’s opt-out in the event of legally required disclosures to the government, such as under warrants or subpoenas. The FTC would also need to “establish standards and procedures, including through an API, for a covered entity to request and obtain consent from a consumer who has opted-out…for the covered entity to not be bound by the opt-out,” including providing a list of third parties with whom personal information might be shared and a description of such information. And, if the covered entity requires consumers to consent to usage of their personal information before its products or services can be used, then the covered entity must “notify the consumer that he or she can obtain a substantially similar product or service in exchange for monetary payment or other compensation rather than by permitting the covered entity to share the consumer’s personal information.”
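To illustrate how this might work mechanically, here is a purely hypothetical sketch of a covered entity checking opt-out status before sharing. The endpoint, query parameter, and response format are invented; the bill leaves the actual design of any such API to FTC rulemaking.

    import json
    import urllib.parse
    import urllib.request

    # Placeholder endpoint; no such FTC service exists.
    FTC_OPTOUT_ENDPOINT = "https://example.invalid/ftc/do-not-track/v1/status"

    def has_opted_out(consumer_token: str) -> bool:
        """Ask the (hypothetical) registry whether this consumer has opted out."""
        url = f"{FTC_OPTOUT_ENDPOINT}?consumer={urllib.parse.quote(consumer_token)}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("opted_out", False)

    def share_personal_information(consumer_token: str, record: dict, recipient: str) -> None:
        # Covered entities would need to check the registry at regular intervals and
        # honor an opt-out unless they have obtained consent to be unbound from it.
        if has_opted_out(consumer_token):
            raise PermissionError("Consumer has opted out; sharing is barred absent consent.")
        # ... transfer `record` to `recipient` here ...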

The FTC must also “establish standards and procedures requiring that when a non-covered entity that is not the consumer shares personal information about that consumer with a covered-entity, the covered entity shall make reasonable efforts to verify the opt-out status of the consumer whose personal information has been shared with the covered entity.” Thereafter covered entities may only use or store this personal information if a consumer has not opted out on the FTC’s website or if the covered entity has received the consumer’s consent for non-covered entities to collect and share their information.

Additionally, the FTC must draft regulations detailing the “standards and procedures” covered entities and non-covered entities must follow “to request and obtain consent from a consumer…that clearly identifies the covered entity that will be storing or using the personal information and provides the consumer” with certain information at the time consent is sought. Consumers must be informed “in a form that is understandable to a reasonable consumer” of the entity from whom personal information is to be obtained, the type of personal information to be collected, and the purposes for which such information will be used.

Certain acts would be prohibited. Covered entities could not require consumers to change their opt-out election on the FTC’s website in order to access products and services “unless the consumer is also given an option to pay a fee to use a substantially similar service that is not conditioned upon a requirement that the consumer give the covered entity consent to not be bound by the consumer’s opt-out status.” Moreover, this fee “shall not be greater than the amount of monetary gain the covered entity would have earned had the average consumer not opted-out.”

Wyden’s bill also marries data security requirements with privacy protections for consumers, a position articulated by a number of prominent Democrats. Notably, the FTC would need to promulgate regulations that

  • require each covered entity to establish and implement reasonable cyber security and privacy policies, practices, and procedures to protect personal information used, stored, or shared by the covered entity from improper access, disclosure, exposure, or use;
  • require each covered entity to implement reasonable physical, technical, and organizational measures to ensure that technologies or products used, produced, sold, offered, or leased by the covered entity that the covered entity knows or has reason to believe store, process, or otherwise interact with personal information are built and function consistently with reasonable data protection practices;

The FTC would also need to draft regulations requiring “each covered entity to provide, at no cost, not later than 30 business days after receiving a written request from a verified consumer about whom the covered entity stores personal information” a way to review any personal information stored, including how and when such information was acquired and a process for challenging the accuracy of any stored information. Additionally, these regulations would “require each covered entity to correct the stored personal information of the verified consumer if, after investigating a challenge by a verified consumer…the covered entity determines that the personal information is inaccurate.” Covered entities would also need to furnish a list of the entities with whom the consumer’s personal information was shared and other detailed information, including the personal information of the consumer the covered entity acquired not from the consumer but from a third party.

The “Consumer Data Protection Act” would also institute regulations and requirements related to the increasing use of so-called “big data,” algorithms, machine learning, and artificial intelligence. The FTC would need to promulgate regulations mandating that each covered entity must “conduct automated decision system impact assessments of existing high-risk automated decision systems, as frequently as the Commission determines is necessary; and…new high-risk automated decision systems, prior to implementation.” It is helpful to examine the bill’s definitions of ‘‘automated decision system,’’ “automated decision system impact assessment,’’ ‘‘high-risk automated decision system,’’ and “high-risk information system:”

  • ‘‘automated decision system’’ means “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”
  • “automated decision system impact assessment’’ means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security.
  • ‘‘high-risk automated decision system’’ means an automated decision system that—
    • taking into account the novelty of the technology used and the nature, scope, context, and purpose of the automated decision system, poses a significant risk—
      • to the privacy or security of personal information of consumers; or
      • of resulting in or contributing to inaccurate, unfair, biased, or discriminatory decisions impacting consumers;
    • makes decisions, or facilitates human decision making, based on systematic and extensive evaluations of consumers, including attempts to analyze or predict sensitive aspects of their lives, such as their work performance, economic situation, health, personal preferences, interests, behavior, location, or movements, that—
      • alter legal rights of consumers; or
      • otherwise significantly impact consumers;
    • involves the personal information of a significant number of consumers regarding race, color, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal convictions, or arrests;
    • systematically monitors a large, publicly accessible physical place; or
    • meets any other criteria established by the Commission in regulations…
  • ‘‘high-risk information system’’ means an information system that—
    • taking into account the novelty of the technology used and the nature, scope, context, and purpose of the information system, poses a significant risk to the privacy or security of personal information of consumers;
    • involves the personal information of a significant number of consumers regarding race, color, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal convictions, or arrests;
    • systematically monitors a large, publicly accessible physical place; or
    • meets any other criteria established by the Commission in regulations…

Consequently, algorithmic decision-making would be swept into the FTC’s new regime governing privacy and data security. Politically, however, this issue is not yet on most Members’ radar as related to privacy and data security, which marks the “Consumer Data Protection Act” as among the most forward-looking of the bills introduced over the last year. And yet it is likely that any privacy or data security bill Congress passes will not include such provisions; however, a state like California could decide to wade into this area, which, as with privacy, could force policymakers in Washington to consider an issue percolating up to the federal level from one of the state laboratories of democracy.

In terms of enforcement, the bill explicitly bars the use of any contracts contrary to the rights and requirements in the “Consumer Data Protection Act.” Like virtually all the other privacy bills, it would allow the FTC to ask a federal court for civil fines for a first offense, as high as slightly more than $40,000 per violation, in addition to all the FTC’s other powers.

This bill likely represents the outer bounds desired by the most ardent privacy and civil liberties advocates, and it is therefore highly unlikely to be enacted in its current form. Other Democratic bills are far more modest in scope, and few of them address both security and privacy. The chances of enactment are very low, but Congressional interest in privacy legislation will continue because of the GDPR and the California Consumer Privacy Act.

Rubio’s Privacy Bill

Okay, so I lied. I’m back with a privacy bill sooner than a week. In January, Senator Marco Rubio (R-FL) released a bill, the “American Data Dissemination (ADD) Act” (S. 142), that offers a different approach to privacy and technology by using the “Privacy Act of 1974” as a template for regulating those entities providing services on the internet. However, this approach, and other details in the bill, make it a likely non-starter for many House and Senate Democrats, particularly since it would preempt in significant part (if not entirely) the “California Consumer Privacy Act” (A.B. 375) and other privacy-oriented state statutes. Nonetheless, Rubio is a new entrant to the field of privacy and data security policy and may influence whatever legislation Congress produces.

Like most other data security and privacy bills, the Federal Trade Commission (FTC) would be the agency to enforce the new requirements and would be given jurisdiction over “covered provider[s]” a term defined as “a person that provides a service that uses the internet; and in providing the service…collects records.” This definition would encompass most entities doing business over the internet but would seem to exclude data brokers and other entities that buy, sell, collect, or share the personally identifiable information of people. Consumers would be given the right to access the “records” “covered providers” hold on them and then request changes to erroneous information. If the ultimate regulations align with the “Privacy Act of 1974” (5 USC 552a), then there may be significant exemptions that would function to limit consumer access to and control over the information held, used, and shared by businesses.

Rubio’s bill takes the unusual step of requiring that the FTC essentially clear its regulations with the House Energy and Commerce Committee and the Senate Commerce, Science, and Transportation Committee. The FTC would be required to submit to Congress “detailed recommendations for privacy requirements that Congress could impose on covered providers that would be substantially similar, to the extent practicable, to the requirements applicable to agencies under the Privacy Act of 1974.” Between 12 and 15 months after the FTC submits this report, it would be required to submit to Congress proposed regulations that would similarly make covered providers subject to requirements along the lines of how the Privacy Act of 1974 applies to federal agencies. The FTC is directed by the legislation to address a number of topics in these regulations, including

  • criteria by which the FTC could exempt certain small covered providers that would otherwise be subject to this bill based on the time an entity has been covered by the ADD Act, its revenues, and the number of people for whom it has records
  • establishing a process by which people could request access to a record and possibly have that record deleted if the covered provider elects to do so
  • requiring that consumers show that a record is “not accurate, relevant, timely, or complete” (terms to be defined by the FTC) before a covered entity is required to amend a record
  • establishing a dispute resolution process like the one for disputes between consumers and credit reporting agencies under the Fair Credit Reporting Act (FCRA) regarding one’s credit file
  • the establishment of a code of “fair information practices” for the secure collection, maintenance, and dissemination of records, with which a covered provider must comply

These regulations would also be published, presumably for comment from interested parties. However, the bill is silent on whether the FTC would have to use the more extensive Magnuson-Moss rulemaking procedures or the Administrative Procedure Act (APA) process most agencies utilize. Yet the drafters of the bill may intend for the FTC to use the process outlined in the bill, meaning a new means by which the FTC would promulgate regulations. In any event, if a statute based on the initial recommendations is not enacted within two years of passage of the ADD Act, then the FTC would be required to promulgate final regulations.

The Privacy Act of 1974 has been criticized by privacy and civil liberties advocates as being inadequate for protecting the privacy of Americans given how exceptions have been utilized by agencies and the arguably out-of-date definitions and concepts in the 45-year-old legislation. Additionally, unlike many Democratic bills, state attorneys general would have no role in enforcing the new regulations or laws. With respect to enforcement, the FTC could request that a federal court levy civil fines as high as $40,000 per violation. The bill would exempt HIPAA-covered entities and those regulated under the “Family Educational Rights and Privacy Act of 1974.” The FTC is given authority to determine whether the follow-on statute or regulations put in place under the ADD Act supersede Gramm-Leach-Bliley and the Children’s Online Privacy Protection Act (COPPA) in the case of conflicts.

A Privacy Bill A Week: The Data Care Act

As we wait for stakeholders in Congress to finalize and release their proposals to regulate how private sector companies handle, use, and distribute the private information of Americans, we thought there would be value in reviewing some of the key bills already introduced this Congress and some introduced over the last few Congresses so that when new bills are finally introduced, we will have a baseline by which to judge them.

This week, let’s examine the “Data Care Act” (S. 3744). In December 2018, fifteen Democratic Senators led by Senator Brian Schatz (D-HI), including presidential candidates Senators Michael Bennet (D-CO), Amy Klobuchar (D-MN), and Cory Booker (D-NJ), introduced a bill that would extend the concept of fiduciary responsibility currently binding on health care professionals and attorneys with respect to their patients’ and clients’ information to “online service providers.”

This bill built on a concept fleshed out by law professor Jack Balkin in his article “Information Fiduciaries and the First Amendment,” which would place duties on companies collecting and using consumer data similar to those that lawyers and doctors must meet in how they handle client and patient information. Balkin explained that these so-called “information fiduciaries” should “have special duties to act in ways that do not harm the interests of the people whose information they collect, analyze, use, sell, and distribute.”

Schatz has been in negotiations with other members of the Senate Commerce, Science, and Transportation Committee with the goal of developing a bipartisan bill to regulate privacy at a federal level. As discussed in past issues of the Technology Policy Update, stakeholders in both the House and Senate continue to negotiate privacy bills, but significant disagreements have been reported regarding whether such a bill would include a private right of action, whether it would preempt the “California Consumer Privacy Act” (CCPA) (A.B. 375) and other state laws, and whether a new regime would rest primarily on enhanced notice and consent or would bar certain conduct outright, among other issues.

In short, under the “Data Care Act,” “online service providers” would be severely limited in how they collect, share, and sell personally identifiable information (PII), for these companies would need to treat their customers’ PII as privileged and deserving of a greater level of protection, much as the HIPAA regulations impose this standard on health care providers or bar associations’ rules do on attorneys. However, the scope of who is an online service provider would seem to encompass most consumer-oriented companies doing business on the internet. Like most other privacy and data security bills, the Federal Trade Commission (FTC) would enforce the new regime.

An “online service provider” is defined as an entity “engaged in interstate commerce over the internet or any other digital network; and in the course of business, collects individual identifying data about end users, including in a manner that is incidental to the business conducted.” This sweeping definition would cover almost any business or entity doing business in the U.S., even one not operating across state lines, given how broadly the Supreme Court has often construed the Commerce Clause. However, the FTC would have the discretionary authority to exclude categories of online service providers from the fiduciary duties the bill would otherwise impose, and the agency is directed to consider the privacy risks posed by each category of online service provider.

The bill requires that “[a]n online service provider shall fulfill the duties of care, loyalty, and confidentiality” towards consumers’ personal information, which is also broadly defined in the bill. The duty of care requires online service providers to “reasonably” safeguard “individual identifying data” from unauthorized access and to notify consumers of any breach of this duty, subject to FTC regulations that would be promulgated. The duty of loyalty would require online service providers not to use the information in a way that benefits them to the detriment of consumers, including uses that would result in reasonably foreseeable material physical or financial harm to the consumer. Finally, the duty of confidentiality limits the disclosure or sale of consumers’ information to instances where the duties of care and loyalty are observed (i.e. when the information is safeguarded and not used to the detriment of consumers). Moreover, under this duty, should an online service provider wish to share or sell consumers’ information with a third party, it would need to enter into a contract requiring the third party to meet the same duties of care, loyalty, and confidentiality.

As noted, the FTC would enforce the act and would have the authority to levy fines in the first instance for violations, but state attorneys general would also be able to bring actions in the event the FTC does not act or after FTC action. This latter power has long been a Democratic priority in the realm of data security and may be a non-starter with Republicans. Moreover, the bill does not preempt state laws, meaning the FTC could investigate a violation under this act while states investigate under their own laws. The FTC would be given authority under the Administrative Procedure Act (APA) to promulgate regulations instead of using the much more onerous Magnuson-Moss rulemaking procedures the agency must otherwise employ; these include the aforementioned regulations on breach notification and possible exemptions from the duties that would otherwise apply to online service providers (e.g. for small companies). The bill also expands the FTC’s jurisdiction to cover non-profit entities and common carriers that may be online service providers.

Possible Preview of Federal Data Security Regulations?

If privacy legislation gets passed by Congress this year or next (although recent reports suggest a number of impasses between Republicans and Democrats), it might also contain language on data security standards. Such legislation would also likely direct the Federal Trade Commission (FTC) to conduct an Administrative Procedure Act (APA) rulemaking to promulgate regulations on privacy and data security. As most of the major bills provide that the FTC would use APA notice and comment procedures instead of the far lengthier Magnuson-Moss procedures, it is not far-fetched to envision FTC regulations on privacy and/or data security coming into effect, say, in the first year of the next Administration. However, what might FTC regulations on data security look like? The FTC’s recent proposed update to the Safeguards Rule may provide a roadmap, but first a little background.

The “Financial Services Modernization Act of 1999” (P.L. 106-102) (aka Gramm-Leach-Bliley) required financial services regulators to promulgate regulations to “protect the security and confidentiality of…customers’ nonpublic personal information.” The FTC, among other regulators, was required to “establish appropriate standards for the financial institutions…relating to administrative, technical, and physical safeguards-

  • (1) to insure the security and confidentiality of customer records and information;
  • (2) to protect against any anticipated threats or hazards to the security or integrity of such records; and
  • (3) to protect against unauthorized access to or use of such records or information which could result in substantial harm or inconvenience to any customer.”

The current Safeguards regulations were promulgated in May 2002 and reflect the thinking of the agency in the era before big data, widespread data breaches, smartphones, and other technological developments. Consequently, the regulations governing those financial services companies subject to FTC jurisdiction under Gramm-Leach-Bliley now seem vague and almost permissive in light of best practices and requirements subsequently put in place for many entities. The current Safeguards rule is open-ended and allows the regulated entity the discretion and flexibility to determine what constitutes the “information security program” it must implement based on the entity’s “size and complexity, the nature and scope of your activities, and the sensitivity of any customer information at issue.” Covered entities must perform risk assessments to identify and ideally remediate foreseeable internal and external risks. Subsequently, the covered entity must “[d]esign and implement information safeguards to control the risks you identify through risk assessment, and regularly test or otherwise monitor the effectiveness of the safeguards’ key controls, systems, and procedures.”

These regulations are not prescriptive and are more general in nature, or at least they seem so in retrospect. One would hope that any entity holding any modicum of sensitive consumer information is regularly and vigorously engaged in an ongoing practice of assessing and addressing risks. However, the repromulgation of the Safeguards Rule suggests this may not be the case.

The FTC is using its very broad grant of authority under Gramm-Leach-Bliley to revisit the Safeguards Rule as part of its periodic sweep of its regulations. The FTC explained that when it issued the current Safeguards Rule in 2002 “it opted to provide general requirements and guidance for the required information security program, without providing detailed descriptions of what the information security program should contain.” The FTC claimed that “[i]t took this approach in order to provide financial institutions with the flexibility to shape the information security programs to their particular business and to allow the programs to adapt to changes in technology and threats to the security and integrity of customer information.” The FTC asserted its belief that while the new provisions “continue to provide companies with flexibility, they also attempt to provide more detailed guidance as to what an appropriate information security program entails.”

In the proposed changes to the Safeguards Rule, the FTC is calling for “more specific security requirements” that “will benefit financial institutions by providing them more guidance and certainty in developing their information security programs, while largely preserving that flexibility.” It is possible that in offering more detailed prescriptions the FTC is responding to general criticisms that its data security standards are vague[1]. The FTC contends that the “proposed amendments provide more detailed requirements as to the issues and threats that must be addressed by the information security program, but do not require specific solutions to those problems.” The Commission claims “the proposed amendments retain the process-based approach of the Rule, while providing a more detailed map of what information security plans must address.”

The FTC explains

These amendments are based primarily on the cybersecurity regulations issued by the New York Department of Financial Services, 23 NYCRR 500 (“Cybersecurity Regulations”), and the insurance data security model law issued by the National Association of Insurance Commissioners (“Model Law”). The Cybersecurity Regulations were issued in February 2017 after two rounds of public comment. The Model Law was issued in October 2017. The Commission believes that both the Cybersecurity Regulations and the Model Law maintain the balance between providing detailed guidance and avoiding overly prescriptive requirements for information security programs. The proposed amendments do not adopt either law wholesale, instead taking portions from each and adapting others for the purposes of the Safeguards Rule.

However, the FTC does not merely lift provisions from each; rather, it uses them as guidelines in drafting its own regulations, picking, choosing, modifying, and discarding as it goes. A detailed analysis of all three sets of data security requirements is outside the scope of this article. Rather, I would like to hit some of the high points to illustrate both the FTC’s reliance on the two predecessor schemes and how the agency’s thinking on what constitutes adequate data security has evolved since 2002.

The FTC’s proposed Safeguards rule would generally require covered entities to encrypt consumers’ personal information at rest and in transit over external networks. Similarly, the use of multi-factor authentication would be required in most circumstances, and covered entities would need to engage in regular penetration testing.

As a threshold matter, the Commission defines what a “security event” is and how regulated entities must gear their data security to preventing or reducing the risk that a “security event” occurs. Under the currently effective regulations, there is no definition. The agency proposes that a “security event” will mean “an event resulting in unauthorized access to, or disruption or misuse of, an information system or information stored on such information system.” In the Federal Register notice, the FTC explained that “[t]his term is used in proposed provisions requiring financial institutions to establish written incident response plans designed to respond to security events and to implement audit trails to detect and respond to security events.”

The FTC would generally require covered entities to encrypt customer information at rest or in transit, subject to a significant exception. The agency’s reasoning seems to be that encryption should be used for sensitive consumer information when it is not prohibitively difficult or expensive to do so. The NAIC model statute charges regulated entities to “[d]etermine which security measures…are appropriate and implement such security measures,” including encryption, whereas the NYDFS rule requires the use of encryption of nonpublic information at rest or transmitted by covered entities unless the entity has determined doing so would be “infeasible,” a decision whose alternative controls the entity’s Chief Information Security Officer (CISO) must review and approve. The FTC followed the NYDFS in substantial part, and its language on the exemption from the encryption requirement follows word for word: “[t]o the extent you determine that encryption of customer information, either in transit over external networks or at rest, is infeasible, you may instead secure such customer information using effective alternative compensating controls reviewed and approved by your CISO.” It is not clear, moreover, what under this loophole would stop organizations regulated by the FTC from making this determination, for it appears the agency would have limited recourse in questioning a covered entity’s decision not to encrypt customer data. It is unclear if the FTC will keep this provision in the final regulation. In contrast, the NYDFS requires the CISO to revisit such decisions annually.
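As a concrete illustration of encryption at rest, the snippet below uses the third-party `cryptography` package’s Fernet recipe; neither the FTC proposal nor the NYDFS rule prescribes a particular algorithm or library, and key management, the hard part, is waved away here.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, held in a key management system, never stored beside the data
    f = Fernet(key)

    record = b"ssn=123-00-4567;account=000111222"   # fabricated customer record
    ciphertext = f.encrypt(record)                  # what would actually be written to disk
    assert f.decrypt(ciphertext) == record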

Tellingly, however, the FTC opted against the safe harbor the NAIC model statute offers when an entity has encrypted the exfiltrated or accessed data and the encryption process or key has not also been breached. The NYDFS regulations likewise lack such a safe harbor from what constitutes a security event triggering the reporting and notification requirements. Nonetheless, the FTC lifts its definition of encryption almost word for word from the NAIC model statute.

Another new requirement for covered entities is the use of multi-factor authentication. The FTC’s draft regulations provide that “[i]n order to develop, implement, and maintain your information security program, you shall…[d]esign and implement safeguards to control the risks you identify through risk assessment, including…multi-factor authentication.” The agency defines multi-factor authentication as “authentication through verification of at least two of the following types of authentication factors:

  • (1) Knowledge factors, such as a password;
  • (2) Possession factors, such as a token; or
  • (3) Inherence factors, such as biometric characteristics.”

The FTC explained that it “views multi-factor authentication as a minimum standard to allowing access to customer information for most financial institutions…[and] believes that the definition of multi-factor authentication is sufficiently flexible to allow most financial institutions to develop a system that is suited to their needs.” Nonetheless, “[t]he Commission seeks comment on whether this definition is sufficiently flexible, while still requiring the elements of meaningful multi-factor authentication.”
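The sketch below shows one way to combine two of the factor types in the definition above, a knowledge factor (a password) and a possession factor (a time-based one-time code per RFC 6238, the scheme most authenticator apps use); the secret handling is simplified, and a real deployment would store only hashed passwords.

    import base64, hashlib, hmac, secrets, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Compute the current time-based one-time password (RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify_two_factors(supplied_password: str, supplied_code: str,
                           expected_password: str, totp_secret_b32: str) -> bool:
        knowledge_ok = secrets.compare_digest(supplied_password, expected_password)
        possession_ok = secrets.compare_digest(supplied_code, totp(totp_secret_b32))
        return knowledge_ok and possession_ok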

Like the NYDFS and NAIC standards, the FTC would require “information systems under the Rule to include audit trails designed to detect and respond to security events.” The agency uses a National Institute of Standards and Technology (NIST) definition of audit trail: “chronological logs that show who has accessed an information system and what activities the user engaged in during a given period.” The FTC noted that this standard will “not require any specific type of audit trail, nor does it require that every transaction be recorded in its entirety,” but, crucially, “the audit trail must be designed to allow the financial institution to detect when the system has been compromised or when an attempt to compromise has been made.” Also, the FTC will not require that audit trails be retained for any set period of time; rather, covered entities must hold them for a “reasonable” period. What should the FTC expect regarding audit trails that date back to a “security event” that first occurred two years before it was discovered? Is two years a reasonable period of time to store audit trail materials?
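A bare-bones sketch of the kind of chronological, who-accessed-what audit trail the proposal contemplates follows; the fields and the two-year retention figure are illustrative choices on my part, not requirements drawn from the rule.

    import json, time

    AUDIT_LOG_PATH = "customer_info_audit.log"   # append-only file; real systems usually ship logs off-host
    RETENTION_SECONDS = 2 * 365 * 24 * 3600      # a two-year retention window, one possible reading of "reasonable"

    def record_access(user: str, action: str, record_id: str) -> None:
        """Append a timestamped entry noting who touched which customer record."""
        entry = {"ts": time.time(), "user": user, "action": action, "record": record_id}
        with open(AUDIT_LOG_PATH, "a") as log:
            log.write(json.dumps(entry) + "\n")

    record_access("analyst42", "read", "customer/981274")
    record_access("analyst42", "export", "customer/981274")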

Similarly, the draft Safeguards rule would “require financial institutions to take steps to monitor those users and their activities related to customer information in a manner adapted to the financial institution’s particular operations and needs.” The FTC noted that “[t]he monitoring should allow financial institutions to identify inappropriate use of customer information by authorized users, such as transferring large amounts of data or accessing information for which the user has no legitimate use.”

The FTC would bolster the current mandate that covered financial institutions “[r]egularly test or otherwise monitor the effectiveness of the safeguards’ key controls, systems, and procedures, including those to detect actual and attempted attacks on, or intrusions into, information systems.” The agency calls for “either ‘continuous monitoring’ or ‘periodic penetration testing and vulnerability assessments.’” However, in lieu of continuous monitoring, the FTC is willing to accept annual penetration testing and biannual vulnerability testing “reasonably designed to identify publicly known security vulnerabilities in your information systems based on the risk assessment.”
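A small sketch of the periodic-testing alternative, reading “annual” as within the past 365 days and “biannual” as within the past six months (one plausible reading of that word); the cadences come from the summary above, not the rule text itself.

    from datetime import date, timedelta
    from typing import Optional

    def meets_periodic_testing(last_pen_test: date, last_vuln_scan: date,
                               today: Optional[date] = None) -> bool:
        """True if the entity satisfies the testing alternative to continuous monitoring."""
        today = today or date.today()
        return (today - last_pen_test <= timedelta(days=365)
                and today - last_vuln_scan <= timedelta(days=183))

    print(meets_periodic_testing(date(2019, 1, 15), date(2019, 5, 1), today=date(2019, 9, 1)))  # True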

Finally, the FTC is proposing to carve out very small institutions that would otherwise fall within the scope of the rule because they “maintain relatively small amounts of customer information.” As a result, the draft Safeguards rule would exempt small covered entities from needing to:

  • 314.4(b)(1), requiring a written risk assessment;
  • 314.4(d)(2), requiring continuous monitoring or annual penetration testing and biannual vulnerability assessment;
  • 314.4(h), requiring a written incident response plan; and
  • 314.4(i), requiring an annual written report by the CISO.

The FTC articulated its belief that these are the requirements most likely “to cause undue burden on smaller financial institutions.”

The FTC seems to be balancing the expense imposed on these smaller institutions, which presumably have fewer resources for compliance, against the mandate of Gramm-Leach-Bliley to safeguard customer records and information. But, on its face, the underlying statute does not seem to delegate authority to the FTC to exempt small entities unless the directive to “establish appropriate standards” can be read as a grant of discretion in how the agency meets this Congressional mandate.

And yet, putting that issue aside for the moment, one wonders why the agency drew the line at those institutions that “maintain customer information concerning fewer than five thousand consumers.” Is there a quantitative difference between the resources available to businesses of this size and those “maintain[ing]” the consumer records and information of 7,500 or 10,000 or 20,000 consumers? Also, how exactly will “maintain” be construed? Will it be an annual average of the consumer information held by an institution? A monthly average? A threshold that an entity clears once, after which the Safeguards requirements attach? The agency did not explain its thinking on this point.

Incidentally, the FTC actually split on the proposed Safeguards regulation, with Commissioners Noah Joshua Phillips and Christine S. Wilson issuing a dissenting statement in which they extol the virtues of the current rule and assert the proposed regulation “trades flexibility for a more prescriptive approach, potentially handicapping smaller players or newer entrants.” This may suggest a future FTC would not propose a similarly prescriptive approach for privacy and/or data security regulations under yet-to-be-enacted legislation.

And yet, regardless of whether the FTC does proceed in this fashion, might the agency’s thinking on what constitutes acceptable data security under the powers granted by Section 5 of the FTC Act begin to resemble the more directive regime under the Safeguards rule? Given that the agency has not exactly spelled out what is “reasonable” data security, the general requirements for encryption, multi-factor authentication, and penetration testing could well get folded into what the FTC considers the sorts of practices entities will need to use in order not to violate the ban on deceptive and unfair practices.


[1] In LabMD, Inc. v. FTC, the Eleventh Circuit ruled against the FTC’s use of its Section 5 powers to enter into settlements requiring private entities to establish and maintain remedial, “reasonable” data security practices. The court held that such settlements are contrary to the FTC Act because they do not enjoin specific acts or practices but instead command entities to institute data security practices generally. The court also found that such settlements are ultimately unenforceable because they are vague as to what constitutes a reasonable data security regime.