Biden Administration Tech Policy: Federal Trade Commission (FTC)

Under President Joe Biden, the FTC will face most of the same issues presently before the agency.

In a Biden Administration, the FTC may tip from a 3-2 Republican majority, including the chair, to a Democratic majority if Chair Joseph Simons steps down, as has been rumored for some months, in part because of political pressure and displeasure from the Trump White House. It is not uncommon for chairs to stay on even when a President of a different party comes to power; in fact, it rarely occurs that a sitting chair resigns at the beginning of a new presidency, as then-Chair Edith Ramirez did at the beginning of the Trump Administration in 2017. In any event, by law, the President may not remove the FTC chair or any commissioner except for “inefficiency, neglect of duty, or malfeasance in office.”

The President may, and almost always does when the White House changes hands, designate a new chair, and either of the sitting Democratic Commissioners, Rebecca Kelly Slaughter or Rohit Chopra, could become the new chair. Chopra’s term actually ended in September 2019, though he can serve until he is re-confirmed or a successor is confirmed. It is not clear whether Chopra would be re-nominated given that his view on regulating is to the left of Biden’s historical position on such issues, but Chopra has support from Senator Elizabeth Warren (D-MA), a key stakeholder a Biden White House may try to keep happy. Chopra’s name has also been floated to head the Consumer Financial Protection Bureau (CFPB), the agency where he served as Deputy Director. So, it may come to pass that President-elect Joe Biden gets to appoint two Democrats to the FTC if Simons steps down and Chopra moves on to the CFPB.

Of course, the FTC will almost certainly continue as the de facto federal data protection authority (DPA) for the United States and will use its Section 5 powers to investigate and punish privacy, data security, and cybersecurity violations. The agency is one of the two federal antitrust enforcers, a recently revived area of federal law that has bipartisan interest and support, and is on the verge of filing an antitrust action against Facebook, alleging violations of antitrust law in the social messaging market, especially on account of its WhatsApp and Instagram acquisitions. Conceivably, the FTC under Democratic leadership may have a more aggressive posture towards technology companies and other swaths of the economy that have undergone increased consolidation.

Moreover, most of the privacy bills in Congress would assign the responsibility of enforcing the regime at the federal level to the FTC, a power it would share with state attorneys general as is the current case with respect to antitrust and data security enforcement. The crucial question will be whether the agency receives the resources necessary to maintain its current responsibilities while taking on new responsibilities. At present, the House is proposing a $10 million increase to the agency’s budget from $331 million to $341 million.

Another aspect of the FTC that bears watching is how federal courts construe the agency’s power because a significant portion of the FTC’s ability to use its enforcement powers will hinge on court cases and possible Congressional tweaks to the FTC Act.

A few weeks ago, the FTC wrote the House and Senate committees with jurisdiction over the agency, asking for language restoring the power to seek and obtain restitution for victims of those who have violated Section 5 of the FTC Act and disgorgement of ill-gotten gains. The FTC is also asking that Congress clarify that the agency may act against violators even if their conduct has stopped, as it has done for more than four decades. Two federal appeals courts have ruled in ways that limit the FTC’s long-used powers, and the Supreme Court of the United States is now set to rule on these issues sometime next year. The FTC claims defendants are playing for time in the hopes that the FTC’s authority to seek and receive monetary penalties will ultimately be limited by the highest court in the United States (U.S.). Judging by language tucked into a privacy bill introduced by the chair of one of the committees, Congress may be willing to act soon.

The FTC asked the House Energy and Commerce and Senate Commerce, Science, and Transportation Committees “to take quick action to amend Section 13(b) [of the FTC Act i.e. 15 U.S.C. § 53(b)] to make clear that the Commission can bring actions in federal court under Section 13(b) even if conduct is no longer ongoing or impending when the suit is filed and can obtain monetary relief, including restitution and disgorgement, if successful.” The agency asserted “[w]ithout congressional action, the Commission’s ability to use Section 13(b) to provide refunds to consumer victims and to enjoin illegal activity is severely threatened.” All five FTC Commissioners signed the letter.

The FTC explained that adverse rulings by two federal appeals courts are constraining the agency from seeking relief for victims and punishment for violators of the FTC Act in the federal courts of those two circuits, while elsewhere defendants are either asking courts for a similar ruling or using delaying tactics in the hopes the Supreme Court upholds the two federal appeals courts:

  • …[C]ourts of appeals in the Third and Seventh Circuits have recently ruled that the agency cannot obtain any monetary relief under Section 13(b). Although review in the Supreme Court is pending, these lower court decisions are already inhibiting our ability to obtain monetary relief under 13(b). Not only do these decisions already prevent us from obtaining redress for consumers in the circuits where they issued, prospective defendants are routinely invoking them in refusing to settle cases with agreed-upon redress payments.
  • Moreover, defendants in our law enforcement actions pending in other circuits are seeking to expand the rulings to those circuits and taking steps to delay litigation in anticipation of a potential Supreme Court ruling that would allow them to escape liability for any monetary relief caused by their unlawful conduct. This is a significant impediment to the agency’s effectiveness, its ability to provide redress to consumer victims, and its ability to prevent entities who violate the law from profiting from their wrongdoing.

Earlier in the year, by a split vote across party lines, the Federal Trade Commission (FTC) asked a United States (U.S.) appeals court to reconsider a ruling that overturned a lower court’s holding that Qualcomm had violated antitrust laws in the licensing of its technology and patents vital to smartphones. Republican Commissioners Noah Joshua Phillips and Christine Wilson voted against filing the brief asking for a rehearing, with Chair Joseph Simons joining the two Democratic Commissioners, Rohit Chopra and Rebecca Kelly Slaughter, in voting to move forward. This case could have major ramifications for antitrust law and the technology sector in the U.S. and for the 5G market, as Qualcomm is a major player in developing and deploying the technology necessary for this coming upgrade in wireless communications, which is expected to bring a host of improvements.

In the brief, the FTC argued the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) did not disagree with the District Court’s factual findings of anticompetitive conduct but rather took issue with the lack of “a cogent theory of anticompetitive harm.” The FTC argued the case should be reconsidered on three grounds:

  • The Ninth Circuit ruled on the basis of formal labels and not economic substance, contrary to established Supreme Court law;
  • Facially neutral surcharges by one market participant on its rivals are, in fact, an unequal and exclusionary burden on rivals, conduct the Supreme Court has ruled violates antitrust law; and
  • Harm to customers is indeed a central focus and concern of antitrust cases, and ruling that this harm is outside relevant antitrust markets is also a misreading of established law.

As noted, the Ninth Circuit reversed a U.S. District Court’s decision that Qualcomm’s licensing practices violated the Sherman Antitrust Act. Specifically, the lower court held these practices “have strangled competition in the Code Division Multiple Access (CDMA) and premium Long-Term Evolution (LTE) modem chip markets for years, and harmed rivals, original equipment manufacturers (OEMs), and end consumers in the process.” Consequently, the court found “an unreasonable restraint of trade under § 1 of the Sherman Act and exclusionary conduct under § 2 of the Sherman Act” and that Qualcomm is liable under the FTC Act, as “unfair methods of competition” under the FTC Act include “violations of the Sherman Act.”

However, the Ninth Circuit disagreed, overturned the district court and summarized its decision:

  • [We] began by examining the district court’s conclusion that Qualcomm had an antitrust duty to license its standard essential patents (SEPs) to its direct competitors in the modem chip markets pursuant to the exception outlined in Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585 (1985). [We] held that none of the required elements for the Aspen Skiing exception were present, and the district court erred in holding that Qualcomm was under an antitrust duty to license rival chip manufacturers. [We] held that Qualcomm’s OEM-level licensing policy, however novel, was not an anticompetitive violation of the Sherman Act.
  • [We] rejected the FTC’s contention that even though Qualcomm was not subject to an antitrust duty to deal under Aspen Skiing, Qualcomm nevertheless engaged in anticompetitive conduct in violation of § 2 of the Sherman Act. [We] held that the FTC did not satisfactorily explain how Qualcomm’s alleged breach of its contractual commitment itself impaired the opportunities of rivals. Because the FTC did not meet its initial burden under the rule of reason framework, [We were] less critical of Qualcomm’s procompetitive justifications for its OEM-level licensing policy—which, in any case, appeared to be reasonable and consistent with current industry practice. [We] concluded that to the extent Qualcomm breached any of its fair, reasonable, and nondiscriminatory (FRAND) commitments, the remedy for such a breach was in contract or tort law.

The FTC has a number of significant outstanding rulemakings.

In early 2019, the FTC released notices of proposed rulemaking (NPRM) for two of the regulations with which some financial services companies must comply: the Safeguards Rule and the Privacy of Consumer Financial Information Rule (Privacy Rule).

The reassessment of the Safeguards Rule began in 2016 when the FTC asked for comments. The proposed Safeguards Rule demonstrates the agency’s thinking on what data security regulations should look like, which is important because the FTC is the agency most likely to become the enforcer and writer of any new data security or privacy regulations. Notably, the new Safeguards regulation would require the use of certain best practices, such as encrypting data in transit and at rest and requiring the use of multi-factor authentication “for any individual accessing customer information.” Moreover, the other financial services agencies charged with implementing the section of Gramm-Leach-Bliley (GLB) that requires financial services companies to safeguard customers’ information (e.g., the Federal Reserve Board or the Comptroller of the Currency) may follow suit.
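
To make the flavor of these requirements concrete, here is a minimal sketch of at-rest encryption in Python using the cryptography library; the proposed rule does not mandate any particular library, algorithm, or key management scheme, so treat the choices below as illustrative assumptions rather than compliance guidance.

```python
# Illustrative only: the proposed Safeguards Rule requires encryption of
# customer information but does not prescribe a library or algorithm.
from cryptography.fernet import Fernet  # pip install cryptography

# In practice, the key would live in a key management service, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

customer_record = b'{"name": "Jane Doe", "account": "12345678"}'

# Encrypt before writing to disk (data at rest)...
token = cipher.encrypt(customer_record)

# ...and decrypt only after an authenticated request for the data.
assert cipher.decrypt(token) == customer_record
```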

In the proposed rule, the FTC noted that its changes to the Safeguards Rule would “include more detailed requirements for the development and establishment of the information security program required under the Rule…[and] [t]hese amendments are based primarily on the cybersecurity regulations issued by the New York Department of Financial Services, 23 NYCRR 500 (“Cybersecurity Regulations”), and the insurance data security model law issued by the National Association of Insurance Commissioners (“Model Law”).”

In the Safeguards Rule proposal, the FTC explained “[t]he proposal contains five main modifications to the existing Rule.”

  • First, it adds provisions designed to provide covered financial institutions with more guidance on how to develop and implement specific aspects of an overall information security program, such as access controls, authentication, and encryption.
  • Second, it adds provisions designed to improve the accountability of financial institutions’ information security programs, such as by requiring periodic reports to boards of directors or governing bodies.
  • Third, it exempts small businesses from certain requirements.
  • Fourth, it expands the definition of “financial institution” to include entities engaged in activities that the Federal Reserve Board determines to be incidental to financial activities. Such a change would add “finders”–companies that bring together buyers and sellers of a product or service–within the scope of the Rule.
  • Finally, the Commission proposes to include the definition of “financial institution” and related examples in the Rule itself rather than incorporate them by reference from a related FTC rule, the Privacy of Consumer Financial Information Rule.

The FTC’s Safeguards Rule applies to the following entities, among others:

[M]ortgage lenders, “pay day” lenders, finance companies, mortgage brokers, account servicers, check cashers, wire transferors, travel agencies operated in connection with financial services, collection agencies, credit counselors and other financial advisors, tax preparation firms, non-federally insured credit unions, investment advisors that are not required to register with the Securities and Exchange Commission, and entities acting as finders.

The FTC explained that it “is proposing to expand the definition of “financial institution” in both the Privacy Rule and the Safeguards Rule to specifically include so-called “finders,” those who charge a fee to connect consumers who are looking for a loan to a lender…[because] [t]his proposed change would bring the Commission’s Rule in line with other agencies’ interpretation of the Gramm Leach Bliley Act.”

As part of its regular review of its regulations, the FTC asked for input on its Health Breach Notification Rule (HBN Rule), promulgated in 2010 per direction in the “American Recovery and Reinvestment Act” (ARRA) (P.L. 111-5). When enacted, Congress expected this regulation to be temporary, as policymakers thought a national breach notification statute would shortly be enacted that would make the FTC’s regulations superfluous, but that has obviously not happened. Hence, the FTC continues to have regulations governing breach notification and security of some health information for entities not subject to the “Health Insurance Portability and Accountability Act” (HIPAA)/“Health Information Technology for Economic and Clinical Health Act” (HITECH Act) regulations, which generally cover healthcare providers and their business associates. Incidentally, it is possible the FTC’s HBN Rule would govern breaches of vendors involved with COVID-19 contact tracing.

As explained in the current regulation, the HBN Rule “applies to foreign and domestic vendors of personal health records (PHR), PHR related entities, and third party service providers, irrespective of any jurisdictional tests in the Federal Trade Commission (FTC) Act, that maintain information of U.S. citizens or residents.” This rule, however, “does not apply to HIPAA-covered entities, or to any other entity to the extent that it engages in activities as a business associate of a HIPAA-covered entity.”

And yet, the FTC conceded it “has not had occasion to enforce its Rule because, as the PHR market has developed over the past decade, most PHR vendors, related entities, and service providers have been HIPAA-covered entities or “business associates” subject to the Department of Health and Human Services’ (HHS) rule.” The FTC nonetheless foresees utility in and need for the HBN Rule, for “as consumers turn towards direct-to-consumer technologies for health information and services (such as mobile health applications, virtual assistants, and platforms’ health tools), more companies may be covered by the FTC’s Rule.” Accordingly, the FTC “now requests comment on the HBN Rule, including the costs and benefits of the Rule, and whether particular sections should be retained, eliminated, or modified.”

In terms of how the HBN Rule functions, the FTC explained:

  • The Recovery Act directed the FTC to issue a rule requiring these entities, and their third-party service providers, to provide notification of any breach of unsecured individually identifiable health information.
  • Accordingly, the HBN Rule requires vendors of PHRs and PHR related entities to provide: (1) Notice to consumers whose unsecured individually identifiable health information has been breached; (2) notice to the media, in many cases; and (3) notice to the Commission.
  • The Rule also requires third party service providers (i.e., those companies that provide services such as billing or data storage) to vendors of PHRs and PHR related entities to provide notification to such vendors and entities following the discovery of a breach.
  • The Rule requires notice “without unreasonable delay and in no case later than 60 calendar days” after discovery of a data breach. If the breach affects 500 or more individuals, notice to the FTC must be provided “as soon as possible and in no case later than ten business days” after discovery of the breach. The FTC makes available a standard form for companies to use to notify the Commission of a breach. The FTC posts a list of breaches involving 500 or more individuals on its website. This list only includes two breaches, because the Commission has predominantly received notices about breaches affecting fewer than 500 individuals.
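
As a rough illustration of how these deadlines interact, the short Python sketch below computes the outer notification dates after a breach is discovered; the business-day counting is simplified (weekends only, no federal holidays), and the function is mine, not the FTC's methodology.

```python
from datetime import date, timedelta

def notification_deadlines(discovered: date, affected: int):
    """Sketch of the HBN Rule's outer time limits (simplified).

    Individual notice: "without unreasonable delay and in no case later
    than 60 calendar days" after discovery.  FTC notice, when 500 or
    more individuals are affected: "as soon as possible and in no case
    later than ten business days" after discovery.  This sketch skips
    weekends only and ignores federal holidays.
    """
    individual_deadline = discovered + timedelta(days=60)

    ftc_deadline = None
    if affected >= 500:
        day, business_days = discovered, 0
        while business_days < 10:
            day += timedelta(days=1)
            if day.weekday() < 5:  # Monday=0 through Friday=4
                business_days += 1
        ftc_deadline = day
    return individual_deadline, ftc_deadline

print(notification_deadlines(date(2020, 11, 2), affected=750))
```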

Moreover, per the current regulations, the FTC may treat violations of the HBN Rule as unfair or deceptive practices, permitting the agency to seek civil fines of up to $43,000 per violation.


House Energy and Commerce Releases Privacy Discussion Draft

As rumored, in mid-December the House Energy and Commerce Committee released its privacy discussion draft, the result of a bipartisan effort led by Consumer Protection & Commerce Subcommittee Chair Jan Schakowsky (D-IL) and Ranking Member Cathy McMorris Rodgers (R-WA). The subcommittee is sharing the draft with stakeholders and, according to media accounts, is asking for feedback by January 24. However, the discussion draft leaves a number of key sections in brackets, indicating areas still under discussion, chief among them state preemption and a private right of action, the two sticking points that have bedeviled the crafting of bipartisan legislation thus far. Still, there seems to be broad agreement on much of the structure of a bill, with the Federal Trade Commission (FTC) being the primary enforcer and being granted rulemaking authority to implement the new regime.

Entities covered by the new bill are those already subject to FTC jurisdiction plus common carriers and non-profits. The personal information subject to the bill is “any information about an individual possessed by a covered entity that is linked or reasonably linkable to a specific individual [or consumer device;].”

The FTC must promulgate regulations that “require each covered entity to establish and implement reasonable policies, practices, and procedures regarding the processing of covered information.” Such privacy policies should be designed to

  • comply with applicable privacy laws;
  • consider the mitigation of privacy risks throughout every stage of the covered entity’s products and services, including their design, development, launch, and implementation; and
  • implement reasonable training and safeguards within the covered entity to promote compliance with all privacy laws applicable to covered information the covered entity processes and mitigate privacy risks;

The privacy policy must be “publicly available at all times and in a machine-readable format…in a manner that is clear, easily understood, and written in plain and concise language.”
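
The draft does not define what “machine-readable” means in practice, so the Python sketch below, which emits a JSON rendering of a privacy policy, is purely hypothetical; every field name is an assumption of mine rather than anything in the bill.

```python
import json

# Hypothetical schema: the discussion draft requires machine readability
# but specifies no field names or file format.
privacy_policy = {
    "entity": "Example Co.",
    "categories_of_covered_information": ["email address", "device identifiers"],
    "processing_purposes": ["order fulfillment", "fraud prevention"],
    "third_party_disclosures": ["payment processors"],
    "rights_request_contact": "privacy@example.com",
}

print(json.dumps(privacy_policy, indent=2))
```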

A person could ask and receive an answer as to whether a covered entity is processing her information. In the same vein, a person could also access the personal information held by the covered entity, the categories of personal information processed, any sources from which this personal information was collected, and other details. People would have the right to correct personal information held by a covered entity. Entities with more than $250 million in revenue that process the personal information of more than 10,000 people a year would need to satisfy additional requirements. People could also ask a covered entity to delete covered information.

The bill limits data retention. Generally, “a covered entity shall not keep, retain, or otherwise store covered information for longer than is reasonably necessary for the purposes for which the covered information is processed” subject to a number of exceptions, including complying with legal requirements, for security purposes, preventing risks to health and safety, and other reasons.

There are detailed limits on the processing of personal information obtained by a covered entity, and the FTC would be required to promulgate regulations fleshing them out. Generally, processing may not occur without the consent of a person, but “[c]onsent for the processing of covered information is implied to the extent the processing is consistent with the reasonable consumer expectations within the context of the interaction between the covered entity and the individual.” There is bracketed language on allowing people to opt out of first party marketing. Any data processing that is not consistent with a reasonable person’s expectations, however, would require affirmative, express consent, and the FTC would need to promulgate regulations spelling out what constitutes affirmative express consent. Certain data processing would be prohibited, principally obtaining consent under false pretenses. Some covered information may not be processed, subject to certain exceptions, including biometric information, health information, geolocation information, and other specified types.

The FTC would promulgate regulations that would spell out the requirements covered entities and processors must enshrine in agreements in disclosing and processing personal information. Moreover, “[a] covered entity shall not disclose covered information to a third party unless the covered entity obtains prior express, affirmative consent of the individual to whom the covered information pertains.”

The FTC will conduct a notice and comment rulemaking to set data security standards for covered entities. Within one year of enactment, the FTC “shall require each covered entity and processor to implement and maintain reasonable administrative, technical, and physical security measures, policies, practices, and procedures to protect and secure covered information against unauthorized access and acquisition.” These standards will be geared to the entity’s activities, the sensitivity of the data being held and processed, the cost of implementing safeguards, and the currently available safeguards. However, this legislative direction to the FTC is “limited to the provisions included in this section.” In the event of a breach, a covered entity must notify the FTC and submit its security policies, which shall be exempt from Freedom of Information Act (FOIA) requests.

The bill bans take-it-or-leave-it consent arrangements and financial incentives for agreeing to data processing. Specifically, “[a] covered entity shall not condition the provision of a product or service or the quality of customer experience to any individual on an individual’s agreement to waive any rights guaranteed by this Act [or to the individual’s consent to the processing of the individual’s covered information other than information necessary to provide the product or service].” Note the brackets in the original text, suggesting the final clause in the provision is subject to final negotiation. Likewise, a covered entity may not offer “a financial incentive in exchange for an individual’s agreement to waive any rights guaranteed by this Act [or to the individual’s consent to the processing of the individual’s covered information other than information necessary to provide the product or service].”

The bill would make it unlawful “for any covered entity to process covered information…in a manner that discriminates against or makes an economic opportunity unavailable or offered on different terms, on the basis of a person’s or class of persons’ race, color, religion, national origin, sex, age, or disability” concerning a range of areas, including housing, employment, credit, insurance, and others. It shall also be unlawful “for a covered entity to process covered information in a manner that segregates, discriminates in, or otherwise makes unavailable the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation on the basis of a person’s or class of persons’ race, color, religion, national origin, sex, age, or disability.” Additionally, the burden of proving such discrimination would be shifted to covered entities in that they would need to prove their processing is not discriminatory.

Smaller covered entities may be able to use “self-regulatory guidelines governing the processing of covered information by a covered entity” approved and monitored by the FTC. Eligible entities include those with $25 million or less in annual revenue, that process 50,000 or fewer people’s personal information a year, and that derive 50% or less of their revenue from selling personal information. The FTC must approve any such guidelines, and any future modifications, before use and may withdraw approval if the guidelines no longer adhere to the Act.

Information brokers would need to identify themselves as such on their websites and register with the FTC.

The FTC would need to establish a Bureau of Privacy to enforce this Act and all other data security and privacy laws within the FTC’s purview, including Section 5 of the FTC Act and the Children’s Online Privacy Protection Act (COPPA). The FTC would be able to fine covered entities for violations in the first instance of up to just over $42,000 per violation. The FTC would be free to seek all the relief it currently can under Section 5, including injunctions, restitution, disgorgement of ill-gotten gains, and other types of remedies. There is language in brackets that would cap civil penalties, but that would seem to be an item under discussion. State attorneys general would also be able to bring actions and seek all the relief the FTC can, and there is a subsection title in brackets, “A Private Right of Action,” with no provisions, which is not surprising given the opposition of Republicans to such a means of relief. Similarly, there is a title with no language regarding state preemption.


A Privacy Bill A Week: Online Privacy Act of 2019

Last week, we dived into the last White House proposal on privacy, the discussion draft of the “Consumer Privacy Bill of Rights Act of 2015” released by the Obama Administration. This bill was released in conjunction with a report on privacy issues and then proceeded to go nowhere, as there was scant appetite on Capitol Hill to legislate on privacy. Let us flash forward to the present, where privacy has moved to the fore, and the first of the long-anticipated privacy bills has been released.

Representatives Anna Eshoo (D-CA) and Zoe Lofgren (D-CA) unveiled the “Online Privacy Act of 2019” (H.R. 4978), which they started working on earlier this year when it seemed clear that the House Energy and Commerce Committee’s effort to craft a bill had stalled, as Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky’s (D-IL) timeline for when a bill might be unveiled continued to slip. It must be said that this bill is going to be a non-starter with Republicans in the Senate and White House, not least because it gives consumers a private right of action, creates a new federal agency to police data security, and does not preempt state statutes. Moreover, given Eshoo’s close political relationship with Speaker Nancy Pelosi (D-CA), this bill may be viewed in two contexts: 1) Pelosi may approve of the substance of the bill; and 2) in not trying to dissuade Eshoo and Lofgren, the Speaker may have intended to prod House Energy and Commerce to produce a bill. Finally, it bears note that Eshoo challenged current House Energy and Commerce Chair Frank Pallone Jr. when former Representative Henry Waxman (D-CA) stepped down as the top Democrat, and relations between the two reportedly remain affected by the bruising contest. In any event, it is a comprehensive bill that takes a number of new approaches to some aspects of privacy and data security, and Eshoo and Lofgren also released a one-page summary and a section-by-section summary.

Big picture, the bill would create a new agency to oversee a new privacy and security regime, the United States Digital Privacy Agency (DPA), meaning that, unlike virtually every other privacy bill, the Federal Trade Commission (FTC) would not be the primary enforcer. However, there may still be a role for the FTC to play, as discussed below. The bill unites privacy with data security, which has been a policy preference of a number of high-profile Democrats, including Schakowsky and Senate Commerce, Science, and Transportation Committee Ranking Member Maria Cantwell (D-WA). Republicans have been lukewarm on this notion, however. Moreover, express, affirmative consent would generally be needed before most businesses could collect, process, maintain, or disclose a person’s personal information, subject to a number of exceptions. Businesses would need to state clearly and concisely their privacy and security policies, be responsive to people exercising their rights vis-à-vis their data, and closely supervise the service providers and third parties to whom personal information is disclosed.

As always, it is crucial to digest the key definitions, for these will inform the scope of the Act. The entities covered by the “Online Privacy Act of 2019” are a broad group spanning most businesses in the U.S.: “a person who…intentionally collects, processes, or…maintains personal information; and…sends or receives such personal information over the internet or a similar communications network.” There are two crucial exemptions: 1) people not engaged in commercial activities and those engaging in commercial activities considered “de minimis;” and 2) small businesses, which are defined as entities not selling personal information, earning less than 50% of revenue from processing personal information for targeted or behavioral advertising, not having held the personal data of 250,000 or more people in the last six months, having 200 or fewer employees, and earning $25 million or less in gross revenue in the preceding year. If a small business’s status changes, and it crosses the threshold into being a covered entity, then there is a nine-month grace period before it must begin complying with the Act.
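
Because the small business exemption turns on several concrete thresholds, a compact sketch in Python can make the logic explicit; the function and field names are mine, and the six-month lookback is simplified to a single supplied figure.

```python
from dataclasses import dataclass

@dataclass
class Business:
    sells_personal_info: bool
    behavioral_ad_revenue_share: float   # fraction of revenue, 0.0 to 1.0
    people_data_held_last_6_months: int
    employees: int
    gross_revenue_prior_year: float      # dollars, preceding year

def is_exempt_small_business(b: Business) -> bool:
    """Sketch of the Online Privacy Act's small business exemption.

    All five conditions must hold; names and structure are
    illustrative, not the bill's text.
    """
    return (
        not b.sells_personal_info
        and b.behavioral_ad_revenue_share < 0.50
        and b.people_data_held_last_6_months < 250_000
        and b.employees <= 200
        and b.gross_revenue_prior_year <= 25_000_000
    )
```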

Besides covered entities, two other classes of entities figure prominently in the bill: “service providers” and “third-parties.” A “service provider” is a “covered entity” that generally “processes, discloses, or maintains personal information, where such person does not process, disclose, or maintain the personal information other than in accordance with the directions and on behalf of another covered entity.” A third-party is “a person…to whom such covered entity disclosed personal information; and…is not…such covered entity…a subsidiary or corporate affiliate of such covered entity…or…a service provider of such covered entity.” Consequently, almost all disclosures of personal information made by a covered entity would likely be to either a service provider or a third party, the latter of which can be a covered entity itself.

The bill defines “data breach” in fairly standard terms as “unauthorized access to or acquisition of personal information or contents of communications maintained by such covered entity.” This term has evolved over the last decade to include mere access as opposed to exfiltration or acquisition. The Act coins a new term to cover some possible privacy violations: “data sharing abuse.” This means “processing, by a third party, of personal information or contents of communications disclosed by a covered entity to the third party, for any purpose other than—

  • a purpose specified by the covered entity to the third party at the time of disclosure; or
  • a purpose to which the individual to whom the information relates has consented.”

Personal information is simply and very comprehensively defined as “any information maintained by a covered entity that is linked or reasonably linkable to a specific individual or a specific device, including de-identified personal information and the means to behavioral personalization created for or linked to a specific individual.” Moreover, “personal information” does not include “publicly available information related to an individual” or “information derived or inferred from personal information, if the derived or inferred information is not linked or reasonably linkable to a specific individual.” Under this definition, is there an inadvertent loophole created whereby information not maintained by a covered entity is not personal information for purposes of this Act, and therefore, such information would be beyond many of the requirements of the Act?

The bill uses the definition of “contents” of a communication from the “Electronic Communications Privacy Act” (ECPA) (P.L. 99-508) that is used for wiretapping and electronic surveillance, among other purposes, which shows the intent to allow the legal structure for government surveillance to coexist frictionlessly alongside the new privacy regime. However, most metadata, which includes the call detail records currently being debated in the reauthorization of National Security Agency authority, would be covered at a lesser level by this Act, meaning private sector entities could collect, process, maintain, and disclose metadata.

De-identified information is generally data “that cannot reasonably identify, relate to, describe, reference, be capable of being associated with, or be linked, directly or indirectly, to a particular individual or device.” However, this definition stipulates further conditions that must be met: “provided that a business that uses de-identified information—

  • has de-identified the personal information using best practices for the types of data the information contains;
  • has implemented technical safeguards that prohibit re-identification of the individual with whom the information was linked;
  • has implemented business processes that specifically prohibit re-identification of the information;
  • has implemented business processes to prevent inadvertent release of de-identified information; and
  • makes no attempt to re-identify the information.”

This language is going in the right direction, for de-identification of personal information will likely need to be permanent or as close to permanent as possible to forestall the temptation some entities will invariably face to re-identify old personal information and derive value from it.

The “Online Privacy Act of 2019” spells out what constitutes a “privacy harm” and a “significant privacy harm,” two key definitions in helping covered entities gauge the sensitivity of certain information and their legal obligations in handling such information. “Privacy harm” is “adverse consequences or potential adverse consequences to an individual or society arising from the collection, processing, maintenance, or disclosure of personal information.” Such harms are identified in the definition and worth quoting in full:

  • direct or indirect financial loss or economic harm;
  • physical harm;
  • psychological harm, including anxiety, embarrassment, fear, and other demonstrable mental trauma;
  • adverse outcomes or decisions with respect to the eligibility of an individual for rights, benefits, or privileges in employment (including hiring, firing, promotion, demotion, and compensation), credit and insurance (including denial of an application or obtaining less favorable terms), housing, education, professional certification, or the provision of health care and related services;
  • stigmatization or reputational harm;
  • price discrimination;
  • other adverse consequences that affect the private life of an individual, including private family matters and actions and communications within the home of such individual or a similar physical, online, or digital location where such individual has a reasonable expectation that personal information will not be collected, processed, or retained;
  • chilling of free expression or action of an individual, group of individuals, or society generally, due to perceived or actual pervasive and excessive collection, processing, disclosure, or maintenance of personal information by a covered entity;
  • impairing the autonomy of an individual, group of individuals, or society generally; and
  • other adverse consequences or potential adverse consequences, consistent with the provisions of this Act, as determined by the Director

This list of privacy harms is as expansive as, and perhaps even more expansive than, the list in almost any other bill analyzed. Additionally, the list is not comprehensive, and the DPA may add other harms.

A related, crucial definition is that of “significant privacy harm” which is “adverse consequences to an individual arising from the collection, processing, maintenance, or disclosure of personal information, limited” to three specific privacy harms:

  • direct or indirect financial loss or economic harm;
  • physical harm; and
  • adverse outcomes or decisions with respect to the eligibility of an individual for rights, benefits, or privileges in employment (including hiring, firing, promotion, demotion, and compensation), credit and insurance (including denial of an application or obtaining less favorable terms), housing, education, professional certification, or the provision of health care and related services.”

A term related to these two is “protected class,” which is “the actual or perceived race, color, ethnicity, national origin, religion, sex (including sexual orientation and gender identity), familial status, or disability of an individual or group of individuals.”

The Act intends to create a safe harbor for many of the notice and consent requirements that will encourage greater use of encryption or similar methods that would make personal information and the contents of communications very hard to access. To this end, the bill defines a term “privacy preserving computing” as “the collecting, processing, disclosing, or maintaining of personal information that has been encrypted or otherwise rendered unintelligible using a means that cannot be reversed by a covered entity, or a covered entity’s service provider,” subject to further requirements. Additionally, the DPA “may determine that a methodology of privacy preserving computing is insufficient for the purposes of this definition,” so covered entities and service providers would not be free to deem any security measure “privacy preserving computing.”
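
By way of illustration only, one family of techniques that might fit this definition is keyed one-way pseudonymization, where the transformation cannot be reversed by the covered entity because it never holds the key; whether any particular method qualifies would be the DPA’s call, and everything in the Python sketch below is an assumption.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Keyed, one-way transformation of an identifier.

    Illustrative only: whether a technique counts as "privacy
    preserving computing" under the bill is left to the DPA.  For the
    mapping to be irreversible by the covered entity, the key would
    have to be held by someone other than the covered entity or its
    service provider.
    """
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

# Records keyed by the pseudonym can still be joined and counted
# without exposing the underlying identifier.
print(pseudonymize("jane.doe@example.com", key=b"held-by-another-party"))
```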

The Act also makes clear that a covered entity’s sharing of personal information with a third party for any sort of remuneration will be a sale or selling, and hence the definition is “the disclosure of personal information for monetary consideration by a covered entity to a third party for the purposes of processing, maintaining or disclosing such personal information at the third party’s discretion.”

Now, let’s turn to the substance of the Act. The bill makes clear that no one may waive its requirements, and any contracts or instruments to do so are null and void. Additionally, no one may agree to any pre-dispute arbitration under this bill, meaning that no person will be forced to accept mandatory arbitration as is often the case when one agrees to the terms of service for an application or to use a device.

The “Online Privacy Act of 2019” would take effect one year after enactment, but there is curious language making clear the effective date does not “affect[] the authority to take an action expressly required by a provision of this Act to be taken before the effective date.”

The Act carves out journalism from the privacy and security requirements of the bill to the extent an organization like The New York Times is engaged in bona fide journalism as opposed to other commercial activities such as selling photographs, the latter of which may qualify such an entity as a covered entity. There is a definition of journalism, and one wonders if companies like Facebook or Google will try to get some of their activities exempted on the basis that they qualify as journalism.

The Act adds a section to the federal criminal code on extortion and threats, titled “Disclosure of personal information with the intent to cause harm,” that makes a criminal offense of the actual or attempted disclosure of personal information to threaten, intimidate, or harass another person in order to commit or incite an act of violence. It is also a criminal offense to do so if a person is placed in reasonable fear of death or serious bodily injury. Violators would face fines and prison sentences of up to five years in addition to any state liability. This would seem to cover a number of crimes ushered in by the digital age: doxing, revenge porn, and making public a person’s private information in order to intimidate.

Title I of the “Online Privacy Act of 2019” would provide individuals with a number of rights regarding their personal information and how it may and may not be used by covered entities. First, people would receive a right of access, which entails each covered entity making available a reasonable mechanism by which a person may find out the categories of personal information and contents of communications being held and those obtained from third parties. Moreover, this information must identify all the third parties, subsidiaries, and affiliates to whom personal information has been disclosed. Also, individuals must be able to easily access a clear and concise description of the commercial and business purposes for which the covered entity collects, maintains, processes, and discloses personal information. Finally, covered entities must provide a list of all automated decision-making processes they employ and those a person may ask that a human being perform instead. Covered entities may sidestep a number of these requirements by making this information publicly available on their websites in a conspicuous location, obviating the need for people to make requests.

Individuals would also get a right of correction allowing for the use of a reasonable mechanism to dispute the accuracy and completeness of personal information being held by a covered entity, but only if this personal information is processed in such fashion as to “increase reasonably foreseeable significant privacy harms.” This language suggests that data processing that would result in mere privacy harms, say those that would impact one’s personal, familial communications, would not need to be corrected or completed. In any event, covered entities have the option to correct or complete as requested, tell the requester the information is already complete or correct, respond that there is insufficient information to allow for the correction or completion, or deny the request on the basis of exemptions discussed below. Small businesses are exempted from this responsibility. Of course, what ultimately is determined to be a significant privacy harm will be the result of case-by-case adjudication by the new DPA, likely in court.

People could ask that their personal information be deleted, including those data acquired from third parties or inferred by the covered entity. Again, on the basis of Section 109 exemptions, this request could be denied.

Individuals will receive a right of portability, and in order to effectuate this right, the DPA must annually publish in the Federal Register the categories of online services and products determined to be portable. However, before a final list is published, the DPA must release an initial list of portable services and products and accept comments. Once it has been established which services and products are portable, covered entities must allow individuals to request and receive their personal information and/or contents of communications for purposes of taking their business to a competitor. There is also language contemplating that a person may ask one covered entity to transmit this information directly to another.

Upon request, covered entities must have humans make decisions instead of an “automated processing of personal information of an individual, if such processing increases reasonably foreseeable significant privacy harms for such individual.”

Before a covered entity may engage in behavior personalization, it must obtain express, affirmative consent from a person to collect, process, maintain or disclose personal information for this purpose. Behavior personalization is a term defined in the Act and “means the processing of an individual’s personal information, using an algorithm, model, or other means built using that individual’s personal information collected over a period of time, or an aggregate of the personal information of one or more similarly situated individuals and designed to—

  • alter, influence, guide, or predict an individual’s behavior;
  • tailor or personalize a product or service; or
  • filter, sort, limit, promote, display or otherwise differentiate between specific content or categories of content that would otherwise be accessible to the individual.”

This right seems squarely aimed at the use of people’s data to show them advertising based on their browsing history, searches, location, occupation, and the huge volumes of other data collected daily. Moreover, if a person denies such consent, then the product or service must be provided without the behavior personalization unless this is infeasible, at which point only the core service or product need be provided. And, if it is infeasible to provide core services or products, then a covered entity may deny the product or service altogether. It is likely covered entities will seek to define “infeasible” as broadly as possible in order to leverage consent for their products and services so that they may continue the lucrative practice of personalized advertising.

A person would also get the right to be informed, which entails that any covered entity that begins collecting personal information on a person with whom it has no direct relationship must inform that person in writing within 30 days.

There would be established a right to impermanence that would limit the holding of a person’s personal information to no more time than she consented to. Covered entities must obtain affirmative, express consent to maintain categories of personal information either until the original purpose for collection is fulfilled or until a certain date. And yet, there is an exemption for implied consent when long-term maintenance of personal information is an obvious, core feature of a product or service and these data are maintained only to provide the product or service.

As mentioned, Section 109 details the exemptions that may allow a covered entity to disregard the rights bestowed on people under the “Online Privacy Act of 2019,” which include

  • Detecting, responding to, or preventing security incidents or threats.
  • Protecting against malicious, deceptive, fraudulent, or illegal activity.
  • Complying with specific law enforcement requests or court orders.
  • Protecting a legally recognized privilege or other legal right.
  • Protecting public safety.
  • Collection, processing, or maintenance by an employer pursuant to an employer-employee relationship of records about employees or employment status, except—
    • where the information would not be reasonably expected to be collected in the context of an employee’s regular duties; or
    • was disclosed to the employer by a third party.
  • Preventing prospective abuses of a service by an individual whose account has been previously terminated.
  • Routing a communication through a communications network or resolving the location of a host or client on a communications network.
  • Providing transparency in advertising or origination of user generated content.

However, the covered entity will need to have “technical safeguards and business processes that limit the collection, processing, maintaining, or disclosure of such personal information” to the aforementioned purposes.

This section also details the reasons why a covered entity may decline a request made pursuant to one of the rights listed in Title I:

  • A requester’s identity cannot be confirmed
  • If the request would create a legitimate risk to the privacy, security, safety, or other rights of another person
  • A legitimate risk to free expression
  • In regard to completing or deleting requests, if doing so would stop a transaction or process set into motion but not completed per a person’s request or such a request would undermine the integrity of a legally significant transaction

Service providers are altogether exempted from Title I, and covered entities employing privacy preserving computing are exempted from certain rights of people: the right of access, the right to human review of automated decisions, and the right to individual autonomy. However, this exemption applies only to the data processing performed with privacy preserving computing.

Covered entities must reply to requests within 30 days and may not normally charge a fee for fulfilling them; if requests are determined to be excessive or unfounded, however, a covered entity may charge a fee subject to DPA approval.

Title II of the “Online Privacy Act of 2019” details the requirements placed on covered entities, service providers and third parties.

Covered entities must have a reasonable, articulable basis for collecting, processing, maintaining, and disclosing personal information related to the entity’s reasonable business needs. The covered entity must keep no more personal information than is necessary to effectuate the business or commercial purpose, and these needs are to be balanced against privacy intrusions, possible privacy harms, and the reasonable expectations of the people whose information is in question. Covered entities likewise should not collect more personal information than is necessary to carry out their business purposes nor hold these data longer than necessary. However, covered entities may engage in “ancillary” collection, processing, maintenance, and disclosure of personal information in certain circumstances subject to certain requirements. For example, if these activities are substantially similar to the original ones, the same type of personal information is being collected, and no privacy harms would result, then notice and consent are not required. However, notice is required for ancillary activities if:

  • The ancillary activities the covered entity is engaged in are similar to the original activities and there is a privacy harm risk;
  • The ancillary activities are not substantially similar and there is no risk of privacy harms; or
  • The activities are substantially similar and would result in privacy harm, but privacy preserving computing is used.

Consequently, notice and consent would be required for any other ancillary activities that do not fall into those categories.
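
Reduced to a decision rule, the ancillary-use logic in the preceding paragraphs looks roughly like the Python sketch below; the function and its return strings are my own shorthand for the bill’s categories, not its text.

```python
def ancillary_obligation(substantially_similar: bool, privacy_harm_risk: bool) -> str:
    """Rough decision rule for ancillary collection, processing,
    maintenance, or disclosure of personal information.

    An illustrative reading of the discussion above; the bill also
    notes that a substantially similar activity carrying a privacy
    harm risk still requires only notice where privacy preserving
    computing is used.
    """
    if substantially_similar and not privacy_harm_risk:
        return "no notice or consent required"
    if substantially_similar or not privacy_harm_risk:
        return "notice required"
    return "notice and consent required"
```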

Covered entities would also need to limit the access of employees and contractors to personal information and the contents of communication on the basis of an articulable rationale that balances reasonable business needs, the potential for privacy harm, and the reasonable expectations of individuals. Moreover, covered entities must maintain records on all access.

There is a requirement that covered entities cannot collect any personal information unless they are in compliance with the Act; this requirement, however, does not extend to processing or maintaining personal information.

The disclosure of personal information by covered entities to third parties is limited to situations in which a person consents. And any such consent is only valid after a person has been notified of all the categories of third parties to whom the personal information may be disclosed, the personal information to be shared, and the business purposes for doing so. Sales of personal information would be more severely constrained. Each sale to a third party by a covered entity must be agreed to by a person. What’s more, covered entities must disclose the parameters of the original purpose for the collection of the information when selling it to a third party. Regarding privacy preserving computing and de-identified personal information, disclosure does not require consent for either designation, but consent is always required for sales of personal information.

There are provisions designed to sweep into U.S. jurisdiction players in the data ecosystem that are outside the country. The bill bars covered entities from disclosing personal information to entities not subject to U.S. jurisdiction or not in compliance with the Act. However, a safe harbor is created under which covered entities and non-U.S. entities could do business, largely premised on the latter being willing to comply with the Act, having the cash available to pay fines for violations, and evincing a willingness to be subject to DPA enforcement. The non-U.S. entity also needs to sign an agreement with the DPA. This section, however, makes clear it is not seeking to create a data localization requirement in the U.S. or to restrict a covered entity’s internal disclosures, so that Microsoft, say, could continue shuttling personal data around the globe to its servers without running afoul of this section.

Covered entities are barred from re-identifying de-identified information unless allowed by one of the Section 109 exemptions, and this prohibition attaches to third parties that may have the de-identified information. However, “qualified research entities” are not covered by this restriction, and it would be up to the DPA to determine who may be considered one.

A covered entity’s ability to collect, process, maintain, or disclose the contents of communications would be limited to situations where there is a security incident or threat, where the processing is expressly requested by one of the parties to the communication, or for other specified purposes. There is an exception for publicly available communications, and covered entities cannot stop people using their services or products from encrypting their communications. There is a safe harbor for service providers acting at the direction of a covered entity with a reasonable belief the directions comply with the Act.

Covered entities could not process personal information in a way that impinges on a person’s opportunities on the basis of a protected class in education, employment, housing, credit, healthcare, finance, and a range of other areas. The same is true of public accommodations. Moreover, the DPA is required to promulgate regulations to effectuate this section.

The use of genetic information would be severely limited; more or less, these types of data would be available only for medical testing, and even then subject to restrictions.

The DPA will establish a minimum threshold for the percentage of people who must be able to read and understand a notice or privacy policy, which covered entities would need to meet or exceed before their notices or privacy policies may be used. The DPA will establish a procedure to vet the data submitted by covered entities to show compliance with this requirement. Moreover, the DPA will make available the notices and privacy policies of all covered entities. All covered entities must make available reasonable mechanisms for people to revoke consent. And, not surprisingly, deceptive notices and privacy policies are barred.

Pursuant to these DPA-approved notices, covered entities must provide clear and concise notice of the personal information being collected, maintained, processed, or disclosed. Additionally, covered entities may not collect, process, maintain, or disclose personal information without consent if doing so creates or increases the risk of foreseeable privacy harms. Consent will be implied, however, if the personal information activities of an entity are obvious on their face and notice is provided. Privacy-preserving computing would be exempt from the notice and consent requirements.

Covered entities shall, of course, have privacy policies regarding their personal information activities, including a general description of their practices, an explanation of how individuals may exercise their Title I rights, the categories of personal information collected, the business or commercial purposes for which such data will be used, and other required elements.

Information security would be a part of the new regime covered entities must comply with. Consequently, covered entities must design and establish an information security system to protect personal information based on the sensitivity of the data and the types of activities in which the covered entity is engaged. The information security system must include

  • A written security policy
  • A means of identifying, assessing, and mitigating security vulnerabilities
  • A process for disposing of personal information securely
  • A process for overseeing those with access to personal information; and
  • A plan or protocol to respond to data breaches or data sharing abuses
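
Purely as an illustration (the bill prescribes elements, not an implementation), a covered entity might track these five statutory elements as a simple compliance checklist; all names below are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class InfoSecurityProgram:
    """Hypothetical checklist mirroring the bill's five required elements."""
    written_security_policy: bool = False   # a written security policy
    vulnerability_management: bool = False  # identify, assess, and mitigate vulnerabilities
    secure_disposal_process: bool = False   # dispose of personal information securely
    access_oversight: bool = False          # oversee those with access to personal information
    incident_response_plan: bool = False    # respond to breaches or data sharing abuses

    def missing_elements(self) -> list[str]:
        """Return any statutory elements not yet in place."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

program = InfoSecurityProgram(written_security_policy=True)
print(program.missing_elements())  # the four elements still outstanding
```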

In the event a data breach or data sharing abuse occurs, the covered entity must report it to the DPA within 72 hours of discovery unless the event is unlikely to create or increase foreseeable privacy harms. Any notification made after this 72-hour window must be accompanied by the reasons for the delay. Additionally, a covered entity must alert other covered entities from whom it received personal information, and people must be notified if there is a risk of increased privacy harms.
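
For a rough sense of the mechanics, the 72-hour clock could be checked along these lines; the function and its inputs are hypothetical, not drawn from the bill.

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # report to the DPA within 72 hours of discovery

def dpa_report_status(discovered_at: datetime, reported_at: datetime) -> str:
    """Classify a breach report against the bill's 72-hour window."""
    if reported_at - discovered_at <= REPORTING_WINDOW:
        return "timely"
    # late notifications must be accompanied by the reasons for the delay
    return "late: reasons for the delay required"

discovered = datetime(2020, 11, 2, 9, 0, tzinfo=timezone.utc)
print(dpa_report_status(discovered, discovered + timedelta(hours=80)))  # late
```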

Title III details the DPA’s structure and powers. The DPA would be headed by a Director appointed by the President and confirmed by the Senate, and the Director could appoint a Deputy Director. The Director would serve a five-year term, and the bill is silent on how many terms a Director may serve. The agency would receive broad powers to set itself up and to promulgate regulations for its operations or to regulate entities under its jurisdiction. The DPA must consult with other federal agencies and state agencies in policing privacy and security. Finally, the agency would have appropriations of $550 million per year authorized for the next five years, but the Appropriations Committees would have to actually make these funds available annually in an appropriations bill.

Title IV lays out the enforcement of the Act. The DPA could enforce the Act in two separate ways, much like the FTC’s current means of enforcement. It could initiate an internal, administrative process resulting in a cease and desist order, with the defendant given the opportunity to challenge the agency at a hearing, appeal what would presumably be an administrative law judge’s decision to the full agency, and then appeal to a U.S. Circuit Court of Appeals. Or, the DPA could file a complaint in a U.S. District Court and litigate against a defendant. In either case, the agency could seek civil penalties of up to $42,530 per person, and the total could climb quickly depending on the number of people involved. For example, in the Facebook/Cambridge Analytica case, where more than 87 million people were affected, if the DPA sought the maximum civil fine for each person, the potential liability would be roughly $3.7 trillion. It is important to note that civil penalties are calculated per person and not per violation; the latter method could yield even larger numbers, as it is easy to contemplate multiple violations per person. However, the factors a court must consider under the Act when meting out a fine would weigh against such a gigantic, company-crushing penalty.
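
To make the arithmetic concrete, here is the back-of-the-envelope calculation (the statutory factors a court must weigh would almost certainly produce a far smaller fine):

```python
MAX_CIVIL_PENALTY = 42_530      # maximum civil penalty per person
AFFECTED_PERSONS = 87_000_000   # Facebook/Cambridge Analytica estimate

exposure = MAX_CIVIL_PENALTY * AFFECTED_PERSONS
print(f"${exposure:,}")  # $3,700,110,000,000 -- roughly $3.7 trillion
```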

In enforcing the Act, the DPA must coordinate with other federal regulators, which means multiple, overlapping jurisdictions are the likely future landscape if this bill is enacted. These agencies may refer cases to the DPA for prosecution, and yet, should one federal agency initiate an action for a privacy or security violation, the DPA may not also bring an action. Moreover, the Act requires the DPA to execute an agreement with the FTC to coordinate enforcement. State attorneys general may bring actions under this Act, but only if the DPA is not doing so, and a state needs to provide notice to the DPA before proceeding.

As noted, a private right of action is available for people to allege violations of the Act. However, a class action seeking civil damages could only be brought by a non-profit and not plaintiffs’ attorneys, suggesting a class action for injunctive relief may be brought by a plaintiffs’ attorney. There is also a provision allowing a whistleblower to bring an action after first allowing the DPA the option to litigate. If the DPA accepts and prevails, the whistleblower would be entitled to 15% of the award, but if the whistleblower litigates the case herself, she may be entitled to between 25% and 50% of the award.

Aside from civil penalties, the DPA or a state attorney general may recover a range of equitable relief:

  • Rescission or reformation of contracts;
  • Refund of moneys;
  • Restitution;
  • Disgorgement or compensation for unjust enrichment;
  • Payment of damages or other monetary relief;
  • Public notification regarding the violation, including the costs of notification; and
  • Limits on the activities or functions of the person.

Additionally, the DPA or state attorneys general may also seek to recover all the costs of prosecuting the case.

FTC Acts Against Stalking App Developer

The Federal Trade Commission (FTC) announced its first action regarding applications for smartphones that may be placed on a user’s device without their knowledge or consent (aka stalking apps). The FTC took action against a developer of stalking apps for violating both the Federal Trade Commission Act (FTC Act) and the Children’s Online Privacy Protection Rule (COPPA Rule). In its press release, the FTC claimed these apps “allowed purchasers to monitor the mobile devices on which they were installed, without the knowledge or permission of the device’s user.”

Retina-X Studios, LLC agreed to a consent order that permanently restrains and enjoins the company “from, or assisting others in, promoting, selling, or distributing a Monitoring Product or Service unless Respondents” meet a list of requirements, including forswearing the circumvention of a mobile device’s operating system for installation (aka jail-breaking or rooting), eliciting affirmative agreement that users of any such app will employ it only in lawful, enumerated practices, and requiring that, whenever the app is running, there be a clear and conspicuous icon on the device alerting the user that the app has been installed and is functional.

Like many such settlements, the FTC elicited agreement from the app developer to cease certain past practices and to engage in future practices designed both to avoid the offensive conduct and to lead to better data security. Failure to do so would allow the FTC to go back to the court and request an order to show cause against the entity, putting it in jeopardy of facing civil penalties of more than $42,000 per violation.

Of course, the FTC’s power to order entities to take certain, broadly gauged actions, such as instituting a comprehensive data security program, has been called into question in LabMD v. FTC. In that 2018 case, the U.S. Court of Appeals for the Eleventh Circuit ruled against the FTC and held that the agency may not direct entities to take future, ill-defined actions. Rather, in the appeals court’s view, the FTC’s underlying statute allows the agency only to spell out the conduct that entities may not engage in, whether in a cease and desist order issued by the FTC or a consent decree issued by a U.S. District Court. Of course, this is only the view of one circuit, and the other circuits are free to continue operating under the old understanding that the FTC may indeed direct entities to, for example and most relevantly in this case, implement a comprehensive data security regime.

In LabMD, the FTC Order that the Eleventh Circuit found faulty required:

…that the respondent shall, no later than the date this order becomes final and effective, establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers by respondent or by any corporation, subsidiary, division, website, or other device or affiliate owned or controlled by respondent. Such program, the content and implementation of which must be fully documented in writing, shall contain administrative, technical, and physical safeguards appropriate to respondent’s size and complexity, the nature and scope of respondent’s activities, and the sensitivity of the personal information collected from or about consumers, including…

A. the designation of an employee or employees to coordinate and be accountable for the information security program;

B. the identification of material internal and external risks to the security, confidentiality, and integrity of personal information that could result in the unauthorized disclosure, misuse, loss, alteration, destruction, or other compromise of such information, and assessment of the sufficiency of any safeguards in place to control these risks. At a minimum, this risk assessment should include consideration of risks in each area of relevant operation, including, but not limited to: (1) employee training and management; (2) information systems, including network and software design, information processing, storage, transmission, and disposal; and (3) prevention, detection, and response to attacks, intrusions, or other systems failures;

C. the design and implementation of reasonable safeguards to control the risks identified through risk assessment, and regular testing or monitoring of the effectiveness of the safeguards’ key controls, systems, and procedures;

D. the development and use of reasonable steps to select and retain service providers capable of appropriately safeguarding personal information they receive from respondent, and requiring service providers by contract to implement and maintain appropriate safeguards; and

E. the evaluation and adjustment of respondent’s information security program in light of the results of the testing and monitoring required by Subpart C, any material changes to respondent’s operations or business arrangements, or any other circumstances that respondent knows or has reason to know may have a material impact on the effectiveness of its information security program.

However, in the instant case, the FTC is far more prescriptive than it was in LabMD, directing Retina-X Studios to

Design, implement, maintain, and document safeguards that control for the internal and external risks to the security, confidentiality, or integrity of Personal Information identified in response to sub-Provision VI.D. Each safeguard shall be based on the volume and sensitivity of the Personal Information that is at risk, and the likelihood that the risk could be realized and result in the unauthorized access, collection, use, alteration, destruction, or disclosure of the Personal Information. Respondents’ safeguards shall also include:

1. Technical measures to monitor all of Respondents’ networks and all systems and assets within those networks to identify data security events, including unauthorized attempts to exfiltrate Personal Information from those networks;

2. Technical measures to secure Respondents’ web applications and mobile applications and address well-known and reasonably foreseeable vulnerabilities, such as cross-site scripting, structured query language injection, and other risks identified by Respondents through risk assessments and/or penetration testing;

3. Data access controls for all databases storing Personal Information, including by, at a minimum, (a) requiring authentication to access them, and (b) limiting employee or service provider access to what is needed to perform that employee’s job function;

4. Encryption of all Personal Information on Respondents’ computer networks; and

5. Establishing and enforcing policies and procedures to ensure that all service providers with access to Respondents’ network or access to Personal Information are adhering to Respondents’ Information Security Program.

The FTC continues by requiring:

F. Assess, at least once every twelve (12) months and promptly following a Covered Incident, the sufficiency of any safeguards in place to address the risks to the security, confidentiality, or integrity of Personal Information, and modify the Information Security Program based on the results.

G. Test and monitor the effectiveness of the safeguards at least once every twelve months and promptly following a Covered Incident, and modify the Information Security Program based on the results. Such testing shall include vulnerability testing of each of Respondents’ network(s) once every four (4) months and promptly after any Covered Incident, and penetration testing of each Covered Business’s network(s) at least once every twelve (12) months and promptly after any Covered Incident;

H. Select and retain service providers capable of safeguarding Personal Information they receive from each Covered Business, and contractually require service providers to implement and maintain safeguards for Personal Information; and

I. Evaluate and adjust the Information Security Program in light of any changes to Respondents’ operations or business arrangements, a Covered Incident, or any other circumstances that Respondents know or have reason to know may have an impact on the effectiveness of the Information Security Program. At a minimum, each Covered Business must evaluate the Information Security Program at least once every twelve (12) months and modify the Information Security Program based on the results.

Is it possible the FTC is seeking to forestall future actions based on LabMD through the use of more descriptive, prescriptive requirements for entities in establishing and running better data security programs? It absolutely could be. Some have suggested that the agency telegraphed its current thinking on what is proper data security in draft regulations earlier this year that are more detailed than the current regulations and the numerous settlements the FTC has entered into.

“Digital Accountability and Transparency to Advance Privacy Act” (DATA Privacy Act) (S. 583)

Last week, we spent a bit of time looking at the “Privacy Bill of Rights Act” (S. 1214), the only bill to get an A in the Electronic Privacy Information Center’s report on privacy bills, and likely outside the realm of the politically possible at present. This week, we will examine Senator Catherine Cortez Masto’s (D-NV) “Digital Accountability and Transparency to Advance Privacy Act” (DATA Privacy Act) (S. 583). Of course, Cortez Masto served as the attorney general of Nevada for eight years prior to succeeding former Senator Harry Reid (D-NV), and this bill demonstrates her background as her state’s top prosecutor.

In terms of similarities to the other privacy bills, the Federal Trade Commission (FTC) would promulgate extensive regulations to effectuate a new federal privacy regime under the Administrative Procedure Act (APA) and would be able to punish privacy violations by seeking civil penalties in the first instance in a court action. Consumers would receive the right to opt-in and opt-out of certain data collection, processing, use, sharing, and selling practices conducted by entities.

State attorneys general would be able to bring actions under the new enforcement structure. Consumers would not be allowed to sue for violations of the new regime, which is aligned with a number of other bills.

Like the “Privacy Bill of Rights” (S. 1214), the DATA Privacy Act would rule certain practices out of bounds instead of going the route of enhanced notice and consent like many of the other bills do (i.e. once a consumer is informed of how an entity proposes to collect and use their data, almost any subsequent processing and use would be acceptable).

In terms of the scope of the DATA Privacy Act, like the “Privacy Bill of Rights Act” (S. 1214), virtually all entities collecting, using and disclosing consumer information would be considered a “covered entity.” However, there would be an exception exempting entities that “collect[], process[], store[], or disclose[] covered data relating to fewer than 3,000 individuals and devices during any 12-month period.” “Covered data” is “any information that is—

  • collected, processed, stored, or disclosed by a covered entity;
  • collected over the internet or other digital network; and
  • linked to an individual or device associated with an individual; or
  • practicably linkable to an individual or device associated with an individual, including by combination with separate information, by the covered entity or any potential recipient of the data.”

This definition encompasses much of the current data ecosystem. Any information vacuumed up electronically that is or can be linked to a person or her device would be covered under the bill. However, employment data and government records made available to the public are not covered data.

The bill defines “privacy risk” to be the “potential harm to an individual resulting from the collection, processing, storage, or disclosure of covered data, including—

(A) direct or indirect financial loss;

(B) stigmatization or reputational harm;

(C) anxiety, embarrassment, fear, and other severe emotional trauma;

(D) loss of economic opportunity; or

(E) physical harm.”

If enacted, the FTC and courts may have difficulty in determining what exactly constitutes things like “stigmatization or reputational harm,” “anxiety, embarrassment, fear, and other severe emotional trauma,” or “loss of economic opportunity.” What turns out to be a “privacy risk” would likely be shaped on a case-by-case basis after FTC regulations speak to these concepts. Nonetheless, the DATA Privacy Act is one of the few bills that seeks to make what some might consider non-economic or non-tangible privacy injuries illegal conduct.

In terms of the personal information upon which consumers would gain new rights and protections, this bill introduced some new ways of looking at this concept, notably, the following terms:

  • A “protected characteristic” is an “individual’s race, sex, gender, sexual orientation, nationality, religious belief, or political affiliation.”
  • “pseudonymous data” are “covered data that may only be linked to the identity of an individual or the identity of a device associated with an individual if combined with separate information.”
  • a “reasonable interest” means—
    • a compelling business, operational, administrative, legal, or educational justification for the collection, processing, storage, or disclosure of covered data exists;
    • the use of covered data is within the context of the relationship between the covered entity and the individual linked to the covered data; and
    • the interest does not subject the individual to an unreasonable privacy risk.
  • “sensitive data” are “any covered data relating to—
    • the health, biologic, physiologic, biometric, sexual life, or genetic information of an individual; or
    • the precise geolocation information of a device associated with an individual.”

The FTC is given explicit authority to modify two of these definitions through the rulemaking authority granted the agency (i.e. “pseudonymous data” and “sensitive data”), suggesting the other definitions may not be changed.

Covered entities would need to “post in an accessible location a notice that is concise, in context, in easily understandable language, accurate, clear, timely, updated, uses visualizations where appropriate, conspicuous, and free of charge regarding the covered entity’s privacy practices.” This notice must inform consumers of “the methods necessary to exercise their rights” described elsewhere in the bill.

Within one year of enactment, the FTC must promulgate regulations that require covered entities “to implement, practice, and maintain certain data procedures and processes” subject to standards that are distinguishable from other privacy bills.

“[R]egarding the means by and purposes for which covered data is collected, processed, stored, and disclosed,” covered entities must engage in the following practices, to be detailed by FTC regulation:

  • A covered entity’s collection, processing, storage, and disclosure of covered data must be in service of a reasonable interest of the covered entity, such as
    • business, educational, and administrative operations that are relevant and appropriate to the context of the relationship between the covered entity and the individual linked to the covered data;
    • relevant and appropriate product and service development and enhancement;
    • preventing and detecting abuse, fraud, and other criminal activity;
    • reasonable communications and marketing practices that follow best practices, rules, and ethical standards;
    • engaging in scientific, medical, or statistical research that follows commonly accepted ethical standards; or
    • any other purpose that the Commission considers to be reasonable.

A few observations about a “reasonable interest.” First, the FTC can add to this list, so it is not exhaustive, but activities not listed would be deemed unreasonable and therefore not allowed. For example, what might be considered unreasonable “communications and marketing practices”? Advertising by third parties unrelated to the consumer based on the information the covered entity gave or sold the third party? Presumably, if this does not expose a consumer to a “privacy risk,” it may be permissible. Second, the FTC will need to define a reasonableness standard by which a “business” operation “relevant and appropriate to the context of the relationship between the covered entity and the individual linked to the covered data” may be deemed acceptable under the bill. Would it be acceptable, say, if a consumer perusing Amazon’s website for books on substance abuse addiction has granted the necessary permissions for the website to use such covered data to sell advertisements on its website aimed at this consumer regarding 12-step programs? Possibly not, since this would be “sensitive data,” which is protected under a different standard than “covered data.”

The bill introduces an equitable standard that would bar the collection, use, disclosure, or processing of covered data in a way that results in discrimination on the basis of a protected characteristic. Consequently, discriminatory targeted advertising practices, “price, service, or employment opportunity discrimination,” or any other practice the FTC thinks would result in discrimination on the basis of a protected characteristic would be disallowed. Incidentally, this standard would seem to place the FTC’s or a state attorney general’s calculus on what constitutes discrimination on the disparate impact side of the issue, as opposed to disparate treatment, which usually requires an intent to discriminate. As a result, Republican and industry stakeholders would likely object to these provisions.

Finally, a forthrightness standard would bar covered entities from a number of potentially deceptive practices, including using “inconspicuous recording or tracking devices and methods,” disclosing the contents of a private communication, employing misleading methods of representation, and anything else the FTC decides does not meet this standard.

But would not these practices also run afoul of Section 5 of the FTC Act? Likely so; the difference is that the FTC would not be able to ask a court for fines on the basis of Section 5 violations.

The DATA Privacy Act employs both opt-out and opt-in rights for consumers depending on the type of information in question. Consumers would be able to opt out of the collection, usage, processing, and disclosure of “covered data linked to the individual.” However, the definition of “covered data” includes both data that is linked to an individual and information that can reasonably be linked to an individual. This statement of the right to opt out may be a bit muddled and in need of clarification: is it all covered data or just the covered data that is actually linked to a person?

And yet, consumers would need to express “affirmative, opt-in consent” in a number of situations:

before the covered entity collects or discloses sensitive data linked to the individual; or

before the covered entity collects, processes, stores, or discloses data for purposes which are outside the context of the relationship of the covered entity with the individual linked to the data, including—

the use of covered data beyond what is necessary to provide, improve, or market a good or service that the individual requests;

the processing or disclosure of covered data differs in material ways from the purposes described in the privacy policy that was in effect when the data was collected; and

any other purpose that the Commission considers outside of context.

Again, the FTC would be given power to further define what data collection, usage, processing, or disclosure practices would require affirmative, opt-in consent. However, opt-in consent would allow covered entities to utilize a consumer’s data in many ways. Of course, a question lurking beneath all these enhanced notice and consent regimes is whether a consumer’s consent makes all data usage kosher.

Finally, covered entities would have the responsibility to minimize data, including by taking “reasonable measures to limit the collection, processing, storage, and disclosure of covered data to the amount that is necessary to carry out the purposes for which the data is collected; and…[storing] covered data only as long as is reasonably necessary to carry out the purposes for which the data was collected.”
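
As a sketch of what the storage-limitation half of this duty might look like in practice (the retention schedule and record shape below are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule keyed to the purpose for which data was collected.
RETENTION = {
    "order_fulfillment": timedelta(days=90),
    "fraud_prevention": timedelta(days=365),
}

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only covered data still necessary for its stated purpose."""
    return [r for r in records if now - r["collected_at"] <= RETENTION[r["purpose"]]]

now = datetime.now(timezone.utc)
records = [
    {"purpose": "order_fulfillment", "collected_at": now - timedelta(days=200)},
    {"purpose": "fraud_prevention", "collected_at": now - timedelta(days=200)},
]
print(len(purge_expired(records, now)))  # 1 -- the stale fulfillment record is dropped
```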

However, the bill details circumstances under which covered entities may dispense with these data minimization requirements: if the limitations on the collection, processing, storage, or disclosure of covered data would—

  • inhibit detection or prevention of a security risk or incident;
  • risk the health, safety, or property of the covered entity or individual; or
  • prevent compliance with an applicable law (including regulations) or legal process.

The FTC’s regulations would also need to include requirements on how covered entities must allow consumers to access, correct, delete, and obtain a portable version of covered data. However, “[i]f the covered data that an individual has requested processed…is pseudonymous data, a covered entity may decline the request if processing the request is not technically feasible.” And, this type of data are “covered data that may only be linked to the identity of an individual or the identity of a device associated with an individual if combined with separate information.” Moreover, a covered entity may not retaliate or discriminate against a consumer that avails herself of these rights by “denying goods or services to the individual;” “charging, or advertising, different prices or rates for goods or services;” or “providing different quality of goods or services.”

The DATA Privacy Act links privacy and data security legislation, a feature favored by Democrats more than Republicans. The FTC’s regulations would “require covered entities to establish and implement policies and procedures regarding information security practices for the treatment and protection of covered data.” Among the elements these new regulations must address are “the level of identifiability of the covered data and the associated privacy risk…[and] the sensitivity of the covered data collected, processed, and stored and the associated privacy risk.” The FTC must also consider current “technological, administrative, and physical” safeguards and the costs to a covered entity of implementing, maintaining, and regularly reviewing safeguards. Finally, the FTC is required to weigh how regulations would affect small and medium-sized businesses.

As mentioned, both the FTC and state attorneys general could enforce the new regime, and the FTC could intervene in any state action.

A Privacy Bill A Week: The Data Care Act

As we wait for stakeholders in Congress to finalize and release their proposals to regulate how private sector companies handle, use, and distribute the private information of Americans, we thought there would be value in reviewing some of the key bills already introduced this Congress and some introduced over the last few Congresses so that when bills are finally introduced, we will have a baseline by which to judge the proposals.

This week, let’s examine the “Data Care Act” (S. 3744). In December 2018, fifteen Democratic Senators led by Senator Brian Schatz (D-HI) and including presidential candidates Senators Michael Bennet (D-CO), Amy Klobuchar (D-MN) and Cory Booker (D-NJ) introduced a bill that would extend the concept of fiduciary responsibility, currently binding on health care professionals and attorneys with respect to their patients’ and clients’ information, to “online service providers.”

This bill built on a concept fleshed out by law professor Jack Balkin in his article “Information Fiduciaries and the First Amendment” that would place duties on companies collecting and using consumer data similar to those that lawyers and doctors must meet in how they handle client and patient information. Balkin explained that these so-called “information fiduciaries” should “have special duties to act in ways that do not harm the interests of the people whose information they collect, analyze, use, sell, and distribute.”

Schatz has been in negotiations with other members of the Senate Commerce, Science, and Transportation Committee with the goal of developing a bipartisan bill to regulate privacy at a federal level. As discussed in past issues of the Technology Policy Update, stakeholders in both the House and Senate continue to negotiate privacy bills but significant disagreements have been reported regarding whether such a bill has a private right of action, preempts the “California Consumer Privacy Act” (CCPA) (A.B. 375) and other state laws, and whether a new regime is primarily enhanced notice and consent or certain conduct would no longer be allowed amongst other issues.

In short, under the “Data Care Act,” “online service providers” would be severely limited in how they collect, share, and sell personally identifiable information (PII), for these companies would need to treat their customers’ PII as privileged and deserving of a greater level of protection, much as the HIPAA regulations impose this standard on health care providers or bar associations’ rules do on attorneys. However, the scope of who is an online service provider would seem to encompass most consumer-oriented companies doing business on the internet. And, like most other privacy and data security bills, the Federal Trade Commission (FTC) would enforce the new regime.

An “online service provider” is defined as an entity “engaged in interstate commerce over the internet or any other digital network; and in the course of business, collects individual identifying data about end users, including in a manner that is incidental to the business conducted.” This very sweeping definition would cover almost any business or entity doing business in the U.S., even business not conducted across state lines, given how broadly the Supreme Court has often construed the Commerce Clause. However, the FTC would have the discretionary authority to exclude categories of online service providers from the fiduciary duties the bill would otherwise impose; the FTC is directed to consider the privacy risks posed by each category of online service provider.

The bill requires that “[a]n online service provider shall fulfill the duties of care, loyalty, and confidentiality” towards consumers’ personal information, which is also broadly defined in the bill.  The duty of care requires online service providers to “reasonably” safeguard “individual identifying data” from unauthorized access and notify consumers of any breach of this duty, subject to FTC regulations that would be promulgated. The duty of loyalty would require online service providers to not use the information in a way that benefits them to the detriment of consumers, including uses that would result in reasonably foreseeable material physical or financial harm to the consumer. Finally, the duty of confidentiality limits the disclosure or sale of consumers’ information to instances where the duties of care and loyalty are observed (i.e. when the information must be safeguarded and not used to the detriment of consumers). Moreover, under this duty, should an online service provider wish to share or sell consumers’ information with a third party, they would need to enter into a contract with the other party that requires them to meet the same duties of care, loyalty, and confidentiality.

As noted, the FTC would enforce the act and would have the authority to levy fines in the first instance for violations, but state attorneys general would also be able to bring actions for violations in the event the FTC does not act or after FTC action. This latter power has long been a Democratic priority in the realm of data security and may be a non-starter with Republicans. Moreover, the bill does not preempt state laws, meaning the FTC could investigate a violation under this act while states investigate under their own laws. The FTC would be given authority under the Administrative Procedure Act (APA) to promulgate regulations regarding data breach notification instead of using the much more onerous Magnuson-Moss rulemaking procedures the FTC must otherwise follow. These regulations include the aforementioned regulations on breach notification and some possible exemptions to the duties that would otherwise apply to online service providers (e.g. small companies). The bill also expands the FTC’s jurisdiction over non-profit entities and common carriers that may also be online service providers.

Possible Preview of Federal Data Security Regulations?

If privacy legislation gets passed by the Congress this year or next (although recent reports suggest a number of impasses between Republicans and Democrats), it might also contain language on data security standards. Such legislation would also likely direct the Federal Trade Commission (FTC) to conduct an Administrative Procedure Act (APA) rulemaking to promulgate regulations on privacy and data security. As most of the major bills provide that the FTC would use APA notice and comment procedure instead of the far lengthier Magnuson-Moss procedures, it is not far-fetched to envision FTC regulations on privacy and/or data security coming into effect, say, in the first year of the next Administration. However, what might FTC regulations on data security look like? Well, the FTC’s recent proposed update to the Safeguards Rule may provide a roadmap, but first a little background.

The “Financial Services Modernization Act of 1999” (P.L. 106-102) (aka Gramm-Leach-Bliley) required financial services regulators to promulgate regulations to “protect the security and confidentiality of…customers’ nonpublic personal information.” The FTC, among other regulators, was required to “establish appropriate standards for the financial institutions…relating to administrative, technical, and physical safeguards-

  • (1) to insure the security and confidentiality of customer records and information;
  • (2) to protect against any anticipated threats or hazards to the security or integrity of such records; and
  • (3) to protect against unauthorized access to or use of such records or information which could result in substantial harm or inconvenience to any customer.”

The current Safeguards regulations were promulgated in May 2002 and reflect the thinking of the agency in the era before big data, widespread data breaches, smartphones, and other technological developments. Consequently, the regulations governing those financial services companies subject to FTC regulation under Gramm-Leach-Bliley now seem vague and almost permissive in light of best practices and requirements subsequently put in place for many entities. The current Safeguards Rule is open-ended and allows a regulated entity the discretion and flexibility to determine what constitutes the “information security program” it must implement based on the entity’s “size and complexity, the nature and scope of your activities, and the sensitivity of any customer information at issue.” Covered entities must perform risk assessments to identify and ideally remediate foreseeable internal and external risks. Subsequently, the covered entity must “[d]esign and implement information safeguards to control the risks you identify through risk assessment, and regularly test or otherwise monitor the effectiveness of the safeguards’ key controls, systems, and procedures.”

These regulations are not prescriptive and are more general in nature, or at least they seem so in retrospect. One would hope that any entity holding any modicum of sensitive consumer information is regularly and vigorously engaged in an ongoing practice of assessing and addressing risks. However, the repromulgation of the Safeguards Rule suggests this may not be the case.

The FTC is using its very broad grant of authority under Gramm-Leach-Bliley to revisit the Safeguards Rule as part of its periodic sweep of its regulations. The FTC explained that when it issued the current Safeguards Rule in 2002, “it opted to provide general requirements and guidance for the required information security program, without providing detailed descriptions of what the information security program should contain.” The FTC claimed that “[i]t took this approach in order to provide financial institutions with the flexibility to shape the information security programs to their particular business and to allow the programs to adapt to changes in technology and threats to the security and integrity of customer information.” The FTC asserted its belief that while the new provisions “continue to provide companies with flexibility, they also attempt to provide more detailed guidance as to what an appropriate information security program entails.”

In the proposed changes to the Safeguards Rule, the FTC is calling for “more specific security requirements” that “will benefit financial institutions by providing them more guidance and certainty in developing their information security programs, while largely preserving that flexibility.” It is possible that in offering more detailed prescriptions that the FTC is responding to criticisms generally that its data security standards are vague[1]. The FTC contends that the “proposed amendments provide more detailed requirements as to the issues and threats that must be addressed by the information security program, but do not require specific solutions to those problems.” The Commission claims “the proposed amendments retain the process-based approach of the Rule, while providing a more detailed map of what information security plans must address.”

The FTC explains

These amendments are based primarily on the cybersecurity regulations issued by the New York Department of Financial Services, 23 NYCRR 500 (“Cybersecurity Regulations”), and the insurance data security model law issued by the National Association of Insurance Commissioners (“Model Law”). The Cybersecurity Regulations were issued in February 2017 after two rounds of public comment. The Model Law was issued in October 2017. The Commission believes that both the Cybersecurity Regulations and the Model Law maintain the balance between providing detailed guidance and avoiding overly prescriptive requirements for information security programs. The proposed amendments do not adopt either law wholesale, instead taking portions from each and adapting others for the purposes of the Safeguards Rule.

However, the FTC does not merely lift provisions from each but rather uses these as guidelines in drafting its own regulations, and the agency picks, chooses, modifies and discards from the regulations. Going over the three sets of data security requirements and providing a detailed analysis is outside the scope of this article. Rather, I would like to hit some of the high points by way of illustrating both the FTC’s reliance on the two predecessor schemes and also to show how the agency’s thinking on what constitutes adequate data security has evolved since 2002.

The FTC’s proposed Safeguards Rule would generally require covered entities to encrypt consumers’ personal information at rest and in transit on external systems. Similarly, the use of multi-factor authentication would be required in most circumstances, and covered entities would need to engage in regular penetration testing.

As a threshold matter, the Commission defines what a “security event” is and how regulated entities must gear their data security to preventing or reducing the risk that a “security event” occurs. Under the currently effective regulations, there is no definition. The agency proposes that a “security event” will mean “an event resulting in unauthorized access to, or disruption or misuse of, an information system or information stored on such information system.” In the Federal Register notice, the FTC explained that “[t]his term is used in proposed provisions requiring financial institutions to establish written incident response plans designed to respond to security events and to implement audit trails to detect and respond to security events.”

The FTC would generally require “covered entities” to encrypt consumer information at rest or in transit, subject to a significant exception. The agency’s reasoning seems to be that encryption should be used for sensitive consumer information when it is not prohibitively difficult or expensive to do so. The NAIC model statute charges regulated entities to “[d]etermine which security measures…are appropriate and implement such security measures,” including encryption, whereas the NYDFS requires the encryption of nonpublic information at rest or transmitted by covered entities unless doing either has been determined to be “infeasible,” a determination the entity’s CISO must review and approve. The FTC followed the NYDFS in substantial part, and the language on the exemption from the encryption requirement follows word for word: “[t]o the extent you determine that encryption of customer information, either in transit over external networks or at rest, is infeasible, you may instead secure such customer information using effective alternative compensating controls reviewed and approved by your CISO.” It is not clear, moreover, what under this loophole would stop organizations regulated by the FTC from making this determination, for the agency would appear to have limited recourse in questioning a covered entity’s decision not to encrypt customer data. It is unclear if the FTC will keep this provision in the final regulation. In contrast, the NYDFS requires the CISO to revisit such decisions annually.
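
For flavor only (the rule does not dictate an implementation), encryption of customer information at rest might, in its simplest form, look like the following sketch using the widely used third-party cryptography package; the record contents are invented.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Key management is the hard part in practice; a real program would rely on an
# HSM or a managed key service rather than an in-process key.
key = Fernet.generate_key()
cipher = Fernet(key)

customer_record = b"jane.doe@example.com,555-0100"
encrypted_at_rest = cipher.encrypt(customer_record)  # store this, not the plaintext
assert cipher.decrypt(encrypted_at_rest) == customer_record
```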

Tellingly, however, the FTC opted against the safe harbor the NAIC model statute offers when an entity has encrypted the exfiltrated or accessed data and the encryption process or key has not also been breached. The NYDFS likewise offers no such safe harbor in defining a security event that triggers the reporting and notification requirements. Nonetheless, the FTC lifts its definition of encryption almost word for word from the NAIC model statute.

Another new requirement for covered entities is the use of multi-factor authentication. The FTC’s draft regulations provide that “[i]n order to develop, implement, and maintain your information security program, you shall…[d]esign and implement safeguards to control the risks you identify through risk assessment, including…multi-factor authentication.” The agency defines multi-factor authentication as “authentication through verification of at least two of the following types of authentication factors:

  • (1) Knowledge factors, such as a password;
  • (2) Possession factors, such as a token; or
  • (3) Inherence factors, such as biometric characteristics.”

The FTC explained that it “views multi-factor authentication as a minimum standard to allowing access to customer information for most financial institutions…[and] believes that the definition of multi-factor authentication is sufficiently flexible to allow most financial institutions to develop a system that is suited to their needs.” Nonetheless, “[t]he Commission seeks comment on whether this definition is sufficiently flexible, while still requiring the elements of meaningful multi-factor authentication.”
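
As a toy illustration of the two-of-three-factors definition (the factor labels and function below are hypothetical, not from the rule):

```python
def is_multi_factor(verified_factors: set[str]) -> bool:
    """True if at least two distinct factor types have been verified."""
    factor_types = {"knowledge", "possession", "inherence"}
    return len(verified_factors & factor_types) >= 2

print(is_multi_factor({"knowledge", "possession"}))  # True: password plus token
print(is_multi_factor({"knowledge"}))                # False: a password alone
```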

Like the NYDFS and NAIC standards, the FTC would require “information systems under the Rule to include audit trails designed to detect and respond to security events.” The agency uses a National Institute of Standards and Technology (NIST) definition of audit trail: “chronological logs that show who has accessed an information system and what activities the user engaged in during a given period.” The FTC noted that this standard will “not require any specific type of audit trail, nor does it require that every transaction be recorded in its entirety,” but, crucially, “the audit trail must be designed to allow the financial institution to detect when the system has been compromised or when an attempt to compromise has been made.” Also, the FTC will not require that audit trails be retained for any set period of time; rather, covered entities must hold them for a “reasonable” period. What should be the FTC’s expectations for maintaining audit trails that date back to a “security event” that first occurred two years before it was discovered? Is two years a reasonable period of time to store audit trail materials?
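
A minimal sketch of an audit trail in the NIST sense, i.e. a chronological log of who accessed what; the logger configuration and names below are illustrative only.

```python
import logging

audit = logging.getLogger("audit")
handler = logging.FileHandler("access_audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record_access(user: str, system: str, action: str) -> None:
    """Append a chronological entry: who accessed what, and what they did."""
    audit.info("user=%s system=%s action=%s", user, system, action)

record_access("analyst7", "customer-db", "export 500 records")
```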

Similarly, the draft Safeguards rule would “require financial institutions to take steps to monitor those users and their activities related to customer information in a manner adapted to the financial institution’s particular operations and needs.” The FTC noted that “[t]he monitoring should allow financial institutions to identify inappropriate use of customer information by authorized users, such as transferring large amounts of data or accessing information for which the user has no legitimate use.”

The FTC would bolster the current mandate that covered financial institutions “[r]egularly test or otherwise monitor the effectiveness of the safeguards’ key controls, systems, and procedures, including those to detect actual and attempted attacks on, or intrusions into, information systems.” The agency calls for “either ‘continuous monitoring’ or ‘periodic penetration testing and vulnerability assessments.’” However, in lieu of continuous monitoring, the FTC is willing to accept annual penetration testing and biannual vulnerability testing “reasonably designed to identify publicly known security vulnerabilities in your information systems based on the risk assessment.”

Finally, the FTC is proposing to carve out very small institutions that would otherwise fall within the scope of the rule because they “maintain relatively small amounts of customer information.” As a result, the draft Safeguards rule would exempt small covered entities from needing to:

  • 314.4(b)(1), requiring a written risk assessment;
  • 314.4(d)(2), requiring continuous monitoring or annual penetration testing and biannual vulnerability assessment;
  • 314.4(h), requiring a written incident response plan; and
  • 314.4(i), requiring an annual written report by the CISO.

The FTC articulated its belief that these are the requirements most likely “to cause undue burden on smaller financial institutions.”

The FTC seems to be balancing the expense imposed on these smaller institutions with presumably less resources for compliance against the mandate of Gramm-Leach-Bliley to safeguard customer records and information. But, on its face, the underlying statute does not seem to delegate authority to the FTC to exempt small entities unless the directive to “establish appropriate standards” can be read as a grant of discretion in how the agency meets this Congressional mandate (emphasis added).

And yet, putting aside that issue for the moment, one wonders why the agency drew the line at those institutions that “maintain customer information concerning fewer than five thousand consumers.” Is there a quantitative difference between the resources available to businesses of this size and those “maintain[ing]” the consumer records and information of 7,500 or 10,000 or 20,000 consumers? Also, how exactly will “maintain” be construed? Will it be an annual average of the consumer information held by an institution? A monthly average? A threshold that an entity clears once, after which the Safeguards’ requirements attach? The agency did not explain its thinking on this point.

Incidentally, the FTC actually split on the proposed Safeguards regulation, with Commissioners Noah Joshua Phillips and Christine S. Wilson issuing a dissenting statement in which they extol the virtues of the current rule and assert the proposed regulation “trades flexibility for a more prescriptive approach, potentially handicapping smaller players or newer entrants.” This may suggest a future FTC would not propose a similarly prescriptive approach for privacy and/or data security regulations under legislation yet to be enacted.

And yet, regardless of whether the FTC does proceed in this fashion, might the agency’s thinking on what constitutes acceptable data security under the powers granted by Section 5 of the FTC Act begin to resemble the more directive regime under the Safeguards rule? Given that the agency has not exactly spelled out what is “reasonable” data security, the general requirements for encryption, multi-factor authentication, and penetration testing could well get folded into what the FTC considers the sorts of practices entities will need to use in order not to violate the ban on deceptive and unfair practices.


[1] In LabMD, Inc. v. FTC, the Eleventh Circuit ruled against the FTC’s use of its Section 5 powers to enter into settlements requiring private entities to establish and maintain remedial, “reasonable” data security practices. The court held that such settlements are contrary to the FTC Act because they do not enjoin specific acts or practices but rather command entities to institute data security practices. The court also found that such settlements are ultimately unenforceable because they are vague as to what is a reasonable data security regime.