Sponsors Take A New Run At Privacy Law in Washington State

Perhaps the third time is the charm? Legislators seek to pass a privacy law in Washington state for the third year in a row.

A group of Washington State Senators has introduced a slightly altered version of a privacy bill they floated last summer. A committee of jurisdiction will hold a hearing on 14 January 2021 on SB 5062. Of course, this would mark the third year in a row legislators have tried to enact the Washington Privacy Act. The new bill (SB 5062) tracks closely with the two bills the Washington Senate and House produced last year that lawmakers could not ultimately reconcile. However, there are no provisions on facial recognition technology, which was largely responsible for sinking a privacy bill in Washington State two years ago. The sponsors have also taken the unusual step of appending language covering the collection and processing of personal data to combat infectious diseases like COVID-19.

I analyzed the discussion draft that Washington State Senator Reuven Carlyle (D-Seattle) released over the summer, and so I will not recite everything about the new bill. It should suffice to highlight the differences between the discussion draft and the introduced legislation. Big picture, the bill still uses the concepts of data controllers and processors most famously enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). Like other privacy bills, generally, people in Washington State would not need to consent before an entity could collect and process their information. People would be able to opt out of some activities, but most data collection and processing could still occur as it presently does.

The date on which the bill would take effect was pushed back from 120 days in the discussion draft to 31 July 2022 in the introduced bill. Unlike the discussion draft, SB 5062 would cover non-profits, institutions of higher education, airlines, and others, but the effective date for these entities would be 31 July 2026. The right of a person to access personal data a controller is processing is narrowed slightly: a person would no longer be entitled to the personal data the controller has but rather to the categories of personal data. The time controllers would have to respond to a certain class of request would be decreased from 45 to 15 days. This class includes requests to opt out of targeted advertising, the sale of personal data, and any profiling in furtherance of decisions with legal effects. Section 106’s requirement that processors have reasonable security measures has been massaged, rephrased, and possibly weakened a bit.

One of the activities controllers and processors could undertake without meeting the requirements of the act was removed. Notably, they would no longer be able to “conduct internal research solely to improve or repair products, services, or technology.” There is also a clarification that using any of the exemptions in Section 110 does not make an entity a controller for purposes of the bill. There is a new requirement that the State Office of Privacy and Data Protection must examine current technology that allows for mass or global opt out or opt in and then report to the legislature. Finally, two of the Congressional stakeholders on privacy and data security hail from Washington state, and consideration and possible passage of a state law may limit their latitude on a federal bill they could support. Senator Maria Cantwell (D-WA) and Representative Cathy McMorris Rodgers (R-WA), who are the ranking members of the Senate Commerce, Science, and Transportation Committee and House Energy and Commerce Committee respectively, are expected to be involved in drafting their committees’ privacy bills, and a Washington state statute may affect their positions in much the same way the “California Consumer Privacy Act” (CCPA) (AB 375) has informed a number of California Members’ positions on privacy legislation, especially with respect to bills being seen as weaker than the CCPA.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2021. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Kranich17 from Pixabay

Brown’s Bill Is The Most Privacy Friendly So Far

The top Democrat on the Senate Banking Committee has released the most pro-privacy bill yet.

Senate Banking, Housing and Urban Affairs Ranking Member Sherrod Brown (D-OH) introduced his draft privacy bill, the “Data Accountability and Transparency Act of 2020,” back in June, but too much has been happening to take a proper look at the bill until now. Now that I have, I can say this is the most privacy and consumer rights friendly bill introduced in this Congress and quite possibly any of the recent Congresses. I wonder whether Democrats could pass such a strong, restrictive bill even with super majorities in both chambers and a Democratic President, for the resistance from industry would be very fierce.

In terms of what this bill would do, most notably, a new agency would be created, the Data Accountability and Transparency Agency (DATA), that would be outside the appropriations process like the Consumer Financial Protection Bureau (CFPB), which limits Congress’ power over the agency. It would be headed by a Director appointed by the President and confirmed by the Senate who would serve a five-year term. The agency would also have a Deputy Director. Again, this uses the CFPB as the template and not the Federal Trade Commission (FTC) or Federal Communications Commission (FCC), independent agencies with five Commissioners each. Also, like the CFPB, and unlike the FTC, the agency would be charged with policing unfair, deceptive, and abusive privacy practices in violation of this new law. It appears the DATA (incidentally, a terrible acronym for an agency) would work alongside existing federal agencies, and so the FTC could still police privacy and data security.

Moreover, the Brown bill uses the preemption model from the “Financial Services Modernization Act of 1999” (P.L. 106-102) (aka Gramm–Leach–Bliley) under which states would be allowed to regulate privacy above the federal standard so long as a state statute is not inconsistent. And, state statutes would be preempted only to the degree they are contrary to the new federal law.

And, of course, Brown’s bill allows people to sue for violations, and on the most generous terms I’ve seen among the privacy bills.

Not surprisingly, the definitions are drafted in ways that are uber pro-privacy. For example, “personal data” is defined as “electronic data that, alone or in combination with other data—

  • could be linked or reasonably linkable to an individual, household, or device; or
  • could be used to determine that an individual or household is part of a protected class.”

This is a very broad definition of personal information a U.S. resident would have protected under the bill because it covers more than just data like names, addresses, Social Security numbers, etc. and instead covers all data that could be linked to a person, household, or device. This is a broader definition than most bills, which actually specify the sorts of data. For example, some bills treat specific geolocation data as deserving more protection than other data. However, it is often the case that this is defined as any such data that pinpoints a person’s location to within 1750 feet, meaning that data that locates a person within, say, 2000 feet, or less than half a mile would not be protected. Brown’s definition is simpler, broader, and quite possibly much easier to implement.
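
To put the arithmetic in concrete terms, here is a toy sketch (my own illustration, not language from any bill; the 1,750-foot cutoff is the figure discussed above) of how a radius-based definition draws an arbitrary line that Brown’s linkability approach avoids:

```python
# Toy model of a radius-based "specific geolocation" definition.
# The 1,750-foot threshold is the figure discussed above; everything
# else is hypothetical and only illustrates the cliff effect.

FEET_PER_MILE = 5280
THRESHOLD_FT = 1750

def is_specific_geolocation(precision_ft: float) -> bool:
    """True if the data pinpoints a person within the protected radius."""
    return precision_ft <= THRESHOLD_FT

for radius_ft in (500, 1750, 2000):
    miles = radius_ft / FEET_PER_MILE
    print(f"{radius_ft} ft (~{miles:.2f} mi): protected={is_specific_geolocation(radius_ft)}")
# 2,000 ft is only ~0.38 mi -- well under half a mile -- yet would fall
# outside the heightened protection under a radius-based definition.
```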

Likewise, what is considered a violation under the bill is also very broadly written. A “privacy harm” is “an adverse consequence, or a potential adverse consequence, to an individual, a group of individuals, or society caused, or potentially caused, in whole or in part, by the collection, use, or sharing of personal data, including:

(A) direct or indirect financial loss or economic harm, including financial loss or economic harm arising from fraudulent activities or data security breaches;

(B) physical harm, harassment, or a threat to an individual or property;

(C) psychological harm, including anxiety, embarrassment, fear, other trauma, stigmatization, reputational harm, or the revealing or exposing of an individual, or a characteristic of an individual, in an unexpected way;

(D) an adverse outcome or decision, including relating to the eligibility of an individual for the rights, benefits, or privileges in credit and insurance (including the denial of an application or obtaining less favorable terms), housing, education, professional certification, employment (including hiring, firing, promotion, demotion, and compensation), or the provision of health care and related services;

(E) discrimination or the otherwise unfair or unethical differential treatment with respect to an individual, including in a manner that is prohibited under section 104;

(F) the interference with, or the surveillance of, activities that are protected by the First Amendment to the Constitution of the United States;

(G) the chilling of free expression or action of an individual, or society generally, due to perceived or actual pervasive and excessive collection, use, or sharing of personal data;

(H) the impairment of the autonomy of an individual or society generally; and

(I) any harm fairly traceable to an invasion of privacy tort; and

(J) any other adverse consequence, or potential adverse consequence, consistent with the provisions of this Act, as determined by the Director.

I’ve quoted the entire definition of “privacy harm” because I think it helps one understand the full range of what harms the new privacy agency would be policing. First, it would go beyond actual financial or economic harms to “psychological harm,” which may present courts with problems as they try to navigate what anguish meets this standard and what does not. Second, it covers activities that are protected under the First Amendment, the chilling of free expression, the impairment of a person’s autonomy, and “any harm fairly traceable to an invasion of privacy tort.” This may be the widest definition of harm of any of the privacy bills introduced in this or any other recent Congress. Finally, the DATA could determine that any other consequence, real or potential, qualifies as a privacy harm.

“Protected class” means the actual or perceived race, color, ethnicity, national origin, religion, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability of an individual or a group of individuals.

The bill would outright ban data collection, use, or sharing unless for a permissible purpose, which include:

  • To provide a good, service, or specific feature requested by an individual in an intentional interaction.
  • To engage in journalism, provided that the data aggregator has reasonable safeguards and processes that prevent the collection, use, or sharing of personal data for commercial purposes other than journalism.
  • To employ an individual, including for administration of wages and benefits, except that a data aggregator may not invasively collect, use, or share the employee’s personal data in carrying out this paragraph.
  • Where mandated to comply with Federal, State, or local law.
  • Consistent with due process, direct compliance with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons.
  • To bring or defend legal claims, provided that the parties or potential parties take all necessary measures, including, as applicable, obtaining a protective order, to protect against unnecessary public disclosure of personal data.
  • To detect or respond to security incidents, protect against malicious, deceptive, fraudulent, or illegal activity, or prosecute those responsible for that activity.
  • Free expression by individuals on a social network or media platform.
  • In exigent circumstances, if first responders or medical personnel, in good faith, believe danger of death or serious physical injury to an individual, or danger of serious and unlawful injury to property, requires collection, use, or sharing of personal data relating to the exigent circumstances.
  • The development and delivery of advertisements—
    • based on the content of the website, online service, or application to which the individual or device is connected; and
    • excludes advertising based on the use of any personal data collected or stored from previous interactions with the individual or device, a profile of the individual or device, or the previous online or offline behavior of the individual or device.
  • To offer discounted or free goods or services to an individual if—
    • the offering is in connection with the voluntary participation by the individual in a program that rewards individuals for patronage; and
    • personal data is only collected to track purchases for loyalty rewards under the program.

Again, I’ve quoted at length to show how restrictive the bill is. This is the list of permissible purposes, and one will not find a list of exemptions that pare back the privacy rights ostensibly granted by the bill. For the private sector, the first purpose will be the most relevant, as entities (aka data aggregators under the bill) would be allowed to provide services, products, or goods requested by a person who has intentionally interacted with them. Use of the word “intentional” would seem to rule out accidental or questionable interactions. There is also no language making product or service development an exception like it is in many other bills.

Moreover, with respect to the online advertising industry, behavioral advertising would seem not to be a permissible purpose, at least the variety under which a company aggregates data from different sources to form a profile on a person. Indeed, “[c]ollecting, using, or sharing personal data to generate advertising revenue to support or carry out a permissible purpose is not a permissible purpose.”

The “Data Accountability and Transparency Act of 2020” would permit loyalty or reward programs and even allow a business to offer tiered pricing. And, entities could not charge higher or different prices if a person exercises her rights under the bill.

Brown’s bill would place very strict limits on what entities could do with personal data. To wit, it is provided that “[e]xcept where strictly necessary to carry out a permissible purpose, a data aggregator shall not—

  • share personal data with affiliated entities, service providers, or third parties;
  • use personal data for any purpose other than to carry out a permissible purpose;
  • retain personal data for any time longer than strictly necessary to carry out a permissible purpose; or
  • derive or infer data from any element or set of personal data.”

There is a list of prohibited practices, including, as mentioned, a bar on charging higher prices or providing lesser service or products if one chooses to exercise his rights under the bill. Also, businesses would be prohibited from re-identifying anonymized data or from commingling personal data from different sources. Violating these prohibitions could lead to treble damages.

It also seems like the bill bans most differential pricing:

It is unlawful for a data aggregator to collect, use, or share personal data for advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for housing, employment, credit, or insurance in a manner that discriminates against or otherwise makes the opportunity unavailable or offered on different terms on the basis of a protected class or otherwise materially contributes to unlawful discrimination.

I suppose if there is differential pricing not based on a protected class, then it might be acceptable. However, I’m struggling to think of what that might look like.

This section also makes illegal the use of personal data for vote suppression. This language is an obvious non-starter with Republicans like Senate Majority Leader Mitch McConnell (R-KY) and would find few fans in the White House given recent and persistent allegations of vote suppression efforts by the Trump Campaign in 2016.

Brown’s use of the disparate impact standard in proving discrimination is anathema to most conservatives who have long made the case that disparate treatment should be the measuring stick for determining if discrimination has occurred.

Moreover, if a data aggregator uses automated decision-making systems, then it must continually assess whether any bias against a protected class is occurring or any disparate impact against a protected class is happening.

People would be able to access and port their personal information, and this right is much broader than those provided in other bills. They would be able to access the specific pieces of information collected, used, or shared about them, the permissible purposes used to collect the data, and the service providers and third parties with whom the information was shared. On this latter point, other privacy bills normally provide a person with access, upon request, to the categories of such entities and not the actual entities themselves.

Brown’s privacy bill provides a right of transparency that mandates the following information be made available in each data aggregator’s online privacy policy:

  • A description of the personal data that the data aggregator collects, uses, or shares.
  • The specific sources from which personal data is collected.
  • A description of the sources from which personal data is collected.
  • The permissible purposes for which personal data is collected, used, or shared.
  • The affiliates, service providers, or third parties with which the data aggregator shares personal data, and the permissible purpose for such sharing.
  • A description of the length of time for which personal data is retained.
  • If personal data is collected and retained as anonymized data, a description of the techniques and methods used to create the anonymized data.

Again, this right provides more specific information than comparable rights in other privacy bills.

Data aggregators would have the affirmative duty to ensure the information they collect is correct, and people would have the “right to require that a data aggregator that retains the individual’s personal data correct any inaccurate or incomplete personal data.” Moreover, data aggregators must correct any inaccurate or incorrect information as directed by a person. In other bills, there is language requiring businesses to make best or reasonable efforts but nothing like a guarantee for people or a duty for businesses.

People would be able to ask data aggregators to delete personal information, and they must unless these data are needed to complete a permissible purpose.

Brown’s bill has novel language stipulating “[a]n individual has the right to object to the claimed permissible purpose for any personal data that a data aggregator has collected, used, or shared of such individual.” Consequently, a data aggregator must “produce evidence supporting the data aggregator’s claim that the collection, use, or sharing of such individual’s personal data—

  • was strictly necessary to carry out a permissible purpose;
  • was not used or shared for any other purpose; and
  • has not been retained for any time longer than strictly necessary to carry out a permissible purpose.”

Presumably, failing to produce evidence at all or sufficient evidence constitutes a violation punishable by the new agency.

People would also be allowed to request that a human review material decisions made via automated processes.

Brown puts an interesting twist on the customary language in almost all privacy bills requiring security commensurate with the type of information being collected, used, and shared. The bill creates a duty of care, which, as I seem to recall against my will from law school, makes any violation of such a duty a tort, permitting people to sue under tort law. Nonetheless, the bill provides that

A data aggregator shall implement and maintain reasonable security procedures and practices, including administrative, physical, and technical safeguards, appropriate to the nature of the personal data and the purposes for which the personal data will be collected, used, or shared…

Moreover, this duty of a data aggregator extends to service providers and the former are made explicitly liable for the violations of the latter.

If a data aggregator receives a verified request to exercise these rights, it must comply at no cost. This would not apply to frivolous and irrelevant requests, however.

This new agency would be housed in the Federal Reserve Bank and would be able to keep and use the proceeds from its actions to fund operations. Just like the CFPB, this would ensure independence from Congress and the Executive Branch, and just like the CFPB, this is likely a non-starter with Republicans.

The new Data Accountability and Transparency Agency, as noted, would be empowered to “take any action authorized under this Act to prevent a data aggregator or service provider from committing or engaging in any unfair, deceptive, or abusive act or practice in connection with the collection, use, or sharing of personal data.” Moreover,

The Agency may prescribe rules applicable to a data aggregator identifying unlawful, unfair, deceptive, or abusive acts or practices in connection with the collection, use, or sharing of personal data, which may include requirements for the purpose of preventing such acts or practices. Rules under this section shall not limit, or be interpreted to limit, the scope of unlawful, deceptive, or abusive acts or practices in connection with the collection, use, or sharing of personal data.

The agency’s power to punish unfair acts is drafted similarly to the FTC’s, with the caveat that any such act must be unavoidable and not outweighed by countervailing benefits to people or competition. It bears note that the agency would be able to punish unfair practices “likely” to cause privacy harms or “other substantial harm” to people in addition to practices actually causing harm.

An abusive practice is one that:

  • materially interferes with the ability of an individual to understand a term or condition of a good or service; or
  • takes unreasonable advantage of—
    • a lack of understanding on the part of the individual of the material risks, costs, or conditions of the product or service;
    • the inability of the individual to protect their interests in selecting or using a product or service; or
    • the reasonable reliance by the individual on a data aggregator or service provider to act in the interests of the individual.

Deceptive practices are not defined, and so it is likely the new agency’s powers would be the same as the FTC’s with respect to this type of illegal conduct. Also, the new agency would be able to punish violations of any privacy law, which would bring all the disparate privacy regimes under the roof of one entity in the U.S.

The new agency would receive the authority to punish bad actors in the same bifurcated fashion as the FTC and some other agencies: either through an administrative proceeding or by going to federal court. However, regarding the latter route, the agency would not need to ask the Department of Justice (DOJ) to file suit for it. This detail is salient because this is more and more coming to be the de facto Democratic position on this issue.

Whatever the case, the agency would be able to seek any appropriate legal or equitable relief, the latter term encompassing injunctions, disgorgement, restitution, and other such relief for violations. And, of course, the new agency would be able to punish violations of this new law or any federal privacy law with civil fines laid out in tiers:

  • For any violation of a law, rule, or final order or condition imposed in writing by the Agency, a civil penalty may not exceed $5,000 for each day during which such violation or failure to pay continues.
  • “[F]or any person that recklessly engages in a violation of this Act or any Federal privacy law, a civil penalty may not exceed $25,000 for each day during which such violation continues.”
  • “[F]or any person that knowingly violates this Act or any Federal privacy law, a civil penalty may not exceed $1,000,000 for each day during which such violation continues.”

It seems these tiers would yield only a per-day total and would not be multiplied by the number of affected people. If so, $5,000 a day is a sum most large companies would probably not even register, and even $25,000 a day is bearable for enormous companies like Facebook or Amazon.
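
A quick back-of-the-envelope calculation (a sketch of my own, assuming the per-day reading above is correct and no per-person multiplier applies) shows why only the top tier would register with a large company:

```python
# Maximum annual exposure under each penalty tier in Brown's bill,
# assuming penalties accrue per day of continuing violation and are
# NOT multiplied by the number of affected individuals.

TIERS_PER_DAY = {
    "violation of law, rule, or order": 5_000,
    "reckless violation": 25_000,
    "knowing violation": 1_000_000,
}

for tier, per_day in TIERS_PER_DAY.items():
    print(f"{tier}: ${per_day * 365:,} for a violation lasting a full year")
# violation of law, rule, or order: $1,825,000
# reckless violation: $9,125,000
# knowing violation: $365,000,000
```

Even a full year of reckless violations tops out around $9 million, a rounding error for a company with tens of billions of dollars in annual revenue.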

Nonetheless, violations arising from re-identifying personal data are punished under the last tier (i.e. $1 million per day), and any of these violations might result in criminal prosecution, for the agency may refer such violations to DOJ. CEOs and Boards of Directors could be prosecuted for knowing and intentional violations, which is a fairly high bar, and face up to ten years in prison and a $10 million fine if convicted.

Brown’s bill provides people with a right to sue entities, including government agencies under some circumstances, that violate this act. Also, people may sue the new agency for failing to promulgate required regulations or for adopting rules that violate the act. Plaintiffs would be able to win between $100 and $1,000 per violation per day, punitive damages, attorney’s fees and litigation costs, and any other relief the court sees fit to grant. Many Republicans and industry stakeholders are, of course, opposed to a private right of action, but Brown’s is beyond what the other bills are offering because it would allow for the award of punitive damages and fees related to bringing the litigation. They would likely argue, with justification, that there would be a wave of class action lawsuits. Another non-starter with Republicans is that the act circumvents a threshold consideration that weeds out lawsuits in federal court by stating that a mere violation of the act is an injury for purposes of the lawsuit. This language sidesteps the obstacle upon which many suits are dashed, for one whose privacy has been violated often cannot show an injury in the form of monetary or economic losses.
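
To see why industry would fear the private right of action far more than the agency’s per-day fines, consider a rough sketch (the $100 to $1,000 range is from the bill; the class size and duration are hypothetical):

```python
# Statutory damages of $100-$1,000 per violation per day, multiplied
# across a hypothetical class; compare the per-day agency fines above.

class_size = 1_000_000  # hypothetical certified class
days = 30               # hypothetical duration of the continuing violation

low = 100 * class_size * days
high = 1_000 * class_size * days
print(f"${low:,} to ${high:,}")  # $3,000,000,000 to $30,000,000,000
```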

Like other bills, pre-dispute arbitration agreements and pre-dispute joint action waivers signed by any person shall not be valid or enforceable in court, meaning companies cannot limit legal liability by requiring that people waive their rights as part of the terms of service, as is now customary.

As noted previously, the bill would not preempt all state privacy laws. Rather only those portions of state laws that conflict with this act would be preempted, and states would be free to legislate requirements more stringent than the new federal privacy regulatory structure. Moreover, the bill makes clear that common law actions would still be available.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Jan Alexander from Pixabay

A Washington State Privacy Bill…Rises From The Dead

One of the sponsors of a privacy bill that died earlier this year has reintroduced a modified version with new language in the hopes of passing the bill next year.

Washington State Senator Reuven Carlyle (D-Seattle) has floated a new draft of privacy legislation in the hope it will pass after forerunner bills died in the last two legislative sessions. Carlyle has made a number of changes in the “Washington Privacy Act 2021,” documented in this chart showing the differences between the new bill, the last version of the bill passed by the Washington State Senate last year, the “California Consumer Privacy Act” (CCPA) (AB 375), and the “California Privacy Rights Act” (CPRA) (aka Proposition 24) on this year’s ballot. But in the main, the bill tracks closely with the two bills produced by the Washington Senate and House last year that lawmakers could not ultimately reconcile. However, there are no provisions on facial recognition technology, which was largely responsible for sinking a privacy bill in Washington State two years ago. Carlyle has taken the unusual step of appending language covering the collection and processing of personal data to combat infectious diseases like COVID-19.

Big picture, the bill still uses the concepts of data controllers and processors most famously enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). Like other privacy bills, generally, people in Washington State would not need to consent before an entity could collect and process their information. People would be able to opt out of some activities, but most data collection and processing could still occur as it presently does.

Washingtonians would be able to access, correct, delete, and port their personal data. Moreover, people would be able to opt out of certain data processing: “for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal effects concerning a consumer or similarly significant effects concerning a consumer.” Controllers must provide at least two secure and reliable means by which people could exercise these rights and may not require the creation of a new account. Rather, a controller can require a person to use an existing account to exercise her rights.

Controllers must act on a request within 45 days and are allowed one 45-day extension “where reasonably necessary, taking into account the complexity and number of the requests.” It is not clear what would justify a 45-day extension except for numerous, complex requests, but in any event, the requester must be informed of an extension. Moreover, if a controller decides not to comply with the request, it must let the person know within 45 days, give the reasons for noncompliance, and explain how an appeal may be filed. People would be permitted two free requests a year (although nothing stops a controller from meeting additional requests for free), and controllers may charge thereafter to cover reasonable costs and to deal with repetitive requests. Controllers may also simply deny repetitive requests, and they may deny requests they cannot authenticate. In the latter event, a controller may ask for more information so the person can prove his identity but is not required to.

Each controller would need to establish an internal appeals process for people to use if their request to exercise a right is denied. There is a specified timeline, and, at the end of this process, if a person is unhappy with the decision, the controller must offer to turn the matter over to the Office of the Attorney General of Washington for adjudication.

Like last year’s bills, this draft makes clear the differentiated roles of controllers and processors in the data ecosystem regulated by Washington State. Processors must follow a controller’s instructions and have an obligation to help the controller comply with the act. These obligations must be set out in a contract between each controller and processor “that sets out the processing instructions to which the processor is bound, including the nature and purpose of the processing, the type of personal data subject to the processing, the duration of the processing, and the obligations and rights of both parties.” Additionally, who is a controller and who is a processor will necessarily be a fact-driven analysis, and it is possible for one entity to be both depending on the circumstances.

Notably, processors must help controllers respond to requests from people exercising their rights, secure personal data, and assist in complying with Washington State’s data breach protocol if a breach occurs. Processors must implement and use security commensurate to the personal data they are holding and processing.

Controllers are obligated to furnish privacy policies to people that must include the categories of personal data processed, the purposes for any processing, the categories of personal data shared with third parties, and the categories of third parties with whom sharing occurs. Moreover, if a controller sells personal data for targeted advertising, it has a special obligation to make people aware on a continuing basis, including of their right to opt out if they choose. Data collection is limited to what is reasonably necessary for the disclosed purposes of the data processing. And yet, a controller may ask for and obtain consent to process for purposes beyond those reasonably necessary to effectuate the original purposes disclosed to the person. Controllers would also need to minimize the personal data they have on hand.

Controllers must “establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data…[that] shall be appropriate to the volume and nature of the personal data at issue.” Controllers would not be allowed to process personal data in a way that would violate discrimination laws. And so, controllers may not “process personal data on the basis of a consumer’s or a class of consumers’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, lawful source of income, or disability, in a manner that unlawfully discriminates against the consumer or class of consumers with respect to the offering or provision of (a) housing, (b) employment, (c) credit, (d) education, or (e) the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.” Controllers could not retaliate against people who exercise any of their rights to access, correct, delete, or port their personal data by offering products or services of different price or quality. And yet, controllers may offer different prices and services as part of a loyalty program that is voluntary for people to join and may share personal data with third parties for reasons limited to the loyalty program.

Regarding another subset of personal data, consent will be needed before processing can occur. This subset is “sensitive data,” which is defined as “(a) personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sexual orientation, or citizenship or immigration status; (b) the processing of genetic or biometric data for the purpose of uniquely identifying a natural person; (c) the personal data from a known child; or (d) specific geolocation data.”

The bill also bars a person from waiving his or her rights under any type of agreement; any such waiver would be null and void for reasons of public policy.

Controllers would not need to reidentify deidentified personal data to respond to a request from a person. However, the way this section is written gives rise to questions about the drafter’s intentions. The section would not require controllers to respond to requests from people to access, correct, delete or port personal data if the “controller is not reasonably capable of associating the request with the personal data, or…it would be unreasonably burdensome for the controller to associate the request with the personal data” if other conditions are true as well. Given that this provision comes right after the language on reidentifying deidentified data, it seems like the aforementioned language would apply to other personal data. And so, some controllers could respond to a request by arguing they cannot associate the request or it would be unduly burdensome. Perhaps this is not what the drafters intend, but this could become a route whereby controllers deny legitimate requests.

This section of the bill also makes clear that people will not be able to exercise their rights of access, correction, deletion, or porting if the personal data are “pseudonymous data.” This term is defined as “personal data that cannot be attributed to a specific natural person without the use of additional information, provided that such additional information is kept separately and is subject to appropriate technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.” This is a concept that would seem to encourage controllers and processors to store data in a state that would strip identifiers from the personal data in order for them not to have to incur the cost and time of responding to requests. It bears note the concept and definition appear heavily influenced by the GDPR, which provides:

‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person

Data protection assessments will be necessary for a subset of processing activities: targeted advertising, selling personal data, processing sensitive data, any processing of personal data that presents “a heightened risk of harm to consumers” and another case that requires explanation. This last category is for those controllers who are profiling such that a reasonably foreseeable risk is presented of:

  • “Unfair or deceptive treatment of, or disparate impact on, consumers;
  • financial, physical, or reputational injury to consumers;
  • a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person; or
  • other substantial injury to consumers.”

These “data protection assessments must take into account the type of personal data to be processed by the controller, including the extent to which the personal data are sensitive data, and the context in which the personal data are to be processed.” Moreover, data protection assessments “must identify and weigh the benefits that may flow directly and indirectly from the processing to the controller, consumer, other stakeholders, and the public against the potential risks to the rights of the consumer associated with such processing,  as mitigated by safeguards that can be employed by the controller to reduce such risks.” Moreover, the bill stipulates “[t]he use of deidentified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed, must be factored into this assessment by the controller.” And, crucially, controllers must provide data protection assessments to the Washington Attorney General upon request, meaning they could inform an enforcement action or investigation.

Section 110 of the “Washington Privacy Act 2021” lays out the reasons one usually finds in privacy bills as to the circumstances when controllers and processors are not bound by the act, including but not limited to:

  • Comply with federal, state, or local laws, rules, or regulations;
  • Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities;
  • Cooperate with law enforcement agencies concerning conduct or activity that the controller or processor reasonably and in good faith believes may violate federal, state, or local laws, rules, or regulations;
  • Provide a product or service specifically requested by a consumer, perform a contract to which the consumer is a party, or take steps at the request of the consumer prior to entering into a contract;
  • Take immediate steps to protect an interest that is essential for the life of the consumer or of another natural person, and where the processing cannot be manifestly based on another legal basis;
  • Prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any illegal activity; preserve the integrity or security of systems; or investigate, report, or prosecute those responsible for any such action;

Moreover, the act does “not restrict a controller’s or processor’s ability to collect, use, or retain data to:

  • Conduct internal research solely to improve or repair products, services, or technology;
  • Identify and repair technical errors that impair existing or intended functionality; or
  • Perform solely internal operations that are reasonably aligned with the expectations of the consumer based on the consumer’s existing relationship with the controller, or are otherwise compatible with processing in furtherance of the provision of a product or service specifically requested by a consumer or the performance of a contract to which the consumer is a party.

It seems reasonable to expect controllers and processors to try and read these provisions as liberally as they can in order to escape or circumvent the obligations of the act. I do not level this claim as a criticism; rather, it is what will undoubtedly occur if a regulated entity has halfway decent legal counsel.

One also finds the legal liability language for controllers that was in last year’s bill, too. The act makes clear that controllers cannot be liable for a processor’s violation if, “at the time of disclosing the personal data, the disclosing controller or processor did not have actual knowledge that the recipient intended to commit a violation.” Consequently, even if a reasonable person could foresee that a processor would likely violate the act, unless the controller actually knows a violation is imminent, the controller cannot be held liable. This structuring of the legal liability will likely result in controllers claiming they did not know of processors’ violations and create a disincentive for controllers to press processors to comply with the statutory and contractual requirements binding both.

The bill reiterates:

Personal data that are processed by a controller pursuant to [any of the aforementioned carveouts in Section 110] must not be processed for any purpose other than those expressly listed in this section. Personal data that are processed by a controller pursuant to this section may be processed solely to the extent that such processing is:

(i) Necessary, reasonable, and proportionate to the purposes listed in this section; and

(ii) adequate, relevant, and limited to what is necessary in relation to the specific purpose or purposes listed in this section.

Finally, controllers bear the burden of making the case that the exception being used complies with this section. This would serve to check a regulated entity’s inclination to read terms and requirements as generously as possible for them and their conduct.

The bill would not create a new right for people to sue, but if a person sues and wins on existing grounds (e.g., product liability, tort, contract law, etc.), the liability would be apportioned between controller and processor according to each one’s share of the fault.

In terms of enforcement by the Attorney General, violations of this act are treated as violations of the Washington State Consumer Protection Act, considered violations of the ban on unfair and deceptive practices, with civil liability as high as $7,500 per violation. However, the Attorney General must first “provide a controller thirty days’ written notice identifying the specific provisions of this title the Attorney General, on behalf of a consumer, alleges have been or are being violated.” If a cure is effected, then the Attorney General may not seek monetary damages. But if it is not, then the Attorney General may take the matter to court.
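
The notice-and-cure mechanic can be sketched as a simple decision rule (a hedged illustration of the flow described above, not statutory text; the violation count is hypothetical):

```python
# Sketch of the WPA's enforcement gate: the Attorney General must give
# thirty days' written notice, and a timely cure bars monetary damages.

PENALTY_CAP = 7_500  # per violation under the Consumer Protection Act

def ag_max_recovery(cured_within_30_days: bool, violations: int) -> int:
    """Maximum monetary recovery the Attorney General could seek."""
    if cured_within_30_days:
        return 0  # a cure within the notice period forecloses damages
    return PENALTY_CAP * violations

print(ag_max_recovery(True, 10_000))   # 0
print(ag_max_recovery(False, 10_000))  # 75,000,000
```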

The act preempts all county, city, and local data processing laws.

There is new language in the bill pertaining to public health emergencies, privacy, and contact tracing. However, the provisions are divided between two different titles with one controlling private sector entities and the other public sector entities. Incidentally, at the federal level, privacy bills have not tended to include provisions to address public health emergencies and instead standalone bills have been drafted and introduced.

In regard to private sector entities, controllers and processors would not be able to process “covered data” for a “covered purpose,” which relates to the symptoms of infectious diseases and tracking their spread, unless certain conditions are met. For example, these entities would need to make available a privacy policy, and people must consent to such processing. Additionally, controllers and processors would not be able to disclose “any covered data processed for a covered purpose” to any law enforcement agency in the U.S., sell “any covered data processed for a covered purpose,” or “[s]hare any covered data processed for a covered purpose with another controller, processor, or third party unless such sharing is governed by contract” according to the terms laid out in this section and disclosed to the person in the required privacy policy. However, private sector entities could disclose covered data processed for a covered purpose to federal, state, and local agencies pursuant to laws permitting such disclosure. So, this would likely be under public health or emergency laws.

This section of the bill defines “covered purpose” as

processing of covered data concerning a consumer for the purposes of detecting symptoms of an infectious disease, enabling the tracking of a consumer’s contacts with other consumers, or with specific locations to identify in an automated fashion whom consumers have come into contact with, or digitally notifying, in an automated manner, a consumer who may have become exposed to an infectious disease, or other similar purposes directly related to a state of emergency declared by the governor pursuant to RCW 43.06.010 and any restrictions imposed under the state of emergency declared by the governor pursuant to RCW 43.06.200 through 43.06.270.

There is a section that seems redundant. This provision establishes the right of a person to opt out of processing her covered data for a covered purpose, but the previous section makes clear a person’s covered data may not be processed without her consent. Nonetheless, a person may determine whether his covered data is being processed, request a correction of inaccurate information, and request the deletion of “covered data.” The provisions on how controllers are required to respond to and process such requests are virtually identical to those established for the exercise of the rights to access, correct, delete, and port in the bill.

The relationship and responsibilities between controllers and processors track very closely to those imposed for normal data processing.

Controllers would need to make available privacy policies specific to processing covered data. The bill provides:

Controllers that process covered data for a covered purpose must provide consumers with a clear and conspicuous privacy notice that includes, at a minimum:

  • How a consumer may exercise the rights contained in section 203 of this act, including how a consumer may appeal a controller’s action with regard to the consumer’s request;
  • The categories of covered data processed by the controller;
  • The purposes for which the categories of covered data are processed;
  • The categories of covered data that the controller shares with third parties, if any; and
  • The categories of third parties, if any, with whom the controller shares covered data.

Controllers would also need to limit collection of covered data to what is reasonably necessary for processing and minimize collection. Moreover, controllers may not process covered data in ways that exceed what is reasonably necessary for covered purposes unless consent is obtained from each person. But then the bill further limits what processing of covered data is permissible by stating that controllers cannot “process covered data or deidentified data that was processed for a covered purpose for purposes of marketing, developing new products or services, or engaging in commercial product or market research.” Consequently, other processing purposes would be permissible provided consent has been obtained. And so, a covered entity might process covered data to improve the current means of collecting covered data for the covered purpose.

There is no right to sue entities for violating this section, but it appears controllers may bear more legal responsibility for the violations of their processors regarding covered data. Moreover, the enforcement language is virtually identical to the earlier provisions in the bill as to how the Attorney General may punish violators.

The bill’s third section would regulate the collection and processing of covered data for covered purposes by public sector entities, and for purposes of this section controllers are defined as “local government, state agency, or institutions of higher education which, alone or jointly with others, determines the purposes and means of the processing of covered data.” This section is virtually identical to the second section with the caveat that people would not be given the right to determine if their covered data has been collected and processed for a covered purpose, to request a correction of covered data, and to ask that such data be deleted. Also, a person could not ask to opt out of collection.

Finally, two of the Congressional stakeholders on privacy and data security hail from Washington state, and consideration and possible passage of a state law may limit their latitude on a federal bill they could support. Senator Maria Cantwell (D-WA) and Representative Cathy McMorris Rodgers (R-WA), who are the ranking members of the Senate Commerce, Science, and Transportation Committee and House Energy and Commerce’s Consumer Protection and Commerce Subcommittee respectively, are involved in drafting their committees’ privacy bills, and a Washington state statute may affect their positions in much the same way the “California Consumer Privacy Act” (CCPA) (AB 375) has informed a number of California Members’ positions on privacy legislation, especially with respect to bills being seen as weaker than the CCPA.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Photo Credit: Ally Laws on Pixabay

Further Reading, Other Developments, and Coming Events (22 September)

Coming Events

  • The United States’ Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) announced that its third annual National Cybersecurity Summit “will be held virtually as a series of webinars every Wednesday for four weeks beginning September 16 and ending October 7:”
    • September 23: Leading the Digital Transformation
    • September 30: Diversity in Cybersecurity
    • October 7: Defending our Democracy
    • One can register for the event here.
  • The Senate Judiciary Committee’s Intellectual Property Subcommittee will hold a hearing on 23 September titled “Examining Threats to American Intellectual Property: Cyber-attacks and Counterfeits During the COVID-19 Pandemic” with these witnesses:
    • Adam Hickey, Deputy Assistant Attorney General, National Security Division, Department of Justice
    • Clyde Wallace, Deputy Assistant Director, Cyber Division, Federal Bureau of Investigation
    • Steve Francis, Assistant Director, HSI Global Trade Investigations Division, and Director, National Intellectual Property Rights Center, U.S. Immigration and Customs Enforcement, Department of Homeland Security
    • Bryan S. Ware, Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency, Department of Homeland Security
  • On 23 September, the Senate Commerce, Science, and Transportation Committee will hold a hearing titled “Revisiting the Need for Federal Data Privacy Legislation,” with these witnesses:
    • The Honorable Julie Brill, Former Commissioner, Federal Trade Commission
    • The Honorable William Kovacic, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Jon Leibowitz, Former Chairman and Commissioner, Federal Trade Commission
    • The Honorable Maureen Ohlhausen, Former Commissioner and Acting Chairman, Federal Trade Commission
    • Mr. Xavier Becerra, Attorney General, State of California
  • The House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee will hold a virtual hearing “Mainstreaming Extremism: Social Media’s Role in Radicalizing America” on 23 September with these witnesses:
    • Marc Ginsburg, President, Coalition for a Safer Web
    • Tim Kendall, Chief Executive Officer, Moment
    • Taylor Dumpson, Hate Crime Survivor and Cyber-Harassment Target
    • John Donahue, Fellow, Rutgers University Miller Center for Community Protection and Resiliency, Former Chief of Strategic Initiatives, New York City Police Department
  • On 23 September, the Senate Homeland Security and Governmental Affairs Committee will hold a hearing to consider the nomination of Chad Wolf to be the Secretary of Homeland Security.
  • The Senate Armed Services Committee will hold a closed briefing on 24 September “on Department of Defense Cyber Operations in Support of Efforts to Protect the Integrity of U.S. National Elections from Malign Actors” with:
    • Kenneth P. Rapuano, Assistant Secretary of Defense for Homeland Defense and Global Security
    • General Paul M. Nakasone, Commander, U.S. Cyber Command and Director, National Security Agency/Chief, Central Security Service
  • On 24 September, the Senate Homeland Security and Governmental Affairs Committee will hold a hearing on “Threats to the Homeland” with:
    • Christopher A. Wray, Director, Federal Bureau of Investigation
    • Christopher Miller, Director, National Counterterrorism Center
    • Kenneth Cuccinelli, Senior Official Performing the Duties of the Deputy Secretary of Homeland Security
  • The Senate Judiciary Committee’s Antitrust, Competition Policy & Consumer Rights Subcommittee will hold a hearing on 30 September titled “Oversight of the Enforcement of the Antitrust Laws” with Federal Trade Commission Chair Joseph Simons and United States Department of Justice Antitrust Division Assistant Attorney General Makan Delrahim.
  • The Federal Communications Commission (FCC) will hold an open meeting on 30 September and has made available its agenda with these items:
    • Facilitating Shared Use in the 3.1-3.55 GHz Band. The Commission will consider a Report and Order that would remove the existing non-federal allocations from the 3.3-3.55 GHz band as an important step toward making 100 megahertz of spectrum in the 3.45-3.55 GHz band available for commercial use, including 5G, throughout the contiguous United States. The Commission will also consider a Further Notice of Proposed Rulemaking that would propose to add a co-primary, non-federal fixed and mobile (except aeronautical mobile) allocation to the 3.45-3.55 GHz band as well as service, technical, and competitive bidding rules for flexible-use licenses in the band. (WT Docket No. 19-348)
    • Expanding Access to and Investment in the 4.9 GHz Band. The Commission will consider a Sixth Report and Order that would expand access to and investment in the 4.9 GHz (4940-4990 MHz) band by providing states the opportunity to lease this spectrum to commercial entities, electric utilities, and others for both public safety and non-public safety purposes. The Commission also will consider a Seventh Further Notice of Proposed Rulemaking that would propose a new set of licensing rules and seek comment on ways to further facilitate access to and investment in the band. (WP Docket No. 07-100)
    • Improving Transparency and Timeliness of Foreign Ownership Review Process. The Commission will consider a Report and Order that would improve the timeliness and transparency of the process by which it seeks the views of Executive Branch agencies on any national security, law enforcement, foreign policy, and trade policy concerns related to certain applications filed with the Commission. (IB Docket No. 16-155)
    • Promoting Caller ID Authentication to Combat Spoofed Robocalls. The Commission will consider a Report and Order that would continue its work to implement the TRACED Act and promote the deployment of caller ID authentication technology to combat spoofed robocalls. (WC Docket No. 17-97)
    • Combating 911 Fee Diversion. The Commission will consider a Notice of Inquiry that would seek comment on ways to dissuade states and territories from diverting fees collected for 911 to other purposes. (PS Docket Nos. 20-291, 09-14)
    • Modernizing Cable Service Change Notifications. The Commission will consider a Report and Order that would modernize requirements for notices cable operators must provide subscribers and local franchising authorities. (MB Docket Nos. 19-347, 17-105)
    • Eliminating Records Requirements for Cable Operator Interests in Video Programming. The Commission will consider a Report and Order that would eliminate the requirement that cable operators maintain records in their online public inspection files regarding the nature and extent of their attributable interests in video programming services. (MB Docket No. 20-35, 17-105)
    • Reforming IP Captioned Telephone Service Rates and Service Standards. The Commission will consider a Report and Order, Order on Reconsideration, and Further Notice of Proposed Rulemaking that would set compensation rates for Internet Protocol Captioned Telephone Service (IP CTS), deny reconsideration of previously set IP CTS compensation rates, and propose service quality and performance measurement standards for captioned telephone services. (CG Docket Nos. 13-24, 03-123)
    • Enforcement Item. The Commission will consider an enforcement action.

Other Developments

  • The United States (U.S.) Department of Justice (DOJ) has indicted two Iranian nationals for allegedly hacking into systems in the U.S., Europe, and the Middle East dating back to 2013 to engage in espionage and sometimes theft.
    • The DOJ claimed in its press release:
      • According to a 10-count indictment returned on Sept. 15, 2020, Hooman Heidarian, a/k/a “neo,” 30, and Mehdi Farhadi, a/k/a “Mehdi Mahdavi” and “Mohammad Mehdi Farhadi Ramin,” 34, both of Hamedan, Iran, stole hundreds of terabytes of data, which typically included confidential communications pertaining to national security, foreign policy intelligence, non-military nuclear information, aerospace data, human rights activist information, victim financial information and personally identifiable information, and intellectual property, including unpublished scientific research.  In some instances, the defendants’ hacks were politically motivated or at the behest of Iran, including instances where they obtained information regarding dissidents, human rights activists, and opposition leaders.  In other instances, the defendants sold the hacked data and information on the black market for private financial gain.
      • The victims included several American and foreign universities, a Washington, D.C.-based think tank, a defense contractor, an aerospace company, a foreign policy organization, non-governmental organizations (NGOs), non-profits, and foreign government and other entities the defendants identified as rivals or adversaries to Iran.  In addition to the theft of highly protected and sensitive data, the defendants also vandalized websites, often under the pseudonym “Sejeal” and posted messages that appeared to signal the demise of Iran’s internal opposition, foreign adversaries, and countries identified as rivals to Iran, including Israel and Saudi Arabia.
  • Two United States (U.S.) agencies took coordinated action against an alleged cyber threat group and a front company for “a years-long malware campaign that targeted Iranian dissidents, journalists, and international companies in the travel sector.” The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) “imposed sanctions on Iranian cyber threat group Advanced Persistent Threat 39 (APT39), 45 associated individuals, and one front company…Rana Intelligence Computing Company (Rana)” per the agency’s press release. Treasury further claimed:
    • Rana advances Iranian national security objectives and the strategic goals of Iran’s Ministry of Intelligence and Security (MOIS) by conducting computer intrusions and malware campaigns against perceived adversaries, including foreign governments and other individuals the MOIS considers a threat. APT39 is being designated pursuant to E.O. 13553 for being owned or controlled by the MOIS, which was previously designated on February 16, 2012 pursuant to Executive Orders 13224, 13553, and 13572, which target terrorists and those responsible for human rights abuses in Iran and Syria, respectively.
    • The Federal Bureau of Investigation (FBI) provided “information on numerous malware variants and indicators of compromise (IOCs) associated with Rana to assist organizations and individuals in determining whether they may have been targeted.”
  • The United States (U.S.) Department of Justice (DOJ) also released grand jury indictments against five nationals of the People’s Republic of China and two Malaysians for extensive hacking and exfiltration of commercial and business information with an eye towards profiting from these crimes. The DOJ asserted in its press release:
    • In August 2019 and August 2020, a federal grand jury in Washington, D.C., returned two separate indictments (available here and here) charging five computer hackers, all of whom were residents and nationals of the People’s Republic of China (PRC), with computer intrusions affecting over 100 victim companies in the United States and abroad, including software development companies, computer hardware manufacturers, telecommunications providers, social media companies, video game companies, non-profit organizations, universities, think tanks, and foreign governments, as well as pro-democracy politicians and activists in Hong Kong.
    • The intrusions, which security researchers have tracked using the threat labels “APT41,” “Barium,” “Winnti,” “Wicked Panda,” and “Wicked Spider,” facilitated the theft of source code, software code signing certificates, customer account data, and valuable business information.  These intrusions also facilitated the defendants’ other criminal schemes, including ransomware and “crypto-jacking” schemes, the latter of which refers to the group’s unauthorized use of victim computers to “mine” cryptocurrency.
    • Also in August 2020, the same federal grand jury returned a third indictment charging two Malaysian businessmen who conspired with two of the Chinese hackers to profit from computer intrusions targeting the video game industry in the United States and abroad.  Shortly thereafter, the U.S. District Court for the District of Columbia issued arrest warrants for the two businessmen.  On Sept. 14, 2020, pursuant to a provisional arrest request from the United States with a view to their extradition, Malaysian authorities arrested them in Sitiawan.
  • On 21 September, the House of Representatives took up and passed the following bills, according to summaries provided by the House Majority Whip’s office:
    • The “Effective Assistance in the Digital Era” (H.R. 5546) (Rep. Jeffries – Judiciary) This bill requires the Federal Bureau of Prisons to establish a system to exempt from monitoring any privileged electronic communications between incarcerated individuals and their attorneys or legal representatives.
    • The “Defending the Integrity of Voting Systems Act” (S. 1321) This bill broadens the definition of “protected computer” for purposes of computer fraud and abuse offenses under current law to include a computer that is part of a voting system.
    • The “Promoting Secure 5G Act of 2020” (H.R. 5698) This bill would make it U.S. policy within the international financial institutions (IFIs) to finance only 5G projects and other wireless technologies that include adequate security measures, in furtherance of national security aims to protect wireless networks from bad actors and foreign governments.
    • The “MEDIA Diversity Act of 2020” (H.R. 5567) This bill requires the FCC to consider market entry barriers for socially disadvantaged individuals in the communications marketplace.
    • The “Don’t Break Up the T-Band Act of 2020” as amended (H.R. 451) This bill repeals the requirement on the FCC to reallocate and auction the T-Band.  H.R. 451 also requires the FCC to adopt rules limiting the use of 9-1-1 fees by States or other taxing jurisdictions to (1) the support and implementation of 9-1-1 services and (2) operational expenses of public safety answering points.
    • It bears note that S. 1321 has already passed the Senate, and so the only election security bill to make it through both houses of Congress is off to the White House.

Further Reading

  • “Justice Department expected to brief state attorneys general this week on imminent Google antitrust lawsuit” By Tony Romm — The Washington Post; “Justice Dept. Case Against Google Is Said to Focus on Search Dominance” By Cecilia Kang, Katie Benner, Steve Lohr and Daisuke Wakabayashi — The New York Times; “Justice Department, states to meet in possible prelude to Google antitrust suit” By Leah Nylen — Politico. Tomorrow, the United States Department of Justice (DOJ) will outline its proposed antitrust case against Google with state attorneys general, almost all of whom are investigating Google on the same grounds. Reportedly, the DOJ case is focused on the company’s dominance of online searches, notably its arrangement to make Google the default search engine on iPhones and Androids, and not on its advertising practices. If the DOJ goes down this road, then it will be similar to the European Union’s (EU) 2018 case against Google for the same, which resulted in EU residents being offered a choice on search engines on Android devices and a €4.34 billion fine. This development comes after articles earlier this month that Attorney General William Barr has been pushing the DOJ attorneys and investigators against the wishes of many to wrap up the investigation in time for a pre-election filing that would allow President Donald Trump to claim he is being tough on big technology companies. However, if this comes to pass, Democratic attorneys general may decline to join the suit and may bring their own action also alleging violations in the online advertising realm that Google dominates. In this vein, Texas Attorney General Ken Paxton has been leading the state effort to investigate Google’s advertising business, which critics argue is anti-competitive. Also, according to DOJ attorneys who oppose what they see as Barr rushing the suit, this could lead to a weaker case Google may be able to defeat in court. Of course, this news comes shortly after word leaked from the Federal Trade Commission (FTC) that its case against Facebook could be filed regarding its purchase of rivals WhatsApp and Instagram.
  • “Why Japan wants to join the Five Eyes intelligence network” By Alan Weedon — ABC News. This piece makes the case as to why the United States, United Kingdom, Canada, Australia, and New Zealand may admit a new member to the Five Eyes soon: Japan. The case for the first Asian member is that Japan is a stable, western-style democracy, a key ally in the Pacific, and a bulwark against the influence of the People’s Republic of China (PRC). It is really this latter point that could carry the day, for the Five Eyes may need Japan’s expertise with the PRC and its technology to counter Beijing’s growing ambitions.
  • “The next Supreme Court justice could play a major role in cybersecurity and privacy decisions” By Joseph Marks — The Washington Post. There are a range of cybersecurity and technology cases that the Supreme Court will decide in the near future, and so whether President Donald Trump gets to appoint Justice Ruth Bader Ginsburg’s successor will be very consequential for policy in these areas. For example, the court could rule on the Computer Fraud and Abuse Act for the first time regarding whether researchers are violating the law by probing for weak spots in systems. There are also Fourth Amendment and Fifth Amendment cases pending with technology implications: the former pertains to searches of devices by border guards and the latter to self-incrimination vis-à-vis suspects being required to unlock devices.
  • “Facebook Says it Will Stop Operating in Europe If Regulators Don’t Back Down” By David Gilbert — VICE. In a filing in its case against Ireland’s Data Protection Commission (DPC), Facebook made veiled threats that if the company is forced to stop transferring personal data to the United States, it may stop operating in the European Union altogether. Recently, the DPC informed Facebook that because Privacy Shield was struck down, it would need to stop transfers even though the company has been using standard contractual clauses, another method permitted in some cases under the General Data Protection Regulation. Despite Facebook’s representation, it seems a bit much that the company would leave the EU to any competitors looking to fill its shoes.
  • “As U.S. Increases Pressure, Iran Adheres to Toned-Down Approach” By Julian E. Barnes, David E. Sanger, Ronen Bergman and Lara Jakes — The New York Times. The Islamic Republic of Iran is showing remarkable restraint in its interactions with the United States in the face of continued, punitive actions against Tehran. And this is true also of its cyber operations. The country has made the calculus that any response could be used by President Donald Trump to great effect in closing the gap against front runner former Vice President Joe Biden. The same restraint has marked its cyber operations against Israel, which has reportedly conducted extensive operations inside Iran with considerable damage.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Another Federal Privacy Bill

Senate Commerce Republicans revise and release privacy bill that does not budge on main issues setting them apart from their Democratic colleagues.

Last week, in advance of tomorrow’s hearing on privacy legislation, the chair and key Republicans released a revised version of the draft legislation they floated last year to mark their position on what United States (U.S.) federal privacy regulation should be. Notably, both last year’s draft and the updated version would still preempt state laws like the “California Consumer Privacy Act” (CCPA) (AB 375), and people in the U.S. would not be given the right to sue entities that violate the privacy law. These two issues continue to be the main battle lines between Democratic and Republican bills to establish a U.S. privacy law. Given the rapidly dwindling days left in the 116th Congress and the possibility of a Democratic White House and Senate next year, this may be both a last-gasp effort to get a bill out of the Senate and a way to lay down a marker for next year.

The “Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act” (S.4626) was introduced by Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Senate Majority Whip and Communications, Technology, Innovation, and the Internet Subcommittee Chair John Thune (R-SD), Transportation and Safety Subcommittee Chair Deb Fischer (R-NE), and Senator Marsha Blackburn (R-TN). However, a notable Republican stakeholder is not a cosponsor: Consumer Protection Subcommittee Chair Jerry Moran (R-KS), who introduced his own bill, the “Consumer Data Privacy and Security Act of 2020” (S.3456) (See here for analysis).

As mentioned, Wicker had put out for comment a discussion draft, the “Consumer Data Privacy Act of 2019” (CDPA) (See here for analysis), in November 2019 shortly after the committee’s Ranking Member, Senator Maria Cantwell (D-WA), and other Democrats had introduced their privacy bill, the “Consumer Online Privacy Rights Act” (COPRA) (S.2968) (See here for more analysis). Here’s how I summarized the differences at the time: in the main, CDPA shares the same framework with COPRA with some key, significant differences, including:

  • COPRA expands the FTC’s jurisdiction in policing privacy harms whereas CDPA would not
  • COPRA places a duty of loyalty on covered entities to people whose covered data they process or transfer; CDPA does not have any such duty
  • CDPA does not allow people to sue if covered entities violate the new federal privacy and security regime; COPRA would allow such suits to move forward
  • CDPA preempts state privacy and data security laws; COPRA would establish a federal floor that states like California would be able to legislate on top of
  • CDPA would take effect two years after enactment, and COPRA would take effect six months after enactment.
  • The bar against a person waiving her privacy rights under COPRA is much broader than CDPA
  • COPRA would empower the FTC to punish discriminatory data processing and transfers; CDPA would require the FTC to refer these offenses to the appropriate federal and state agencies
  • CDPA revives a concept from the Obama Administration’s 2015 data privacy bill by allowing organizations and entities to create standards or codes of conduct and allowing those entities in compliance with these standards or codes to be deemed in compliance with CDPA subject to FTC oversight; COPRA does not have any such provision
  • COPRA would require covered entities to conduct algorithmic decision-making impact assessments to determine if these processes are fair, accurate, and unbiased; no such requirement is found in CDPA

As a threshold matter, the SAFE DATA Act is the latest in a line of enhanced notice and consent bills founded on the logic that if people were informed and able to make choices about how and when their data are used, then the U.S. would have an ideal data and privacy ecosystem. This view, perhaps coincidentally, dovetails with Republican views on other issues where people should merely be given information and the power to choose, with any bad outcomes being the responsibility of those who made poor choices. It runs counter to the view of privacy and data security as akin to environmental or pollution problems, that is, problems beyond the ability of any one person to manage or realistically change.

Turning to the bill before us, we see that while covered entities may not outright deny services and products to people who choose to exercise the rights granted under the bill vis-à-vis their covered data, a covered entity may charge different prices. This structure would predictably mean that only those who can afford it, or are passionately committed to their privacy, would be able to pay for more privacy. And yet, the rights established by the bill for people to exercise some control over their private information cannot be waived, forestalling the possibility that some covered entities would make such a waiver a term of service, as many companies do with a person’s right to sue.

Covered entities must publish privacy policies before or at the point of data collection, including:

  • The identity of the entity in charge of processing and using the covered data
  • The categories of covered data collected and the processing purposes of each category
  • Whether transfers of covered data occur, the categories of those receiving such data, and the purposes for which transfers occur
  • The entity’s data retention and data security policies generally; and
  • How individuals may exercise their rights.

Any material change means a new privacy policy must be provided to people, and consent must again be obtained before collection and processing may resume.

There is, however, language not seen in other privacy bills: “[w]here the ownership of an individual’s device is transferred directly from one individual to another individual, a covered entity may satisfy its obligation to disclose a privacy policy prior to or at the point of collection of covered data by making the privacy policy available under (a)(2)” (i.e. by posting on the entity’s website.) So, if I give an old phone to a friend, a covered entity may merrily continue collecting and processing data because I consented and my friend’s consent is immaterial. Admittedly, this would seem to be a subset of all devices used in the U.S., but it does not seem a stretch to require covered entities to obtain consent when they determine a different person has taken over a device. After all, they will almost certainly be able to discern the identity of the new user and that the device is now being used by someone new.

Section 103 of the SAFE DATA Act establishes a U.S. resident’s rights to access, correct, delete, and port covered data. People would be able to access their covered data and correct “material” inaccuracies or incomplete information at least twice a year at no cost provided the covered entity can verify their identity. Included with the right to access would be provision of the categories of third parties to whom covered data has been transferred and a list of the categories of purposes. There is a long list of reasons why covered entities would not need to comply, including but not limited to:

  • If the covered entity must “retain any covered data for the sole purpose of fulfilling the request;”
  • If it would “be impossible or demonstrably impracticable to comply with;”
  • If a request would “require the covered entity to combine, relink, or otherwise reidentify covered data that has been deidentified;”
  • If it would “result in the release of trade secrets, or other proprietary or confidential data or business practices;”
  • If it would “interfere with law enforcement, judicial proceedings, investigations, or reasonable efforts to guard against, detect, or investigate malicious or unlawful activity, or enforce contracts;”
  • If it would “require disproportionate effort, taking into consideration available technology, or would not be reasonably feasible on technical grounds;”
  • If it would “compromise the privacy, security, or other rights of the covered data of another individual;”
  • If it would “be excessive or abusive to another individual;” or
  • If it would “violate Federal or State law or the rights and freedoms of another individual, including under the Constitution of the United States.”

This extensive list will give companies not interested in complying plenty of reasons to proffer as to why they will not provide access or corrections. Nonetheless, the FTC would need to draft and implement regulations “establishing requirements for covered entities with respect to the verification of requests to exercise rights” to access and correct. Perhaps the agency will be able to address some foreseeable problems with the statute as written.

Explicit consent is needed before a covered entity may transfer or process the “sensitive covered data” of a person. The first gloss on this right is that a person’s consent is not needed to collect, process, and transfer the “covered data” of a person. Elsewhere in the section, it is clear that one has a limited opt out right: “a covered entity shall provide an individual with the ability to opt out of the collection, processing, or transfer of such individual’s covered data before such collection, processing, or transfer occurs.”

Nonetheless, a bit of a detour back into the definitions section of the bill is in order to understand which types of data lie on which side of the consent line. “Covered data” are “information that identifies or is linked or reasonably linkable to an individual or a device that is linked or reasonably linkable to an individual” except for publicly available data, employment data, aggregated data, and de-identified data. Parenthetically, I would note the latter two exceptions would seem to be incentives for companies to hold personal information in the aggregate or in a de-identified state as much as possible so as to avoid triggering the requirements of the SAFE DATA Act.

“Sensitive covered data” would be any of the following (and, my apologies, the list is long):

  • A unique, government-issued identifier, such as a Social Security number, passport number, or driver’s license number, that is not required to be displayed to the public.
  • Any covered data that describes or reveals the diagnosis or treatment of the past, present, or future physical health, mental health, or disability of an individual.
  • A financial account number, debit card number, credit card number, or any required security or access code, password, or credentials allowing access to any such account.
  • Covered data that is biometric information.
  • A persistent identifier.
  • Precise geolocation information (defined elsewhere in the bill as anything identifying a location within 1,750 feet; see the sketch following this list).
  • The contents of an individual’s private communications, such as emails, texts, direct messages, or mail, or the identity of the parties subject to such communications, unless the covered entity is the intended recipient of the communication (meaning metadata is fair game, and metadata can be incredibly valuable; just ask the National Security Agency)
  • Account log-in credentials such as a user name or email address, in combination with a password or security question and answer that would permit access to an online account.
  • Covered data revealing an individual’s racial or ethnic origin, or religion in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (of course, this sort of qualifying language always makes me wonder: according to whose definition of “reasonable expectation”?)
  • Covered data revealing the sexual orientation or sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding the processing or transfer of such information. (See the previous clause)
  • Covered data about the online activities of an individual that addresses or reveals a category of covered data described in another subparagraph of this paragraph. (I suppose this is intended as a backstop against covered entities trying to backdoor their way into using sensitive covered data by claiming it is merely covered data from online activities.)
  • Covered data that is calendar information, address book information, phone or text logs, photos, or videos maintained for private use on an individual’s device.
  • Any covered data collected or processed by a covered entity for the purpose of identifying covered data described in another paragraph of this paragraph. (again, this seems aimed at plugging a possible loophole in that ordinary covered data can probably be processed or combined with other covered data to arrive at some categories of “sensitive covered data.”)
  • Any other category of covered data designated by the Commission pursuant to a rulemaking under section 553 of title 5, United States Code (meaning the FTC can use normal rulemaking authority, and not the shackles of the Magnuson-Moss rulemaking procedures, to expand this definition as needed).
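
To make the 1,750-foot threshold concrete, here is a minimal sketch of how a compliance engineer might test whether a piece of location data counts as “precise” under that definition. The function and constant names are mine, not the bill’s, and it assumes the definition works as a simple radius test:

```python
import math

PRECISE_RADIUS_FEET = 1_750   # threshold from the bill's definition of precise geolocation
FEET_PER_METER = 3.28084
EARTH_RADIUS_M = 6_371_000    # mean Earth radius

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in feet."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a)) * FEET_PER_METER

def is_precise(reported, actual):
    """True if the reported point falls within 1,750 feet of the actual
    location, i.e., would count as precise geolocation on this reading."""
    return distance_feet(*reported, *actual) <= PRECISE_RADIUS_FEET
```

On this reading, location data coarsened until is_precise returns False would drop out of “sensitive covered data,” which may be exactly the sort of line-drawing the FTC would end up policing.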

So, we have a subset of covered data that would be subject to consent requirements, including notice with a “clear description of the processing purpose for which the sensitive covered data will be processed;” that “clearly identif[ies] any processing purpose that is necessary to fulfill a request made by the individual;” that “include[s] a prominent heading that would enable a reasonable individual to easily identify the processing purpose for which consent is sought;” and that “clearly explain[s] the individual’s right to provide or withhold consent.”

Finally, the FTC may, but is not required to, “establish requirements for covered entities regarding clear and conspicuous procedures for allowing individuals to provide or withdraw affirmative express consent for the collection of sensitive covered data.” If the agency chooses to do so, it may use the normal notice and comment procedures virtually every other U.S. agency uses.

Covered entities must minimize collection, processing, and retention of covered data to “what is reasonably necessary, proportionate, and limited” except if permitted elsewhere in the SAFE DATA Act or another federal statute. Interestingly, the FTC would not be tasked with conducting a rulemaking but would instead need to issue guidelines with best practices on how covered entities would undertake such minimization.

Service providers must follow the direction of the covered entity with whom they are working and delete or deidentify data after they have finished work upon it. Third parties are limited in processing covered data to only those purposes consistent with the reasonable expectations of the individual to whom the data belong, but they do not need to obtain consent for processing sensitive covered data or covered data. Covered entities must, however, perform due diligence to ensure that service providers and third parties will comply with the requirements particular to these two classes of entities. Yet there is no obligation beyond due diligence and no suggestion of liability for the misdeeds and violations of service providers and third parties.

Large data holders would need to conduct periodic privacy impact analyses with an eye toward helping these entities improve their privacy policies. This class of covered entities comprises those that have processed or transferred the covered data of 8 million or more people in a given year or the sensitive covered data of 300,000 or more people.

The SAFE DATA Act would generally allow covered entities to collect, process, and transfer the covered data of people without their consent so long as these activities are reasonably necessary, proportionate and limited to the following purposes:

  • To initiate or complete a transaction or to fulfill an order or provide a service specifically requested by an individual, including associated routine administrative activities such as billing, shipping, financial reporting, and accounting.
  • To perform internal system maintenance, diagnostics, product or service management, inventory management, and network management.
  • To prevent, detect, or respond to a security incident or trespassing, provide a secure environment, or maintain the safety and security of a product, service, or individual.
  • To protect against malicious, deceptive, fraudulent, or illegal activity.
  • To comply with a legal obligation or the establishment, exercise, analysis, or defense of legal claims or rights, or as required or specifically authorized by law.
  • To comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by an Executive agency.
  • To cooperate with an Executive agency or a law enforcement official acting under the authority of an Executive or State agency concerning conduct or activity that the Executive agency or law enforcement official reasonably and in good faith believes may violate Federal, State, or local law, or pose a threat to public safety or national security.
  • To address risks to the safety of an individual or group of individuals, or to ensure customer safety, including by authenticating individuals in order to provide access to large venues open to the public.
  • To effectuate a product recall pursuant to Federal or State law.

People would not be able to opt out of the collection, processing, and transfer of covered data for these purposes. As mentioned earlier, U.S. residents would receive a limited right to opt out, and it is in Section 108 that one learns the things a person cannot opt out of. I suppose it should go without saying that covered entities will interpret these terms as broadly as they can in order to forestall people from opting out. The performance of “internal system maintenance, diagnostics, product or service management, inventory management, and network management” seems like a potentially elastic category that could be asserted to give cover to some covered entities.

Speaking of exceptions, small businesses would not need to heed the rights of individuals regarding their covered data, would not need to minimize their collection, processing, and transfer of covered data, and would not need to have data privacy and security officers. These are defined as entities with gross annual revenues below $50 million that have processed the covered data of fewer than 1 million people, have fewer than 500 employees, and earn less than 50% of their revenue from transferring covered data. On its face, this seems like a very generous definition of what shall be a small business.
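
If one reads those four criteria as a conjunctive test, which is my assumption rather than anything the bill spells out, the exemption boils down to a simple predicate; the names here are hypothetical:

```python
def is_small_business(annual_revenue_usd: int,
                      people_whose_data_processed: int,
                      employees: int,
                      revenue_share_from_data_transfers: float) -> bool:
    """Hypothetical sketch of the SAFE DATA Act's small business exemption,
    assuming all four thresholds must be met simultaneously."""
    return (annual_revenue_usd < 50_000_000
            and people_whose_data_processed < 1_000_000
            and employees < 500
            and revenue_share_from_data_transfers < 0.50)

# Example: a 400-person firm with $45 million in revenue, data on 900,000
# people, and 10% of revenue from data transfers would escape these duties.
print(is_small_business(45_000_000, 900_000, 400, 0.10))  # True
```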

The FTC would not be able to police processing and transferring of covered data that violates discrimination laws. Instead, the agency would need to transfer these matters to the agencies of jurisdiction. The FTC would be required to use its 6(b) authority to “examin[e] the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws” and then publish a report on its findings along with guidance on how covered entities can avoid violating discrimination laws.

Moreover, the National Institute of Standards and Technology (NIST) must “develop and publish a definition of ‘digital content forgery’ and accompanying explanatory materials.” One year afterwards, the FTC must “publish a report regarding the impact of digital content forgeries on individuals and competition.”

Data brokers would need to register with the FTC, which would then publish a registry of data brokers on its website.

There would be additional duties placed on covered entities. For example, these entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of covered data.” However, financial services companies subject to and in compliance with Gramm-Leach-Bliley regulations would be deemed to be in compliance with these data security obligations. The same would be true of entities subject to and in compliance with the “Health Insurance Portability and Accountability Act” and “Health Information Technology for Economic and Clinical Health Act.” Additionally, the FTC may “issue regulations to identify processes for receiving and assessing information regarding vulnerabilities to covered data that are reported to the covered entity.”

The SAFE DATA Act has language new to federal privacy bills on “opaque algorithms.” Specifically, covered internet platforms would not be able to use opaque algorithms unless notice is provided to users and an input-transparent algorithm version is available to users. The term ‘‘covered internet platform’’ is broad and encompasses “any public-facing website, internet application, or mobile application, including a social network site, video sharing service, search engine, or content aggregation service.” An “opaque algorithm” is “an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.”
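
A toy example may help illustrate the line the bill draws; the data model and affinity scores below are hypothetical, not anything in the SAFE DATA Act. A reverse-chronological feed relies on no user-specific data, so it would presumably qualify as input-transparent, while a feed ordered by inferred affinity scores the user never expressly supplied for ranking looks like an “opaque algorithm”:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since epoch
    text: str

def transparent_feed(posts: list[Post]) -> list[Post]:
    """Input-transparent ranking: reverse-chronological order,
    relying on no user-specific data at all."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def opaque_feed(posts: list[Post],
                inferred_affinity: dict[str, float]) -> list[Post]:
    """Opaque ranking: order driven by user-specific data (inferred
    affinity scores) the user never expressly provided for this purpose."""
    return sorted(posts,
                  key=lambda p: inferred_affinity.get(p.author, 0.0),
                  reverse=True)
```

Under the bill, a platform running something like opaque_feed would have to notify users and make a version akin to transparent_feed available.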

The bill makes it an unfair and deceptive practice for “large online operator[s]” “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.”

A covered entity must have

  • 1 or more qualified employees or contractors as data privacy officers; and
  • 1 or more qualified employees or contractors…as data security officers.

Moreover, “[a] covered entity shall maintain internal controls and reporting structures to ensure that appropriate senior management officials of the covered entity are involved in assessing risks and making decisions that implicate compliance with this Act.”

There are also provisions protecting whistleblowers inside covered entities that “voluntarily provide[] [“original information”] to the [FTC]…relating to non-compliance with, or any violation or alleged violation of, this Act or any regulation promulgated under this Act.”

As in virtually all the other bills, the FTC would be able to levy civil fines of more than $42,000 per violation, and state attorneys general would also be able to enforce the new privacy regime. However, the FTC would be able to intervene and take over an action if it chose, and if two or more state attorneys general bring cases regarding the same violations, the cases would be consolidated and heard in the federal court in the District of Columbia. The FTC would also get jurisdiction over common carriers and non-profits for purposes of enforcing the SAFE DATA Act.

And then there is new language in the SAFE DATA Act that seems aimed at addressing a pair of cases before the Supreme Court on the extent of the FTC’s power to seek and obtain certain monetary damages and equitable relief. The FTC has appealed an adverse ruling from the U.S. Court of Appeals for the Seventh Circuit while the other case is coming from the U.S. Court of Appeals for the Ninth Circuit.

Like the forerunner bill released last November, the FTC would be empowered to “approve voluntary consensus standards or certification programs that covered entities may use to comply with 1 or more provisions in this Act.” These provisions came from an Obama Administration privacy bill allowing for the development and usage of voluntary consensus-based standards for covered entities to comply with in lieu of the provisions of that bill.

The SAFE DATA Act would not impinge on existing federal privacy laws but would preempt all privacy laws at the state level. Ironically, the bill would not preempt state data breach notification laws. One would think if uniformity across the U.S. were a driving motivation, doing so would be desirable.



CPRA Analyzed

The CCPA follow-on bill on the ballot in California will significantly change how the state regulates privacy, which will set the de facto standard for the U.S. in the absence of federal legislation.

With the “California Privacy Rights Act” (CPRA) having successfully qualified for the ballot on which Californians will vote in November, it is worth taking a deeper look at the bill. It would replace the “California Consumer Privacy Act” (CCPA) (AB 375), which just came into full effect with the publishing of final regulations on 14 August. Nonetheless, as the Office of the Attorney General was drafting regulations, the organization that pushed for passage of the CCPA, Californians for Consumer Privacy (CCP), completed the drafting of a follow-on bill. CCP Chair and Founder Alastair Mactaggart explained his reasoning for a second ballot initiative: “[f]irst, some of the world’s largest companies have actively and explicitly prioritized weakening the CCPA…[and] [s]econd, technological tools have evolved in ways that exploit a consumer’s data with potentially dangerous consequences.” Moreover, if polling released earlier this month by CCP is close to accurate, then an overwhelming majority of Californians support enactment of the CPRA, meaning a significantly new privacy scheme will come into effect in the next few years in California.

Of course, it may be fair to assert this bill looks to solve a number of problems created by the rush in June 2018 to draft a bill all parties could accept in order to get the CCPA removed from the ballot. Consequently, the CCPA package that was enacted was sloppily drafted in some places with inconsistent provisions that necessitated two rounds of legislation to fix or clarify the CCPA.

As under the CCPA, and unlike some of the bills pending in Congress, the CPRA would still not allow people to deny businesses the right to collect and process their personal information. Californians could stop the sale or sharing of personal information, but not its collection and processing, short of forgoing or limiting online interactions and many in-person interactions. A person could request the deletion of personal information collected and processed, subject to certain limitations and exceptions businesses are sure to read as broadly as possible. Additionally, a new agency would be created to police and enforce privacy rights, but legitimate questions may be posed about its level of resources. Nonetheless, the new statute would come into effect on 1 January 2023, leaving the CCPA as the law of California in the short term and then requiring businesses and people to adjust to the new regime.

In the findings section, CCP explicitly notes the bills introduced to weaken or roll back the CCPA as part of the reason why the CPRA should be enacted. Changes to the California Code made by ballot initiative are much harder to change or modify than statutes enacted through the legislative route. Notably, the CPRA would limit future amendments to only those in furtherance of the act, which would rule out any attempts to weaken or dilute the new regime. Moreover, the bill looks at privacy rights through the prism of an imbalance in information and is founded on the notion that should people in California have more information and real choice in how and when their personal data are collected, processed, and shared, then the most egregious data practices would stop. Of course, this conceptual framework differs from the one used by others in viewing data collection and processing as being more like pollution or air quality, situations over which any one individual is going to have limited impact, thus necessitating collective government action to address deleterious effects. In the view of the CCP, Californians will be on better footing to negotiate their privacy with companies like Facebook and Google. Notably, the CCP asserted:

  • In the same way that ingredient labels on foods help consumers shop more effectively, disclosure around data management practices will help consumers become more informed counterparties in the data economy, and promote competition. Additionally, if a consumer can tell a business not to sell his or her data, then that consumer will not have to scour a privacy policy to see whether the business is, in fact, selling that data, and the resulting savings in time is worth, in the aggregate, a tremendous amount of money.
  • Consumers should have the information and tools necessary to limit the use of their information to non-invasive, pro-privacy advertising, where their personal information is not sold to or shared with hundreds of businesses they’ve never heard of, if they choose to do so. Absent these tools, it will be virtually impossible for consumers to fully understand these contracts they are essentially entering into when they interact with various businesses.

The CPRA would change the notification requirements for businesses interested in collecting, processing, and sharing personal data in Section 1798.100 of the Civil Code (i.e. language added by the CCPA and some of the follow-on bills the legislature passed). This requirement would be binding on the companies that control collection and not just the entities doing the actual collecting, which suggests concern that the ultimate user of personal data could otherwise shield its identity from people. Worse still, the CCPA language may create an incentive to use front companies or third parties to collect personal data. Moreover, the CPRA makes clear that if a company is using another company to collect personal data it will ultimately control, it may meet its notice requirements by posting all the enumerated information prominently on its website. This may be a loophole large companies use to avoid informing people about who is controlling data collection.

There is new language that tightens the information people must be provided as part of this notice, namely the purposes for which personal data are collected or used and whether the entity is proposing to sell or share this information. Moreover, the CPRA would mandate that the notice also include any additional purposes for which personal data are collected and used “that are incompatible with the disclosed purpose for which the personal information was collected.”

The changes to Section 1798.100 and the underlying CCPA language that would remain will apply to a new category of information created by the CPRA: “sensitive personal information.” This term is defined to mean:

  • personal information that reveals
    • a consumer’s social security, driver’s license, state identification card, or passport number;
    • a consumer’s account log-in, financial account, debit card, or credit card number in combination with any required security or access code, password, or credentials allowing access to an account;
    • a consumer’s precise geolocation;
    • a consumer’s racial or ethnic origin, religious or philosophical beliefs, or union membership;
    • the contents of a consumer’s mail, email and text messages, unless the business is the intended recipient of the communication;
    • a consumer’s genetic data; and
  • the processing of biometric information for the purpose of uniquely identifying a consumer;
  • personal information collected and analyzed concerning a consumer’s health; or
  • personal information collected and analyzed concerning a consumer’s sex life or sexual orientation.

However, should any of these data be “publicly available” as defined by the CPRA, they are no longer subject to the heightened requirements normally due this new class of information. For example, the new notice people must be given will list the categories of sensitive personal information collected and the purposes for which such information is collected or used. Additionally, people must be told whether this subset of personal data will be shared or sold.

The CPRA would limit collection, use, processing, and sharing of personal data to purposes “reasonably necessary and proportionate” to achieve the purpose of the information collection. Quite clearly, much will hang on what turns out to be “reasonable,” and this may be construed by the new data protection agency in regulation and ultimately by courts in litigation. However, this provision also allows the “collection, use, retention, and sharing of a consumer’s personal information…for another disclosed purpose that is compatible with the context in which the personal information was collected.” This will also need fleshing out by regulation or litigation, or both. It seems to allow companies to specify another purpose for their data activities so long as it is compatible with the context of collection. And yet, it is not clear what would determine compatibility. If a person agrees to a grocery store chain’s data activities, might the company legally try to collect information regarding the person’s health?

This section also requires businesses to enter into contracts with third parties, service providers, and contractors to ensure they follow the CPRA and to specify that information sold or shared by the business is for limited and specific purposes.

Businesses are obligated to use “reasonable security procedures and practices appropriate to the nature of the personal information to protect the personal information from unauthorized or illegal access, destruction, use, modification, or disclosure.” This is a familiar construct that contemplates a sliding scale of security measures, with lesser steps being all right for less valuable information, say deidentified data, but with higher standards being needed for more sensitive personal data. The challenge in such a regime is that reasonable minds might theoretically disagree about reasonable measures, but the caselaw construing the CPRA will likely point the way to how businesses should secure information.

Section 1798.105 spells out a person’s right to delete personal information and expands the obligation of businesses to direct their service providers and contractors to delete information upon receipt of a valid request. Third parties would be notified of deletion requests and expected to also delete unless doing so would be impossible or “involves disproportionate effort,” a term likely to be given as expansive a reading as possible by many businesses. There still are numerous exceptions for deletion requests, many of which will also likely be read expansively by businesses reluctant to honor deletion requests, including:

  • Complete the transaction for which the personal information was collected, fulfill the terms of a written warranty or product recall conducted in accordance with federal law, provide a good or service requested by the consumer, or reasonably anticipated by the consumer within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract between the business and the consumer.
  • Help to ensure security and integrity to the extent the use of the consumer’s personal information is reasonably necessary and proportionate for those purposes.
  • Debug to identify and repair errors that impair existing intended functionality.
  • Exercise free speech, ensure the right of another consumer to exercise that consumer’s right of free speech, or exercise another right provided for by law.
  • To enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business and compatible with the context in which the consumer provided the information.
  • Comply with a legal obligation.

However, the CPRA eliminates the CCPA exception that could be used to deny deletion requests, which allowed a business to “use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.”

The CPRA creates a new section, 1798.106, titled “Consumers’ Right to Correct Inaccurate Personal Information,” that requires businesses to correct inaccurate personal information in light of the type of information and why it is being processed. Businesses must disclose that people have this right if a person submits a verifiable request to correct inaccurate personal information. However, companies are only required to make commercially reasonable efforts to correct inaccurate personal information. It appears a rulemaking will be necessary to flesh out what would constitute commercially reasonable efforts.

Section 1798.110 is amended by the CPRA but more or less keeps the CCPA’s right for people to know about and access the personal information being collected about them, with some significant changes. For example, there is an expansion of one of the categories businesses must provide to people who utilize this right: under the CCPA, requesters currently must be told the commercial or business purpose for which personal information is collected or sold. Under the CPRA, businesses would also need to inform people of the other entities with whom they share personal information, thus closing a significant loophole, for companies like Facebook share people’s information but do not sell it. Under the CCPA, a Facebook would not need to divulge to a person with which companies it is sharing that person’s information.

Also, the CPRA would deem in compliance those companies that post on their websites the categories of personal information collected, the sources of this information, the business or commercial purposes for collection, and the categories of third parties to whom personal information is disclosed. It seems likely many companies will go this route, meaning the only personal information they would need to furnish upon a request would be the specific pieces of information on the person making the request. And yet, the CPRA strikes the CCPA language providing that businesses need not retain personal information collected in one-time transactions or reidentify or link to these data.

Section 1798.115 of the CCPA would also be changed by expanding the universe of data a person may request and receive regarding how their personal information is shared and sold. The CPRA keeps the basic structure of the CCPA in this regard and merely expands it to include shared as well as sold for the following:

  • The categories of personal information sold or shared and the categories of third parties to whom such information was sold or shared
  • The categories of personal information disclosed about a person for business purposes and the categories of persons to whom such information was disclosed

Third parties would be barred from selling or sharing personal information that has been sold to or shared with them unless they provide explicit notice and people have the opportunity to opt-out.

The CPRA similarly changes Section 1798.120 (aka the right to opt out of the sharing or selling of one’s personal information), keeping the CCPA’s right to opt out of sales or sharing at any time. Likewise, teenagers between 13 and 16 would need to affirmatively agree to selling or sharing, and for any child under 13, his or her parents must affirmatively agree.

A new Section 1798.121, a “Consumers’ Right to Limit Use and Disclosure of Sensitive Personal Information,” would allow people to stop businesses from collecting and using sensitive personal information in some cases. As a general matter, if a person limits collection or use of this class of information, then the business would be limited to “that use which is necessary to perform the services or provide the goods reasonably expected by an average consumer who requests such goods or services,” subject to some of the exceptions embodied in the definition of “business purpose.” Businesses may, however, provide notice of additional uses of sensitive personal information, which a person must then further limit if the new uses are objectionable or unwanted.

The CPRA changes the provision barring retaliation against people who opt out of certain practices or use their CCPA rights. The general prohibition on punishing people who use their rights under the bill with different prices, services, or products would be maintained, and the CPRA would expand this protection to employees, contractors, and applicants for employment. However, the CPRA keeps the CCPA exemption for so-called loyalty programs to offer different prices or services, but only if the difference is reasonably related to the value the person’s data provides to the business. The CCPA requires the linkage to be directly related, so this change may be seen as a subtle weakening of the connection between the value of a person’s data and the rewards or prices offered through membership in a loyalty program. This will almost certainly result in businesses in California using current programs or establishing new ones to press people to share personal information in exchange for better prices or services. After all, all they would need to do is show the value of the person’s data is reasonably related to the advantages of membership. Like other similar provisions in the bill, regulation and litigation will define the parameters of what is reasonably related. Like the CCPA, the new bill would require people to opt into such programs, and should a person refuse, the business would need to wait 12 months before making the offer again.

Many of the previously discussed changes to the CCPA necessitate alterations to a key section of the statute, Section 1798.130, that details notice, disclosure, correction, and deletion requests. Businesses with physical locations must still offer two means for people to make such requests, but the CPRA would allow online businesses to merely make available an email address. Anyone who has ever tried to resolve disputes via email knows how frustrating the process can be, yet an email address is all companies like Facebook or Google would need to offer.

The new 1798.130 also makes clear the 45-day window for businesses to deliver required information to people after receiving a verified request also includes making requested corrections and deletions. A potential hurdle is established for requests, however. In light of the type of information in question, a business may seek to authenticate a person’s identity before granting the request but may not force a person to create an account with the business if they do not have one. To be fair, this provision may be aimed at the mischief that could be created if a person decides to impersonate someone else and ask businesses to delete that person’s information. There are likely other situations in which a malicious person could wreak havoc.

In any event, the disclosure of information would need to cover the previous 12 months under the CPRA, and after new regulations are put in place, people would be able to ask for and receive information stretching back before the preceding 12 months. But such a request could be denied on the grounds of impossibility or disproportionate effort. Presumably the new regulations would spell out when denial on these grounds is justified. Another limitation on this right is that businesses would not need to provide information collected before 1 January 2022.

If a person submits a request to a business’ contractor or service provider to learn what personal information has been collected, sold, or shared, those entities have no obligation to respond. And yet, they must assist a business that receives such requests.

The CPRA stipulates that businesses are required to provide the following types of information if a person asks for the data the entity has:

the categories of sources from which the consumer’s personal information was collected; the business or commercial purpose for collecting, or selling or sharing the consumer’s personal information; and the categories of third parties to whom the business discloses the consumer’s personal information.

A business is also obligated to provide the “specific pieces of personal information obtained from the consumer in a format that is easily understandable to the average consumer, and to the extent technically feasible, in a structured, commonly used, machine-readable format, which also may be transmitted to another entity at the consumer’s request without hindrance.”

Regarding the type of information a business must give to people who ask to know what, if any, information was sold or shared about them, a business must furnish two lists:

  • A list of the categories of personal information it has sold or shared about consumers in the preceding 12 months by reference to the enumerated category or categories in [the revised definition of personal information and new definition of sensitive personal information] that most closely describe the personal information sold or shared, or if the business has not sold or shared consumers’ personal information in the preceding 12 months, the business shall prominently disclose that fact in its privacy policy.
  • A list of the categories of personal information it has disclosed about consumers for a business purpose in the preceding 12 months by reference to the enumerated category in subdivision (c) that most closely describes the personal information disclosed, or if the business has not disclosed consumers’ personal information for a business purpose in the preceding 12 months, the business shall disclose that fact.

The categories of personal information a business must provide are “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following if it identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household:

(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.

(B) Any personal information described in subdivision (e) of Section 1798.80.

(C) Characteristics of protected classifications under California or federal law.

(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.

(E) Biometric information.

(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet website, application, or advertisement.

(G) Geolocation data.

(H) Audio, electronic, visual, thermal, olfactory, or similar information.

(I) Professional or employment-related information.

(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).

(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

The CPRA modifies the CCPA standards on links on a business’ website allowing people to opt out of the sale or sharing of personal information. It also adds a requirement that such a link be placed on a website to allow people to opt out of the use or disclosure of sensitive personal information. A business would now be allowed to have one link for both if it wants, and it would also be allowed to remind people of the advantages of being a member of the business’ loyalty program and any charges or fees associated with not joining. This provision would seem to allow some businesses, at least those that can make the case of a reasonable relation between the discounts provided and the value of personal information, to pose a possibly uncomfortable dilemma to people: your privacy or your money. Put another way, the CPRA may well result in a price being put on one’s privacy, with those of means or those intensely dedicated to privacy being able or willing to limit these practices while everyone else acquiesces in the face of higher prices or worse services or products. Additionally, companies would not need to have links on their website if they allow for opting out through their platform, technology, or app.

If a person opts out, companies would have to wait 12 months before asking again for permission to sell or share their personal information or use or disclose their sensitive personal information. But, one should bear in mind that even if a person opts out of the sale or sharing of personal information, a business may still collect or process it subject to other requirements in the CPRA. This right is limited to the dissemination of personal information through sales or a sharing arrangement.

The CPRA revises some key definitions and introduces new definitions, the most significant of which was discussed earlier: sensitive personal information. Another key change is to the criteria for businesses subject to the CPRA. Each of the three thresholds for becoming a regulated business is changed (the sketch following this list restates the three prongs in code):

  • First, language is changed to make clear a company must have earned $25 million in gross revenues in the preceding year to qualify on the basis of income.
  • Second, the threshold for the number of people is changed. It is raised from 50,000 to 100,000, and instead of counting people and devices, the latter is stricken and now households may be counted. Obviously, a household will likely include multiple devices, so counting by household allows for a higher threshold generally. Also, the counting is limited to the activities of businesses buying, selling, or sharing personal information, and so mere collection and processing is not counted, meaning if a business does not partake in any of the three enumerated activities, it would not qualify under this prong even if it collects and processes the personal information of, say, 1 million Californians.
  • Third, the threshold for businesses deriving 50% or more of their income from selling consumers’ personal information is broadened to include sharing, meaning more entities might qualify on the basis of this prong.
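
Reduced to logic, the three prongs form a simple disjunction. A minimal sketch using the figures described above (the function and parameter names are mine, not statutory terms):

    # Hypothetical restatement of the CPRA’s three coverage thresholds.
    # A business is covered if any one prong is satisfied.
    def is_covered_business(prior_year_gross_revenue: float,
                            people_or_households_bought_sold_shared: int,
                            revenue_share_from_selling_or_sharing: float) -> bool:
        revenue_prong = prior_year_gross_revenue > 25_000_000
        volume_prong = people_or_households_bought_sold_shared >= 100_000
        selling_prong = revenue_share_from_selling_or_sharing >= 0.50
        return revenue_prong or volume_prong or selling_prong

    # A firm that merely collects and processes data on 1 million Californians,
    # without buying, selling, or sharing it, fails the volume prong.
    print(is_covered_business(10_000_000, 0, 0.0))  # False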

Also of note, the definition of business purpose was altered, and new definitions are provided for consent, contractor, cross-context behavioral advertising, dark pattern, non-personalized advertising, and others.

The section on exemptions to the bars in the CCPA is rewritten and expanded by the CPRA. Businesses may disregard the obligations placed on them by this privacy statute under a number of circumstances. For example, added circumstances include complying with a subpoena or court order or responding to direction by law enforcement agencies. Moreover, government agencies would be able to make emergency requests for personal information to businesses if the agency is acting in good faith, asserts a legal right to the information, and follows up with a court order within three days. There is also language that adds contractors to the CCPA’s provisions on the liability of a business for violations by its service providers, which requires actual knowledge of such violations.

The CPRA keeps the CCPA’s grant of authority to allow people to sue for violations but subtly tightens the circumstances under which this may happen to those in which one’s personal information is neither encrypted nor redacted. The CCPA allows a suit if a person’s personal information is either not encrypted or not redacted. Consequently, under the CPRA, if a business uses either method of securing information, it cannot be sued.

As noted, the bill would establish a California Privacy Protection Agency that would take over enforcement of the revised CCPA from the Office of the Attorney General. It would consist of a five-member board including a chair. At the earlier of 1 July 2021 or six months after the new agency informs the Attorney General it is ready to begin drafting rules, it would assume rulemaking authority. Before that date, however, the Attorney General may begin some of the CPRA rulemakings, an interregnum that may serve to complicate implementation. Nonetheless, among other powers, the new agency would be able to investigate and punish violations with fines of up to $2,500 per violation, except for intentional violations and those involving the personal information of minor children, which could be fined at a rate of $7,500 per violation. Like the Federal Trade Commission, the California Privacy Protection Agency would be able to bring administrative actions inside the agency or go to court to sue. However, this new entity would only be provided $5 million during its first year and $10 million a year thereafter, which raises the question of whether the new agency will be able to police privacy in California in a muscular way.


National Privacy Legislation Stalled in U.S.

The chances for U.S. privacy legislation are worse now than they were before the pandemic.  However, there may be some decision points approaching.     

A few weeks into the traditional August recess, Congress is no closer to enacting federal privacy legislation than before the pandemic. In fact, such legislation may be further from being sent to the White House now that more pressing, more immediate matters have eclipsed privacy, such as further COVID-19 relief legislation and appropriations for the next fiscal year set to start on 30 September. There is always the chance stakeholders will dispense with their entrenched positions during a post-election session and reach agreement on a bill, but this will depend on the election results, for if Democrats take the White House and Senate, they may well conclude they will get privacy legislation more to their liking next year.

The present impasse emanates from a few different issues: a private right of action for people and state preemption. Generally speaking, Democrats favor the former and oppose the latter, with Republicans’ positions being the opposite. However, it is possible the two parties can agree on a limited right for people to sue companies for violating their privacy rights and some form of preemption of contrary state laws, perhaps along the lines of the preemption structure in the “Financial Services Modernization Act of 1999” (P.L. 106–102) (aka the Gramm–Leach–Bliley Act), which sets a uniform floor for privacy and data security that states may regulate above. However, industry stakeholders are likely to resist any such provisions, for they would still face litigation, likely in the form of class actions, and varied, differing privacy standards across the U.S.

Otherwise, there is broad agreement that people in the U.S. would be notified of the privacy practices of entities before they can start collecting, processing, and sharing personal data and would need to explicitly agree to allow this to happen. And so, it would likely be an opt-in regime for most data collection, processing, and sharing. However, people would likely get a more limited set of rights to opt out of certain practices such as data transfers to third parties, but there is a great deal of variance among the leading bills on what people can choose to avoid. Likewise, people in the U.S. would generally be able to request and receive access to, correct, and delete personal data in specified situations. Most, but not all, of the bills name the Federal Trade Commission (FTC) as the regulator of a new privacy regulatory structure with varying degrees of rulemaking power. A handful of other bills seek to create out of whole cloth a new privacy regulator along the lines of Europe’s data protection authorities.

However, if the voters of California vote for the ballot initiative to enact the “California Privacy Rights Act” (CPRA), a tightening of the “California Consumer Privacy Act” (CCPA) (AB 375) that would prevent future amendments to weaken or dilute privacy protection in California, things may change in Washington. Deprived of a means of rolling back California’s new privacy regulatory structure, as many industry stakeholders tried to do in the last legislative session with the CCPA, these interests may set their sights on a national privacy bill that would ameliorate this situation. Consequently, they may pressure Republicans and Democrats in Congress to resolve the outstanding issues on federal privacy legislation.

Moreover, stakeholders in Washington are responding to what appears to be the more urgent fire: the deathblow dealt to Privacy Shield by the European Union’s highest court. Without an agreement in place to allow multinationals to transfer personal data to the U.S. and process it there, these entities will need to cease doing so or implement alternate means under the General Data Protection Regulation (GDPR) such as standard contractual clauses (SCC) or binding corporate rules (BCR), but even these means of transfer are not without risk. European Union (EU) data protection authorities (DPAs) may soon be reviewing these agreements to ensure they comport with the Court of Justice of the European Union’s (CJEU) ruling that the U.S. lacks controls and remedies to ensure the privacy rights of EU citizens.

It bears note that another suit has been filed in the EU to test the legality of using SCCs generally to transfer data to the U.S. Austrian privacy activist Maximilian Schrems and the organization he is working with, noyb–European Center for Digital Rights, have filed 101 complaints across the 30 EU and European Economic Area (EEA) nations, arguing that Google and Facebook are operating in violation of the CJEU’s ruling. Specifically, the organization is claiming:

A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) – despite both companies clearly falling under US surveillance laws, such as [Section 702 of the Foreign Intelligence Surveillance Act (FISA)]. Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the “Privacy Shield” a month after it was invalidated, while Facebook continues to use the “SCCs”, despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.

Consequently, even if SCCs are used more widely as a means of transferring personal data, the CJEU could find that such agreements for transfers to the U.S. do not comport with the GDPR, eliminating another means by which U.S. multinationals move data. This could lead to more companies like Facebook and Google segregating EU data and processing it in the EU or another jurisdiction for which the European Commission has issued an adequacy decision. Or, this could create pressure in Washington to reform U.S. surveillance laws and practices in order that a future general data transfer agreement pass muster with the CJEU.
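
As an aside, noyb’s method of inspecting a page’s HTML source for tracker resources is easy to replicate. Below is a minimal sketch assuming Python and the requests library; the marker strings are common loader domains chosen for illustration, not noyb’s actual tooling:

    # Fetch a page and look for telltale Google Analytics or Facebook
    # Connect resources in its HTML. The markers are illustrative, not
    # an exhaustive list of how these services can be embedded.
    import requests

    TRACKER_MARKERS = {
        "Google Analytics": ("google-analytics.com", "googletagmanager.com"),
        "Facebook Connect": ("connect.facebook.net",),
    }

    def detect_trackers(url: str) -> list:
        html = requests.get(url, timeout=10).text
        return [name for name, markers in TRACKER_MARKERS.items()
                if any(marker in html for marker in markers)]

    if __name__ == "__main__":
        print(detect_trackers("https://example.com"))  # [] if no markers found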

Still, it may serve some purpose to list the salient privacy bills and link to analysis. As mentioned, a trio of COVID-19 privacy bills were introduced a few months ago to address mainly the use of smartphones for exposure and contact tracing:

Otherwise, the major privacy bills introduced this Congress include:


Dueling COVID-19 Privacy Bills Released

Democratic stakeholders answer a Republican proposal on how to regulate privacy issues raised by COVID-19 contact tracing. The proposals have little chance of enactment and are mostly about positioning.  


Late last week, a group of Democratic stakeholders on privacy and data security issues released the “Public Health Emergency Privacy Act” (S.3749), a counterpoint to the “COVID-19 Consumer Data Protection Act” (S.3663) introduced a few weeks ago to address the privacy issues raised by COVID-19 contact tracing. However, the Democratic bill contains a number of provisions that many Republicans consider non-starters, such as a private right of action for individuals and no preemption of state laws. It is not likely this bill will advance in the Senate even though it may possibly be moved in the House. S. 3749 was introduced by Senators Richard Blumenthal (D-CT) and Mark Warner (D-VA) and Representatives Anna Eshoo (D-CA), Jan Schakowsky (D-IL), and Suzan DelBene (D-WA).

In a way, the “Public Health Emergency Privacy Act” makes the case that “Health Insurance Portability and Accountability Act” (HIPAA)/“Health Information Technology for Economic and Clinical Health Act” (HITECH Act) regulations are inadequate to protect the privacy and data of people in the United States because these regulations apply only to healthcare providers, their business associates, and other named entities in the healthcare system. Entities collecting healthcare information, even very sensitive information, are not subject to HIPAA/HITECH regulations and would likely be regulated, to the extent they are at all, by the Federal Trade Commission (FTC) or state agencies and attorneys general.

The “Public Health Emergency Privacy Act” would cover virtually all entities, including government agencies, except for public health authorities (e.g. a state department of health or the Centers for Disease Control and Prevention), healthcare providers, service providers, and people acting in a household capacity. This remarkable definition likely reflects plans announced by governments around the world to eschew Google and Apple’s contact tracing framework and develop their own. Moreover, it would also touch some efforts apart from contact tracing apps. Finally, this is the first bill introduced recently that proposes to treat public and private entities the same way with respect to how and when they may collect personal data.

The types of data protected under the act are “emergency health data,” defined as “data linked or reasonably linkable to an individual or device, including data inferred or derived about the individual or device from other collected data provided such data is still linked or reasonably linkable to the individual or device, that concerns the public COVID–19 health emergency.” The bill then provides a sweeping, comprehensive list of examples of emergency health data. This term includes “information that reveals the past, present, or future physical or behavioral health or condition of, or provision of healthcare to, an individual” related to testing for COVID-19 and related genetic and biometric information. Geolocation and proximity information would also be covered by this definition. Finally, emergency health data encompasses “any other data collected from a personal device,” which seems to cover all data collection from a person’s phone, the primary means of contact tracing proposed thus far. However, it would appear that data on people collected at the household or higher level may not be subject to this definition and hence much of the bill’s protections.

The authority provided to covered entities is limited. First, collection, use, and disclosure of emergency health data are restricted to good faith public health purposes. And yet, this term is not defined in the act, and the FTC may need to define it during the expedited rulemaking the bill requires. However, it seems a fairly safe assumption the agency and courts would not construe using emergency health data for advertising as a good faith public health purpose. Next, covered entities must allow people to correct inaccurate information, and the entity itself has the duty to take reasonable efforts on its own to correct this information. Covered entities must implement reasonable safeguards to prevent discrimination on the basis of these data, which is a new obligation placed on covered entities in a privacy or data security bill. This provision may have been included to bar government agencies from collecting and using covered data for discriminatory purposes. However, critics of more expansive bills may see this as the camel’s nose under the tent for future privacy and data security bills. Finally, the only government agencies that can legally be provided emergency health data are public health authorities and then only “for good faith public health purposes and in direct response to exigent circumstances.” Again, the limits of good faith public health purposes remain unclear, and such disclosure can only occur in exigent circumstances, which likely means when a person has contracted or been exposed to COVID-19.

Covered entities and service providers must implement appropriate measures to protect the security and confidentiality of emergency health data, but the third member of the usual triumvirate, availability, is not identified as requiring protection.

The “Public Health Emergency Privacy Act” outright bars a number of uses for emergency health data:

  • Commercial advertising, recommendations for e-commerce, or machine learning for these purposes
  • Any discriminatory practices related to employment, finance, credit, insurance, housing, or education opportunities; and
  • Discriminating with respect to goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation

It bears note that covered data cannot be collected or disclosed for these purposes either.

The act contains very strong consent language. Covered entities may not collect, use, or disclose emergency health data unless a person has provided express affirmative consent, which requires knowing, informed choice that cannot be obtained through the use of deceptive practices nor inferred through a person’s inaction. However, there are significant exceptions to this consent requirement that would allow covered entities to collect, use, or disclose these data, including

  • protecting against malicious, deceptive, fraudulent, or illegal activity;
  • detecting, responding to, or preventing information security incidents or threats; or
  • the covered organization is compelled to do so by a legal obligation.

But these purposes are valid only when necessary and solely for the fulfillment of the named purpose.

In a related vein, covered entities must allow people to revoke their consent, and revocation must take effect as soon as practicable but no later than 15 days afterward. Moreover, the person’s emergency health data must be destroyed or rendered not linkable within 30 days of revocation of consent.

Covered entities must provide clear and conspicuous notice before or at the point of data collection that explains how and why the emergency health data is collected, used, and disclosed. Moreover, it must also disclose the categories of recipients to whom a covered entity discloses covered data. This notice must also disclose the covered entity’s data retention policies and data security policies and practices but only with respect to emergency health data. The notice must also inform consumers on how they may exercise rights under the act and how to file a complaint with the FTC.

There would also be a public reporting requirement for those covered entities collecting, using, or disclosing the emergency health data of 100,000 or more people. Every three months, these entities would need to report aggregate figures on the number of people whose data was collected, used, and disclosed. These reports must also detail the categories of emergency health data collected, used, and disclosed and the categories of third parties with whom such data are shared.

The bill requires covered entities to destroy or render not linkable emergency health data 60 days after the Secretary of Health and Human Services declares the end of the COVID-19 public health emergency, or a state does so, or 60 days after collection.

The FTC would be given the daunting task of beginning a rulemaking 7 days after enactment “to ensure a covered organization that has collected, used, or disclosed emergency health data before the date of enactment of this Act is in compliance with this Act, to the degree practicable” that must be completed within 45 days. There is also a provision requiring the Department of Health and Human Services (HHS) to issue guidance so that HIPAA/HITECH Act regulated entities do not need to meet duplicative requirements, and, moreover, the bill exempts these entities when operating under the HIPAA/HITECH Act regulations.

The FTC, state attorneys general, and people would be able to enforce this new act through a variety of means. The FTC would treat violations as if they were violations of a regulation barring a deceptive or unfair practice, meaning the agency could levy fines in the first instance of more than $43,000 a violation. The FTC could seek a range of other relief, including injunctions, restitution, disgorgement, and remediation. However, the FTC could go to federal court without having to consult with the Department of Justice, which is a departure from current law and almost all the privacy bills introduced in this Congress. The FTC’s jurisdiction would be broadened to include common carriers and non-profits for purposes of this bill, too. The FTC would also receive normal notice and comment rulemaking authority that is very broad, as the agency would be free to promulgate any regulations it deems necessary to effectuate this act.

State attorneys general would be able to seek all the same relief the FTC can so long as the latter is not already bringing an action. Moreover, any other state officials empowered by state statute to bring similar actions may do so for violations of this act.

People would be allowed to sue in either federal or state court to vindicate violations. The bill states that any violation is to be considered an injury in fact to forestall any court from finding that a violation does not injure the person, meaning her suit cannot proceed. The act would allow people to recover between $100 and $1,000 for any negligent violation and between $500 and $5,000 for any reckless, willful, or intentional violation with no cap on total damages. People may also ask for and receive reasonable attorney’s fees and costs and any equitable or declaratory relief a court sees fit to grant. Moreover, the bill would disallow all pre-lawsuit arbitration agreements or waivers of rights. Much of the right to sue granted to people by the “Public Health Emergency Privacy Act” will be opposed by many Republicans and industry stakeholders.

The enforcement section raises a few questions given that entities covered by the bill include government agencies. Presumably, the FTC cannot enforce this act against government agencies for the very good reason that it does not have jurisdiction over them. However, does the private right of action waive the federal and state governments’ sovereign immunity? This is not clear and may need clarification if the bill is acted upon in either chamber of Congress.

This bill would not preempt state laws, which, if enacted, could subject covered entities to meeting more than one regime for collecting, using, and disclosing emergency health data.

Apart from contact tracing, the “Public Health Emergency Privacy Act” also bars the use of emergency health data to abridge a person’s right to vote, and it requires HHS, the United States Commission on Civil Rights, and the FTC to “prepare and submit to Congress reports that examines the civil rights impact of the collection, use, and disclosure of health information in response to the COVID–19 public health emergency.”

Given the continued impasse over privacy legislation, it is little wonder that the bill unveiled a few weeks ago by four key Senate Republicans takes a broadly similar approach while differing in key aspects. Of course, there is no private right of action, and it expressly preempts state laws to the contrary.

Generally speaking, the structure of the “COVID–19 Consumer Data Protection Act of 2020” (S.3663) tracks with the bills that have been released thus far by the four sponsors: Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS) (see here for analysis of the “Consumer Data Privacy Act of 2019”) and Senators John Thune (R-SD), Jerry Moran (R-KS) (see here for analysis of the “Consumer Data Privacy and Security Act of 2020” (S.3456)), and Marsha Blackburn (R-TN) (see here for analysis of the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116)). In short, people would be provided with notice about what information the app collects, how it is processed, and with whom and under what circumstances this information will be shared. Then a person would be free to make an informed choice about whether or not she wants to consent and allow the app or technology to operate on her smartphone. The FTC and state attorneys general would enforce the new protections.

The scope of the information and entities covered is narrower than the Democratic bill. “Covered data” is “precise geolocation data, proximity data, a persistent identifier, and personal health information” but not aggregated data, business contact information, de-identified data, employee screening data, and publicly available information. Those entities covered by the bill are those already subject to FTC jurisdiction along with common carriers and non-profits, but government agencies are not, again unlike the bill put forth by Democrats. Entities must abide by this bill to the extent they “collect[], process[], or transfer[] such covered data, or determine[] the means and purposes for the collection, processing, or transfer of covered data.”

Another key definition is how an “individual” is determined, for it excludes any person acting in her role as an employee and almost all work-related capacities, making clear employers will not need to comply with respect to those working for them.

“Personal health information” is “information relating to an individual that is

  • genetic information of the individual; or
  • information relating to the diagnosis or treatment of past, present, or future physical, mental health, or disability of the individual; and
  • identifies, or is reasonably linkable to, the individual.

And yet, this term excludes educational information subject to the “Family Educational Rights and Privacy Act of 1974” or information subject to regulations promulgated pursuant to the HIPAA/HITECH Acts.

The “COVID–19 Consumer Data Protection Act of 2020” bars the collection, processing, or transferring of covered data for a covered purpose unless prior notice is provided, a person has provided affirmative consent, and the covered entity has agreed not to collect, process, or transfer such covered data for any other purpose than those detailed in the notice. However, leaving aside the bill’s enumerated allowable purposes for which covered data may be collected with consent, the bill provides a number of exemptions from this general bar. For example, collection, processing, or transfers necessary for a covered entity to comply with another law are permissible apparently in the absence of a person’s consent. Moreover, a covered entity need not obtain consent for operational or administrative tasks not disclosed in the notice provided to people.

The act does spell out “covered purposes” for which covered entities may collect, process, or transfer covered data with consent after notice has been given:

  • Collecting, processing, or transferring the covered data of an individual to track the spread, signs, or symptoms of COVID–19.
  • Collecting, processing, or transferring the covered data of an individual to measure compliance with social distancing guidelines or other requirements related to COVID–19 that are imposed on individuals under a Federal, State, or local government order.
  • Collecting, processing, or transferring the covered data of an individual to conduct contact tracing for COVID–19 cases.

Covered entities would be required to publish a privacy policy detailing for which of the above covered purposes a person’s covered data would be collected, processed, or transferred. This policy must also detail the categories of entities that receive covered data and the covered entity’s data retention and data security policies.

There would be reporting requirements that would affect more covered entities than the Democratic bill’s. Accordingly, any covered entity collecting, processing, or transferring covered data for one of the enumerated covered purposes must issue a public report 30 days after enactment and then every 60 days thereafter.

Among other provisions in the bill, people would be able to revoke consent, a request that covered entities must honor within 14 days. Covered entities must also ensure the covered data are accurate, but this requirement falls a bit short of people being granted the right to correct inaccurate data as they would, instead, be merely able to report inaccuracies. There is no recourse if a covered entity chooses not to heed these reports. Covered entities would need to “delete or de-identify all covered data collected, processed, or transferred for a [covered purpose] when it is no longer being used for such purpose and is no longer necessary to comply with a Federal, State, or local legal obligation, or the establishment, exercise, or defense of a legal claim.” Even though there is the commandment to delete or de-identify, the timing as to when that happens seems somewhat open-ended, as some covered entities could seek out legal obligations to meet in order to keep the data on hand.

Covered entities must also minimize covered data by limiting collection, processing, and transferring to “what is reasonably necessary, proportionate, and limited to carry out [a covered purpose.]” The FTC must draft and release guidelines “recommending best practices for covered entities to minimize the collection, processing, and transfer of covered data.” However, these guidelines would most likely be advisory in nature and would not carry the force of law or regulation, leaving covered entities to disregard some of the document if they choose. Covered entities must “establish, implement, and maintain reasonable administrative, technical, and physical data security policies and practices to protect against risks to the confidentiality, security, and integrity of such data,” a mandate broader than the duty imposed by the Democratic bill.

The FTC and state attorneys general would enforce the new regime. The FTC would be able to seek civil fines for first violations in the same way as the Democrats’ privacy bill. However, unlike the other bill, the Republican bill would nullify any Federal Communications Commission (FCC) jurisdiction to the extent it conflicted with the FTC’s enforcement of the new act. Presumably, this would address jurisdictional issues raised by placing common carriers under FTC jurisdiction when they are usually overseen by the FCC. Even though state laws are preempted, state attorneys general would be able to bring actions to enforce this act at the state level. And, as noted earlier, there is no private right of action.


CCPA 2.0 Backers Submit Ballot Initiative for November Election

A new California ballot initiative is submitted for approval that would revise the CCPA and impose new requirements starting in 2023, if enacted. This new statute could not be amended to weaken it per ballot initiative law in California.

The organization that forced action on the “California Consumer Privacy Act” (CCPA) (AB 375) by getting its proposed measure approved for California’s November 2018 ballot announced that it has a sufficient number of signatures to get its preferred revision of the CCPA on the ballot for this fall’s election. If this effort succeeds, and Californians vote for this measure, it would throw the state’s efforts to establish and enforce the new CCPA into doubt as the new regime would commence in 2023 and there would likely again be a rulemaking process to implement the new statute. It is possible that should this initiative be placed on the November ballot, new life could be breathed into Congressional efforts to pass a national privacy and data protection bill.

The Californians for Consumer Privacy claimed in its press release “it is submitting well over 900,000 signatures to qualify the “California Privacy Rights Act” (CPRA) for the November 2020 ballot.” The Californians for Consumer Privacy have been negotiating extensively with stakeholders on the CCPA’s follow-on bill and actually released a draft bill last fall. Nonetheless, even though some stakeholders were able to secure desired changes in the base text, others were not. This fact, along with the reality that it is next to impossible to weaken or dilute statutes added to the California Code through ballot initiative, suggests a serious campaign to defeat this ballot initiative.

In a summary, the Californians for Consumer Privacy claimed the CPRA would:

1) Make it almost impossible to weaken privacy in California in the future, absent a new initiative allowing such weakening. CPRA would give the California Legislature the power to amend the law via a simple majority, but any amendment would have to be “in furtherance of the purpose and intent” of CPRA, which is to enhance consumer privacy. This would protect privacy in California from a business onslaught to weaken it in Sacramento.

2) Establish a new category of sensitive personal information (SPI), and give consumers the power to restrict the use of it. SPI includes: SSN, DL, Passport, financial account info, precise geolocation, race, ethnicity, religion, union membership, personal communications, genetic data, biometric or health information, information about sex life or sexual orientation.

3) Allow consumers to prohibit businesses from tracking their precise geolocation for most purposes, including advertising, to a location within roughly 250 acres.

a. This would mean no more tracking consumers in rehab, a cancer clinic, at the gym (for how long) at a fast food restaurant (how often), sleeping in a separate part of the house from their partner (how recently), etc., all with the intention of monetizing that most intimate data that makes up people’s lives.

4) Add email +password to the list of items covered by the ‘negligent data breach’ section to help curb ID theft. Your sensitive information (i.e. your health or financial data) would now include your email and password; and if mishandled, you would be able to sue the business for damages, without having to prove an actual financial loss (and let’s face it—who can ever link the data breach from one company, to the ID theft six months later. It’s impossible, and this would change that).

5) Establish the California Privacy Protection Agency to protect privacy for Californians, funded with $10M from the State’s General Fund

a. This funding would equate to roughly the same number of privacy enforcement staff as the FTC has to police the entire country (the FTC has 40 privacy professionals).
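
For a sense of scale on item 3, “roughly 250 acres” works out to about one square kilometer. A quick back-of-the-envelope computation (standard conversion factors; treating the zone as a circle is my own simplification):

    import math

    # 250 acres expressed as a circular zone around a person.
    SQUARE_METERS_PER_ACRE = 4046.86
    area_m2 = 250 * SQUARE_METERS_PER_ACRE   # about 1.01 million square meters
    radius_m = math.sqrt(area_m2 / math.pi)  # radius of a circle of that area
    print(f"area = {area_m2 / 1e6:.2f} sq km, radius = {radius_m:.0f} m")
    # area = 1.01 sq km, radius = 567 m

In other words, a business honoring this limit could place a person only within a zone more than half a kilometer in radius.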

A predecessor bill, “The California Privacy Rights and Enforcement Act of 2020” (CPREA), was released last fall (See 3 October 2019 Technology Update for write up.) At the time, Californians for Consumer Privacy Chair and Founder Alistair Mactaggart explained his reasoning for a second ballot initiative: “First, some of the world’s largest companies have actively and explicitly prioritized weakening the CCPA…[and] [s]econd, technological tools have evolved in ways that exploit a consumer’s data with potentially dangerous consequences.”

As noted, changes to the California Code made by ballot initiative are much harder to change or modify than the legislative route for enacting statutes. Notably, the CPRA would limit future amendments to only those in furtherance of the act, which would rule out any attempts to weaken or dilute the new regime. Consequently, industry and allied stakeholders can be expected to fight this ballot initiative.

As mentioned, stakeholders in Congress may be motivated by this new effort to resolve differences and reach agreement on a bill to govern privacy and protect data at the federal level, sweeping aside state laws like the CPRA. However, a new, stronger law in California may cause key Democrats to dig in and insist on the policy changes Republicans have been reluctant to give ground on, such as a federal private right of action. In such a scenario, it is conceivable Democrats would use their leverage to extract even more changes from Republicans. As it stands, Republicans have moved a fair distance from their original positions on privacy and data protection and may be willing to cede more policy ground.


The BROWSER Act (S. 1116)


My apologies. I thought I had posted this write-up and others on the various privacy and data protection bills. In any event, I’ll be doing some remedial work of a sort in putting these materials up, which is not to say I see any great movement on Congress passing a U.S. privacy and data protection bill.

In this post, we will examine one of the Senate bills sponsored by Senators Marsha Blackburn (R-TN), Tammy Duckworth (D-IL), and Martha McSally (R-AZ): the “Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2019” (BROWSER Act) (S. 1116). S. 1116 would set up an enhanced notice and consent regime for consumers policed by the Federal Trade Commission (FTC), but only for certain classes of private sector entities collecting, sharing, selling, and using consumer information, mainly broadband providers and so-called “edge providers,” that is, entities like Google and Facebook that provide services online. This bill is much closer to the current FTC means of regulating privacy and data security even though the scope of the agency’s jurisdiction to police privacy practices for some types of consumer information would be expanded.

As noted, this bill would cover only “broadband internet access service[s]” and “edge service[s],” which as these terms are defined in the bill would mostly be technology and communications companies. Therefore, this bill would sweep much more narrowly than many of the other privacy bills introduced thus far. Accordingly, S. 1116 defines “broadband internet access service” as “a mass-market retail service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service, but excluding dial-up internet access service.” The bill also provides a definition of “edge service:” “a service provided over the internet—

for which the provider requires the user to subscribe or establish an account in order to use the service;

that the user purchases from the provider of the service without a subscription or account;

by which a program searches for and identifies items in a database that correspond to keywords or characters specified by the user, used especially for finding particular sites on the world wide web; or

by which the user divulges sensitive user information; and

includes a service described in subparagraph (A) that is provided through a software program, including a mobile application.

Clearly, big technology companies like Facebook, Google, Instagram, Amazon, etc. would be classified as “edge providers.” Moreover, the definition of broadband internet access service would clearly include all of the internet service providers like Comcast or AT&T but would also seem to include cell phone service providers like Verizon and T-Mobile.

All covered service providers must “provide a user of the service with clear and conspicuous notice of the privacy policies of the provider with respect to the service.” Additionally, covered service providers must also give users “clear and conspicuous advance notice of any material change to the privacy policies of the provider with respect to the service.”

Whether consumers need to opt-in or opt-out on data use will turn on whether the information is “sensitive” or not. Under S. 1116, “sensitive user information” includes any of the following:

  • Financial information.
  • Health information.
  • Information pertaining to children under the age of 13.
  • Social Security number.
  • Precise geolocation information.
  • Content of communications.
  • Web browsing history, history of usage of a software program (including a mobile application), and the functional equivalents of either.

Among the information that would be deemed non-sensitive under the bill are metadata (aka call detail records) from usage of a phone, such as the addressee of a communication and the time, one’s order history from a site like Amazon, matters relating to employment, and other categories of information not enumerated above. Additionally, the bill deems “precise geolocation information” sensitive information, suggesting “geolocation information” that is less than precise might be non-sensitive. So, perhaps a trip to a mall would not be considered “precise,” but the stores a customer visits might be?

Covered service providers would need to “obtain opt-in approval from a user to use, disclose, or permit access to the sensitive user information of the user.” However, what constitutes the “approval” necessary to satisfy this requirement is not spelled out in the bill. Conversely, for non-sensitive user information, the provider of covered services must only offer consumers the option to opt out of its use, disclosure, and access; absent an opt-out, approval is effectively presumed. The sensitivity classification therefore does all the work, as the sketch below illustrates.
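
Here is a minimal sketch of that classification logic (the category names paraphrase the bill’s list and are illustrative only):

    # Hypothetical restatement of the BROWSER Act’s consent logic:
    # sensitive categories require opt-in approval; all other user
    # information is available on an opt-out basis.
    SENSITIVE_CATEGORIES = {
        "financial", "health", "children_under_13", "social_security_number",
        "precise_geolocation", "communications_content", "browsing_history",
    }

    def required_consent(category: str) -> str:
        return "opt-in" if category in SENSITIVE_CATEGORIES else "opt-out"

    print(required_consent("precise_geolocation"))  # opt-in
    print(required_consent("order_history"))        # opt-out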

As is usually the case, there are some exceptions to this seemingly general rule against using, collecting, sharing, or selling sensitive user information. Notably, in the following situations, covered service providers need not obtain opt-in approval from consumers:

(1) In providing the covered service from which the information is derived, or in providing services necessary to, or used in, the provision of the service.

(2) To initiate, render, bill for, and collect for the covered service.

(3) To protect the rights or property of the provider, or to protect users of the covered service and other service providers from fraudulent, abusive, or unlawful use of the service.

(4) To provide location information or non-sensitive user information—

(A) to a public safety answering point, emergency medical service provider or emergency dispatch provider, public safety, fire service, or law enforcement official, or hospital emergency or trauma care facility, in order to respond to the request of the user for emergency services;

(B) to inform the legal guardian of the user, or members of the immediate family of the user, of the location of the user in an emergency situation that involves the risk of death or serious physical harm; or

(C) to providers of information or database management services solely for purposes of assisting in the delivery of emergency services in response to an emergency.

(5) As otherwise required or authorized by law.

Covered service providers would not be able to require consumers to waive their privacy rights in exchange for use of a service. The bill stipulates that “[a] provider of a covered service may not—

(1) condition, or effectively condition, provision of the service on agreement by a user to waive privacy rights guaranteed by law or regulation, including this Act; or

(2) terminate the service or otherwise refuse to provide the service as a direct or indirect consequence of the refusal of a user to waive any privacy rights described in paragraph (1).”

The FTC would enforce this new privacy scheme under its existing Section 5 powers to police unfair and deceptive practices and, crucially, not as if a violation were a violation of an existing FTC regulation against unfair and deceptive practices. When the FTC punishes a violation of such a regulation, it may seek civil fines in the first instance; this is in contrast to the FTC’s general powers to punish unfair and deceptive practices with respect to data security and privacy violations, which are limited to monetary remedies in the form of equitable relief such as disgorgement and restitution. The BROWSER Act would thus be at odds with most other privacy bills, which contain language such as “[a] violation of this Act or a regulation promulgated under this Act shall be treated as a violation of a rule under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)) regarding unfair or deceptive acts or practices.”

Again unlike other bills, the BROWSER Act does not provide the FTC with the authority to promulgate regulations under the Administrative Procedure Act (APA) process, and to the extent the agency would be able to write regulations to implement the bill, it would be under the much more lengthy and involved Magnuson-Moss procedures that have effectively halted the FTC’s regulatory activity (see “It’s Time to Remove the ‘Mossified’ Procedures for FTC Rulemaking” for a summary of these procedures). Therefore, the FTC would essentially extend to privacy regulation its current practice of penalizing companies for not maintaining “reasonable” data security standards on a case-by-case basis without providing any bright lines to assure companies their practices pass muster.

The FTC’s jurisdiction would be expanded, however, to police the privacy practices under the bill of broadband providers that would otherwise be subject to the jurisdiction and enforcement powers of the Federal Communications Commission (FCC).

The bill would preempt state privacy laws. To wit, “[n]o State or political subdivision of a State shall, with respect to a provider of a covered service subject to this Act, adopt, maintain, enforce, or impose or continue in effect any law, rule, regulation, duty, requirement, standard, or other provision having the force and effect of law relating to or with respect to the privacy of user information.” Of course, preemption of state laws is a non-starter for many Democrats but a sine qua non for many Republicans, leaving this as an area of ongoing dispute.

On another issue that has split Democrats and Republicans in past data security debates, the BROWSER Act would not provide a role for state attorneys general to enforce the new regulatory regime. However, Republicans may be willing to give on this issue provided consumers have no private right of action, and the BROWSER Act would not allow consumers to sue those providing covered services for violating the bill.

© Michael Kans and Michael Kans Blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans and Michael Kans Blog with appropriate and specific direction to the original content.