ACCC Charges Google With Violations Of Consumer Laws Over Android Location Settings

The Australian Competition and Consumer Commission (ACCC) announced a legal action against Google “alleging they engaged in misleading conduct and made false or misleading representations to consumers about the personal location data Google collects, keeps and uses”, according to the agency’s press release. In its initial filing, the ACCC claims that Google misled and deceived the public in contravention of the Australian Consumer Law and that Android users were harmed because those who switched off Location History were unaware that their location information was still being collected and used by Google, as it was not readily apparent that Web & App Activity also needed to be switched off. ACCC Chair Rod Sims explained that “[w]e are taking court action against Google because we allege that as a result of these on-screen representations, Google has collected, kept and used highly sensitive and valuable personal information about consumers’ location without them making an informed choice.” Moreover, it is being reported in the Australian press that the ACCC is preparing a separate action against Google over allegedly anti-competitive conduct directed at an Australian competitor, Unlockd, which subsequently went into administration last year.

In its press release, the ACCC claimed

[T]hat from at least January 2017, Google breached the Australian Consumer Law when it made on-screen representations on Android mobile phones and tablets that the ACCC alleges misled consumers about the location data Google collected or used when certain Google Account settings were enabled or disabled. The representations were made to consumers setting up a Google Account on their Android mobile phones and tablets, and to consumers who later accessed their Google Account settings through their Android mobile phones and tablets.

The ACCC stated that its “case regarding the collection of location data focuses on two Google Account settings: one labelled ‘Location History’; and another labelled ‘Web & App Activity’.” The agency alleged “that from January 2017 until late 2018, it was misleading for Google to not properly disclose to consumers that both settings had to be switched off if consumers didn’t want Google to collect, keep and use their location data.” The ACCC claimed that “when consumers set up a Google Account on their Android phone or tablet, consumers would have incorrectly believed, based on Google’s conduct, that ‘Location History’ was the only Google Account setting that affected whether Google collected, kept or used data about their location.” The agency further added that “if consumers later accessed their Google Account settings on their Android device, Google did not inform them that by leaving ‘Web & App Activity’ switched on, Google would continue to collect location data.”
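To make the alleged interplay between the two settings concrete, the sketch below models the behavior the ACCC describes: location data continues to be collected unless both settings are switched off. The flag names and logic are purely illustrative and are not drawn from Google’s actual implementation.

```kotlin
// Hypothetical model of the conduct alleged by the ACCC, not Google's code.
data class AccountSettings(
    val locationHistoryOn: Boolean,
    val webAndAppActivityOn: Boolean
)

// Location data is collected if EITHER setting remains on,
// i.e. a consumer must switch off BOTH to stop collection.
fun collectsLocationData(settings: AccountSettings): Boolean =
    settings.locationHistoryOn || settings.webAndAppActivityOn

fun main() {
    // The scenario at the heart of the case: Location History is off,
    // but Web & App Activity was left on.
    val consumer = AccountSettings(locationHistoryOn = false, webAndAppActivityOn = true)
    println(collectsLocationData(consumer)) // prints "true"
}
```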

The ACCC further alleged “that from around mid-2018 until late 2018, Google represented to consumers that the only way they could prevent Google from collecting, keeping and using their location data was to stop using certain Google services, including Google Search and Google Maps.” The ACCC noted, however, that “this could be achieved by switching off both ‘Location History’ and ‘Web & App Activity’.”

In terms of possible liability, because much of Google’s alleged conduct occurred before a rewrite of the Australian Consumer Law that allows for higher fines, including up to 10% of annual turnover and/or A$10 million per violation, any fine could be relatively small (on the order of A$1.1 million per violation). This is not the first penalty Google has faced this year over its privacy practices. In January, France’s Commission nationale de l’informatique et des libertés (CNIL), the French data protection authority, levied a €50 million fine under the General Data Protection Regulation (GDPR) “for lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.” In September, the Federal Trade Commission (FTC) and New York Attorney General Letitia James announced a $170 million settlement with Google and its subsidiary YouTube regarding alleged violations of the “Children’s Online Privacy Protection Act of 1998” (COPPA) and Section 5 of the FTC Act. To date, that is the largest settlement to resolve alleged COPPA violations.

Earlier this year, the ACCC released the final report of its “Digital Platforms Inquiry,” which “proposes specific recommendations aimed at addressing some of the actual and potential negative impacts of digital platforms in the media and advertising markets, and also more broadly on consumers.” Not surprisingly, the report focuses on Facebook and Google and not Amazon, which the ACCC remarked is still relatively small in Australia.

Moreover, the ACCC explained

Many digital platforms increasingly collect a large amount and variety of user data. The data collected often extends far beyond the data users actively provide when using the digital platform’s services. Digital platforms may passively collect data from users, including from online browsing behaviour across the internet, IP addresses, device specifications and location and movement data. Once collected, digital platforms often have broad discretions regarding how user data is used and also disclosed to third parties.

The ACCC articulated its “view that consumers’ ability to make informed choices is affected by:

The information asymmetry between digital platforms and consumers. The ACCC found that consumers are generally not aware of the extent of data that is collected nor how it is collected, used and shared by digital platforms. This is influenced by the length, complexity and ambiguity of online terms of service and privacy policies. Digital platforms also tend to understate to consumers the extent of their data collection practices while overstating the level of consumer control over their personal user data.

FTC Acts Against Stalking App Developer

The Federal Trade Commission (FTC) announced its first action involving smartphone applications that may be placed on a user’s device without their knowledge or consent (aka stalking apps). The FTC accused the developer of these stalking apps of violating both the Federal Trade Commission Act (FTC Act) and the Children’s Online Privacy Protection Rule (COPPA Rule). In its press release, the FTC claimed these apps “allowed purchasers to monitor the mobile devices on which they were installed, without the knowledge or permission of the device’s user.”

Retina-X Studios, LLC agreed to a consent order that permanently restrains and enjoins the company “from, or assisting others in, promoting, selling, or distributing a Monitoring Product or Service unless Respondents” meet a list of requirements, including forswearing the circumvention of a mobile device’s operating system for installation (aka jailbreaking or rooting), eliciting affirmative agreement that users of any such app will employ it only in lawful, enumerated ways, and ensuring that whenever the app is running, there is a clear and conspicuous icon on the device alerting the user that the app has been installed and is functional.
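For illustration only, the icon requirement might translate into something like the following Android sketch, in which a monitoring service runs in the foreground with a persistent notification; the class name and strings are hypothetical and are not drawn from Retina-X’s products.

```kotlin
import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.Service
import android.content.Intent
import android.os.IBinder

// Hypothetical sketch of the consent order's icon requirement:
// while monitoring is active, a conspicuous, non-dismissable
// notification tells the device's user the software is running.
class MonitorService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val channel = NotificationChannel(
            "monitoring", "Monitoring active", NotificationManager.IMPORTANCE_DEFAULT
        )
        getSystemService(NotificationManager::class.java).createNotificationChannel(channel)

        val notification = Notification.Builder(this, "monitoring")
            .setSmallIcon(android.R.drawable.ic_menu_info_details)
            .setContentTitle("Monitoring software is installed and running")
            .setOngoing(true) // cannot be swiped away while the service runs
            .build()

        // Running as a foreground service keeps the notification visible
        // for as long as monitoring continues.
        startForeground(1, notification)
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null
}
```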

As in many such settlements, the FTC elicited the app developer’s agreement to cease certain past practices and to adopt practices designed both to avoid the offending conduct and to lead to better data security. Failure to do so would allow the FTC to go back to the court and request an order to show cause against the entity, putting it in jeopardy of civil penalties of more than $42,000 per violation.

Of course, the FTC’s power to order entities to take certain broadly gauged actions, such as instituting a comprehensive data security program, has been called into question in LabMD v. FTC. In that 2018 case, the U.S. Court of Appeals for the Eleventh Circuit ruled against the FTC and held that the agency may not direct entities to take future, ill-defined actions. Rather, in the appeals court’s view, the FTC’s underlying statute allows the agency only to spell out the conduct that entities may not engage in, whether in a cease and desist order issued by the FTC or a consent decree issued by a U.S. District Court. Of course, this is only the view of one circuit, and the other circuits are free to continue operating under the older understanding that the FTC may indeed direct entities to, for example and most relevantly in this case, implement a comprehensive data security regime.

In LabMD, the FTC Order that the Eleventh Circuit found faulty required:

…that the respondent shall, no later than the date this order becomes final and effective, establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers by respondent or by any corporation, subsidiary, division, website, or other device or affiliate owned or controlled by respondent. Such program, the content and implementation of which must be fully documented in writing, shall contain administrative, technical, and physical safeguards appropriate to respondent’s size and complexity, the nature and scope of respondent’s activities, and the sensitivity of the personal information collected from or about consumers, including…

A. the designation of an employee or employees to coordinate and be accountable for the information security program;

B. the identification of material internal and external risks to the security, confidentiality, and integrity of personal information that could result in the unauthorized disclosure, misuse, loss, alteration, destruction, or other compromise of such information, and assessment of the sufficiency of any safeguards in place to control these risks. At a minimum, this risk assessment should include consideration of risks in each area of relevant operation, including, but not limited to: (1) employee training and management; (2) information systems, including network and software design, information processing, storage, transmission, and disposal; and (3) prevention, detection, and response to attacks, intrusions, or other systems failures;

C. the design and implementation of reasonable safeguards to control the risks identified through risk assessment, and regular testing or monitoring of the effectiveness of the safeguards’ key controls, systems, and procedures;

D. the development and use of reasonable steps to select and retain service providers capable of appropriately safeguarding personal information they receive from respondent, and requiring service providers by contract to implement and maintain appropriate safeguards; and

E. the evaluation and adjustment of respondent’s information security program in light of the results of the testing and monitoring required by Subpart C, any material changes to respondent’s operations or business arrangements, or any other circumstances that respondent knows or has reason to know may have a material impact on the effectiveness of its information security program.

However, in the instant case, the FTC is far more prescriptive than it was in LabMD, directing Retina-X Studios to

Design, implement, maintain, and document safeguards that control for the internal and external risks to the security, confidentiality, or integrity of Personal Information identified in response to sub-Provision VI.D. Each safeguard shall be based on the volume and sensitivity of the Personal Information that is at risk, and the likelihood that the risk could be realized and result in the unauthorized access, collection, use, alteration, destruction, or disclosure of the Personal Information. Respondents’ safeguards shall also include:

1. Technical measures to monitor all of Respondents’ networks and all systems and assets within those networks to identify data security events, including unauthorized attempts to exfiltrate Personal Information from those networks;

2. Technical measures to secure Respondents’ web applications and mobile applications and address well-known and reasonably foreseeable vulnerabilities, such as cross-site scripting, structured query language injection, and other risks identified by Respondents through risk assessments and/or penetration testing;

3. Data access controls for all databases storing Personal Information, including by, at a minimum, (a) requiring authentication to access them, and (b) limiting employee or service provider access to what is needed to perform that employee’s job function;

4. Encryption of all Personal Information on Respondents’ computer networks; and

5. Establishing and enforcing policies and procedures to ensure that all service providers with access to Respondents’ network or access to Personal Information are adhering to Respondents’ Information Security Program.
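As a rough illustration of what safeguard 4 above could mean in practice, the sketch below encrypts a single record of Personal Information with AES-256-GCM using the standard javax.crypto API; the throwaway key and the record itself are placeholders, and a real program would obtain its key from a key management system.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun main() {
    // Placeholder record standing in for a piece of Personal Information.
    val plaintext = "user-location-record".toByteArray()

    // Throwaway 256-bit AES key; in practice this would come from a
    // key management service rather than being generated ad hoc.
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    // A fresh 12-byte IV per record, stored alongside the ciphertext.
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }

    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    val ciphertext = cipher.doFinal(plaintext)

    // Decryption with the same key and IV recovers the original record.
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    check(cipher.doFinal(ciphertext).contentEquals(plaintext))
}
```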

The FTC continues by requiring:

F. Assess, at least once every twelve (12) months and promptly following a Covered Incident, the sufficiency of any safeguards in place to address the risks to the security, confidentiality, or integrity of Personal Information, and modify the Information Security Program based on the results.

G. Test and monitor the effectiveness of the safeguards at least once every twelve months and promptly following a Covered Incident, and modify the Information Security Program based on the results. Such testing shall include vulnerability testing of each of Respondents’ network(s) once every four (4) months and promptly after any Covered Incident, and penetration testing of each Covered Business’s network(s) at least once every twelve (12) months and promptly after any Covered Incident;

H. Select and retain service providers capable of safeguarding Personal Information they receive from each Covered Business, and contractually require service providers to implement and maintain safeguards for Personal Information; and

I. Evaluate and adjust the Information Security Program in light of any changes to Respondents’ operations or business arrangements, a Covered Incident, or any other circumstances that Respondents know or have reason to know may have an impact on the effectiveness of the Information Security Program. At a minimum, each Covered Business must evaluate the Information Security Program at least once every twelve (12) months and modify the Information Security Program based on the results.

Is it possible the FTC is seeking to forestall future challenges based on LabMD through the use of more descriptive, prescriptive requirements for entities to establish and run better data security programs? It absolutely could be. Some have suggested that the agency telegraphed its current thinking on what constitutes proper data security in draft regulations issued earlier this year, which are more detailed than both the current regulations and the numerous settlements the FTC has entered into.