Pending Legislation In U.S. Congress, Part IV

There is an even chance that Congress further narrows the Section 230 liability shield given criticism of how tech companies have wielded this language.

This year, Congress increased its focus on Section 230 of the Communications Act of 1934, which gives companies like Facebook, Twitter, Google, and others blanket immunity from litigation based on the content others post. Additionally, these platforms cannot be sued for “good faith” actions to take down or restrict material considered “to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many Republicans claim both that these platforms are biased against conservative content (a claim not borne out by the evidence we have) and that they are not doing enough to find and remove material that exploits children. Many Democrats argue the platforms are not doing enough to remove right-wing hate speech and agree, in some part, regarding material that exploits children.

Working in the background of any possible legislation to narrow Section 230 is an executive order issued by the President directing two agencies to investigate “online censorship,” even though the Supreme Court of the United States has long held that a person or entity does not have First Amendment rights vis-à-vis private entities. Finally, the debate over encryption is also edging its way into Section 230 by a variety of means, as the Trump Administration, especially the United States Department of Justice (DOJ), has been pressuring tech companies to address end-to-end encryption on devices and apps. One means of pressure is threatening to remove Section 230 liability protection to garner compliance on encryption issues.

In late July, the Senate Judiciary Committee met, amended, and reported out the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020” (EARN IT Act of 2020) (S.3398), a bill that would change 47 USC 230 by narrowing the liability shield and potentially exposing online platforms to criminal and civil actions for having child sexual abuse materials on their platforms. The bill as introduced in March was changed significantly when a manager’s amendment was released and then changed further at the markup. The Committee reported out the bill unanimously, sending it to the full Senate, perhaps signaling the breadth of support for the legislation. It is possible this could come before the full Senate this year. If passed, the EARN IT Act of 2020 would be the second piece of legislation to change Section 230 in the last two years, following enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (P.L. 115-164). There is, at present, no House companion bill.

In advance of the markup, two of the sponsors, Judiciary Committee Chair Lindsey Graham (R-SC) and Senator Richard Blumenthal (D-CT) released a manager’s amendment to the EARN IT Act. The bill would still establish a National Commission on Online Child Sexual Exploitation Prevention (Commission) that would design and recommend voluntary “best practices” applicable to technology companies such as Google, Facebook, and many others to address “the online sexual exploitation of children.”

Instead of creating a process under which the DOJ, the Department of Homeland Security (DHS), and the Federal Trade Commission (FTC) would accept or reject these standards, as in the original bill, the DOJ would merely have to publish them in the Federal Register. Likewise, the language establishing a fast-track process for Congress to codify these best practices has been stricken, as have the provisions requiring certain technology companies to certify compliance with the best practices.

Moreover, the revised bill lacks the safe harbor against lawsuits based on having “child sexual abuse material” on a platform in exchange for following the Commission’s best practices. Instead of encouraging technology companies to use the best practices in order to continue enjoying liability protection, the language creating this safe harbor in the original bill has been stricken. Now the manager’s amendment strikes liability protection under 47 USC 230 for these materials except when a platform is acting as a Good Samaritan in removing them. Consequently, should a Facebook or Google fail to find and take down these materials in an expeditious fashion, it would face civil and criminal liability under federal and state law.

However, the Committee adopted an amendment offered by Senator Patrick Leahy (D-VT) that would change 47 USC 230 by making clear that the use of end-to-end encryption does not, by itself, make providers liable under child sexual exploitation laws for abuse material. Specifically, no liability would attach because the provider

  • utilizes full end-to-end encrypted messaging services, device encryption, or other encryption services;
  • does not possess the information necessary to decrypt a communication; or
  • fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.

Moreover, in advance of the first hearing to mark up the EARN IT Act of 2020, key Republican stakeholders released a bill that would require device manufacturers, app developers, and online platforms to decrypt data if a federal court issues a warrant based on probable cause. Critics of the EARN IT Act of 2020 claimed the bill would force big technology companies to choose between weakening encryption and losing their liability protection under Section 230. They likely see this most recent bill as another shot across the bow of technology companies, many of which continue to support and use end-to-end encryption even though the United States government and close allies are pressuring them on the issue. However, unlike the EARN IT Act of 2020, this latest bill does not have any Democratic cosponsors.

Graham and Senators Tom Cotton (R-AR) and Marsha Blackburn (R-TN) introduced the “Lawful Access to Encrypted Data Act” (S.4051) that would require the manufacturers of devices such as smartphones, app makers, and platforms to decrypt a user’s data if a federal court issues a warrant to search a device, app, or operating system.

The assistance covered entities must provide includes:

  • isolating the information authorized to be searched;
  • decrypting or decoding information on the electronic device or remotely stored electronic information that is authorized to be searched, or otherwise providing such information in an intelligible format, unless the independent actions of an unaffiliated entity make it technically impossible to do so; and
  • providing technical support as necessary to ensure effective execution of the warrant for the electronic devices particularly described by the warrant.


The DOJ would be able to issue “assistance capability directives” that would require the recipient to prepare or maintain the ability to aid a law enforcement agency that has obtained a warrant and needs technical assistance to access data. Recipients of such orders can file a petition in federal court in Washington, DC to modify or set aside the order on only three grounds: it is illegal, it does not meet the requirements of the new federal regulatory structure, or “it is technically impossible for the person to make any change to the way the hardware, software, or other property of the person behaves in order to comply with the directive.” If a court rules against the recipient of such an order, it must comply, and if any recipient of such an order does not comply, a court may find it in contempt of court, allowing for a range of punishments until the contempt is cured.

The bill also amends the “Foreign Intelligence Surveillance Act” (FISA) to require the same decryption and assistance in FISA activities, which are mostly surveillance of people outside the United States. The bill would focus on those device manufacturers that sell more than 1 million devices and those platforms and apps with more than 1 million users, meaning companies like Apple, Facebook, Google, and others would obviously be covered. The bill also tasks the DOJ with conducting a prize competition “to incentivize and encourage research and innovation into solutions providing law enforcement access to encrypted data pursuant to legal process.”

In response to the EARN IT Act, a bicameral group of Democrats released legislation to dramatically increase funding for the United States government to combat the online exploitation of children, which has served as an alternate proposal to a bill critics claim would force technology companies to give way on encryption under pain of losing the Section 230 liability shield. The “Invest in Child Safety Act” (H.R.6752/S.3629) would require $5 billion in funding outside the appropriations process to bolster current efforts to fight online exploitation and abuse. This bill was introduced roughly two months after the EARN IT Act of 2020, and in their press release, Senators Ron Wyden (D-OR), Kirsten Gillibrand (D-NY), Bob Casey (D-PA), and Sherrod Brown (D-OH) stated

The Invest in Child Safety Act would direct $5 billion in mandatory funding to investigate and target the pedophiles and abusers who create and share child sexual abuse material online. And it would create a new White House office to coordinate efforts across federal agencies, after DOJ refused to comply with a 2008 law requiring coordination and reporting of those efforts. It also directs substantial new funding for community-based efforts to prevent children from becoming victims in the first place.  

Representatives Anna Eshoo (D-CA), Kathy Castor (D-FL), Ann M. Kuster (D-NH), Eleanor Holmes Norton (D-DC), Alcee L. Hastings (D-FL), and Deb Haaland (D-NM) introduced the companion bill in the House.

The bill would establish in the Executive Office of the President an Office to Enforce and Protect Against Child Sexual Exploitation, headed by a Senate-confirmed Director who would coordinate efforts across the U.S. government to fight child exploitation. Within six months of the appointment of the first Director, he or she would need to submit to Congress “an enforcement and protection strategy” and thereafter send an annual report as well. The DOJ and the Federal Bureau of Investigation would receive additional funding to bolster and improve their efforts in this field.

In June, Senator Josh Hawley (R-MO) introduced the “Limiting Section 230 Immunity to Good Samaritans Act” (S.3983), which is cosponsored by Senators Marco Rubio (R-FL), Kelly Loeffler (R-GA), Mike Braun (R-IN), and Tom Cotton (R-AR). The bill would amend the liability shield in 47 U.S.C. 230 to require large social media platforms like Facebook and Twitter to update their terms of service so that they must operate in “good faith” or face litigation and possible monetary damages for violating these new terms of service. Hawley’s bill would add a definition of “good faith” to the statute, which echoes one of the recommendations made by the DOJ. In relevant part, the new terms of service would bar so-called “edge providers” from “intentional[] selective enforcement of the terms of service of the interactive computer service, including the intentionally selective enforcement of policies of the provider relating to restricting access to or availability of material.” If such “selective enforcement” were to occur, then edge providers could be sued, but the plaintiffs would have to show the edge provider actually knew it was breaching the terms of service by selectively enforcing its platform rules.

The focus on alleged “selective enforcement” arises from claims, leveled by many Republican stakeholders, that conservative material posted on Twitter and Facebook is being targeted in ways that liberal material is not, including being taken down. Now they are proposing to give affected people the right to sue; however, it is not clear whether these Republicans have changed their minds on allowing private rights of action against technology companies as a means of enforcing laws. To date, many Republicans have opposed private rights of action for data breaches or violations of privacy.

In early July, Senator Brian Schatz (D-HI) and Senate Majority Whip John Thune (R-SD) introduced the “Platform Accountability and Consumer Transparency (PACT) Act” (S.4066) that would reform Section 230. Schatz and Thune are offering their bill as an alternative to the EARN IT Act of 2020. Schatz and Thune serve as the Ranking Member and Chair of the Communications, Technology, Innovation and the Internet Subcommittee of the Senate Commerce, Science, and Transportation Committee and are thus key stakeholders on any legislation changing Section 230.

Under the PACT Act, so-called “interactive computer services” (the term of art used in Section 230) would need to draft and publish “acceptable use polic[ies]” that inform users of what content may be posted, explain the process by which the online platform reviews content to make sure it complies with the policy, and spell out the process people may use to report potentially policy-violating content, illegal content, and illegal activity. The PACT Act defines each of the three terms:

  • ‘‘illegal activity’’ means activity conducted by an information content provider that has been determined by a Federal or State court to violate Federal criminal or civil law.
  • ‘‘illegal content’’ means information provided by an information content provider that has been determined by a Federal or State court to violate—
    • Federal criminal or civil law; or
    • State defamation law.
  • “potentially policy-violating content’’ means content that may violate the acceptable use policy of the provider of an interactive computer service.

The first two definitions will pose problems in practice, for if one state court determines content is illegal but another does not, how must an online platform respond to comply with the reformed Section 230? The same would be true of illegal activity. Consequently, online platforms may be forced to monitor content state by state, hardly a practical system and one that would favor incumbents while erecting a barrier to entry for new entrants. And, based on differing state or federal court rulings, are online platforms then to allow or take down content on the basis of where the person posting it lives?

In any event, after receiving notice, online platforms would have 24 hours to remove illegal content or activity and two weeks to review notices of potentially policy-violating content and determine whether the content actually violates the platform’s policies. In the latter case, if the platform decides to take down content because it violated its policies based on a user complaint, it would be required to notify the person who posted the content and allow them an appeal. There would be a different standard for small business providers, requiring them to act on the three categories of information within a reasonable period of time after receiving notice. And, telecommunications and cloud networks and other entities would be exempted from this reform to Section 230 altogether.

However, Section 230’s liability shield would be narrowed with respect to illegal content and activity. If a provider knows of illegal content or activity but does not remove it within 24 hours, then it would lose the shield from lawsuits. So, if Facebook fails to take down a posting urging someone to assassinate the President, a federal crime, within 24 hours of being notified it was posted, it could be sued. However, Facebook and similar companies would not have an affirmative duty to locate and remove illegal content and activity and could continue to enjoy Section 230 liability protection if either type of content is on their platforms, so long as no notice is provided. And yet, Section 230 would be narrowed overall, as the bill makes clear that all federal criminal and civil laws and regulations are outside the liability protection; currently, this carve-out pertains only to federal criminal statutes. And, state attorneys general would be able to enforce federal civil laws if the lawsuit could also be brought on the basis of a civil law in the attorney general’s state.

Interactive computer services must publish a quarterly transparency report including the total number of instances in which illegal content, illegal activity, or potentially policy-violating content was flagged and the number of times action was taken, among other data. Additionally, they would need to identify the number of times they demonetized or deprioritized content. These reports would be publicly available.

The FTC would be explicitly empowered to act under the bill. Any violations of the process by which an online platform reviews notices of potentially policy-violating content, handles appeals, and publishes transparency reports would be treated as violations of an FTC regulation defining an unfair or deceptive act or practice, allowing the agency to seek civil fines for first violations. But this authority is circumscribed by a provision barring the FTC from reviewing “any action or decision by a provider of an interactive computer service related to the application of the acceptable use policy of the provider.” This limitation would seem to allow an online platform to remove content on its own initiative if it violates the platform’s policies without the FTC being able to review such decisions, which would provide ample incentive for Facebook, Twitter, Reddit, and others to police their platforms so that they could avoid FTC action. The FTC’s jurisdiction would also be widened to include non-profits, which would be subject to the agency’s scrutiny regarding how they handle removing content based on a user complaint in the same way for-profit entities would be.

The National Institute of Standards and Technology (NIST) would need to develop “a voluntary framework, with input from relevant experts, that consists of non-binding standards, guidelines, and best practices to manage risk and shared challenges related to, for the purposes of this Act, good faith moderation practices by interactive computer service providers.”

This week, Senate Commerce, Science, and Transportation Committee Chair Roger Wicker (R-MS), Graham, and Blackburn introduced the latest Section 230 bill, the “Online Freedom and Viewpoint Diversity Act” (S.4534) that would essentially remove liability protection for social media platforms and others that choose to correct, label, or remove material, mainly political material. A platform’s discretion would be severely limited as to when and under what circumstances it could take down content. This bill would seem tailored to conservatives who believe Twitter, Facebook, etc. are biased against them and their viewpoints.

In May, after Twitter fact-checked two of his tweets making false claims about mail voting in California, which the state expanded in response to the COVID-19 pandemic, President Donald Trump signed a long-rumored executive order (EO) seen by many as a means of cowing social media platforms: the “Executive Order on Preventing Online Censorship.” This EO directed federal agencies to act, and one has done so by asking the Federal Communications Commission (FCC) to start a rulemaking, which has since been initiated. However, there is at least one lawsuit pending to enjoin action on the EO that could conceivably block implementation.

In the EO, the President claimed

Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.  When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

Consequently, the EO directs that “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.”

With respect to specific actions, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) was directed to file a petition for rulemaking with the FCC to clarify the interplay between clauses of Section 230, notably whether the liability shield that protects companies like Twitter and Facebook for content posted on an online platform also extends to so-called “editorial decisions,” presumably actions like Twitter’s fact-checking of Trump regarding mail balloting. The NTIA was also to ask the FCC to better define the conditions under which an online platform’s takedowns of content are not made in good faith, such as actions that are “deceptive, pretextual, or inconsistent with a provider’s terms of service; or taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard.” The NTIA was also directed to ask the FCC to promulgate any other regulations necessary to effectuate the EO.

The FTC must consider whether online platforms are violating Section 5 of the FTC Act barring unfair or deceptive practices, which “may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.” As of yet, the FTC has not done so, and in remarks before Congress, FTC Chair Joseph Simons has opined that doing so is outside the scope of the agency’s mission. Consequently, there has been talk in Washington that the Trump Administration is looking for a new FTC Chair.

Following the directive in the EO, on 27 July, the NTIA filed a petition with the FCC, asking the agency to start a rulemaking to clarify alleged ambiguities in 47 USC 230 regarding the limits of the liability shield for the content others post online versus the liability protection for “good faith” moderation by the platform itself.

The NTIA asserted “[t]he FCC should use its authorities to clarify ambiguities in section 230 so as to make its interpretation appropriate to the current internet marketplace and provide clearer guidance to courts, platforms, and users…[and] urges the FCC to promulgate rules addressing the following points:

  1. Clarify the relationship between subsections (c)(1) and (c)(2), lest they be read and applied in a manner that renders (c)(2) superfluous as some courts appear to be doing.
  2. Specify that Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service.
  3. Provide clearer guidance to courts, platforms, and users, on what content falls within (c)(2) immunity, particularly section 230(c)(2)’s “otherwise objectionable” language and its requirement that all removals be done in “good faith.”
  4. Specify that “responsible, in whole or in part, for the creation or development of information” in the definition of “information content provider,” 47 U.S.C. § 230(f)(3), includes editorial decisions that modify or alter content, including but not limited to substantively contributing to, commenting upon, editorializing about, or presenting with a discernible viewpoint content provided by another information content provider.
  5. Mandate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers.

NTIA argued that

  • Section 230(c)(1) has a specific focus: it prohibits “treating” “interactive computer services,” i.e., internet platforms, such as Twitter or Facebook, as “publishers.” But, this provision only concerns “information” provided by third parties, i.e., “another internet content provider,” and does not cover a platform’s own content or editorial decisions.
  • Section (c)(2) also has a specific focus: it eliminates liability for interactive computer services that act in good faith “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

In early August, the FCC asked for comments on the NTIA petition, and comments were due by 2 September. Over 2,500 comments have been filed, and a cursory search turned up numerous form-letter comments drafted by a conservative organization that were then submitted by its members and followers.

Finally, the House’s “FY 2021 Financial Services and General Government Appropriations Act” (H.R. 7668) has a provision that would bar either the FTC or FCC from taking certain actions related to Executive Order 13925, “Preventing Online Censorship.” It is very unlikely Senate Republicans, some of whom have publicly supported this Executive Order, will allow this language into the final bill funding the agencies.

There has been other executive branch action on Section 230. In mid-June, the DOJ released “a set of reform proposals to update the outdated immunity for online platforms under Section 230,” according to a department press release. While these proposals came two weeks after President Donald Trump’s “Executive Order on Preventing Online Censorship,” signed after Twitter fact-checked two tweets that were not true (see here for more detail and analysis), the DOJ launched its review of 47 U.S.C. 230 in February 2020.

The DOJ explained “[t]he Section 230 reforms that the Department of Justice identified generally fall into four categories:

1) Incentivizing Online Platforms to Address Illicit Content. The first category of potential reforms is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation.

  1. Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230’s immunity provision—“Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material”—makes clear that Section 230 immunity is meant to incentivize and protect responsible online platforms. It therefore makes little sense to immunize from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.
  2. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.
  3. Case-Specific Carve-Outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.

2) Clarifying Federal Government Civil Enforcement Capabilities. A second category of reform would increase the ability of the government to protect citizens from illicit online conduct and activity by making clear that the immunity provided by Section 230 does not apply to civil enforcement by the federal government, which is an important complement to criminal prosecution.

3) Promoting Competition. A third reform proposal is to clarify that federal antitrust claims are not covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

4) Promoting Open Discourse and Greater Transparency. A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.

  1. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230 (c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform’s ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”
  2. Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of “good faith” should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
  3. Continue to Overrule Stratton Oakmont to Avoid the Moderator’s Dilemma. Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230 (c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.

While the DOJ did not release legislative language to effect these changes, it is possible to suss out the DOJ’s purposes in making these recommendations. The Department clearly believes that the Section 230 liability shield deprives companies like Facebook of a number of legal and financial incentives to locate, take down, or block material such as child pornography. The New York Times published articles last year (see here and here) about the shortcomings critics have found in a number of online platforms’ efforts to find and remove this material. If the companies faced civil liability for not taking down the images, the rationale seems to go, then they would devote much greater resources to doing so. Likewise, with respect to terrorist activities and cyber-bullying, the DOJ seems to think this policy change would have the same effect.

Some of the DOJ’s other recommendations seem aimed at solving an issue often alleged by Republicans and conservatives: that their speech is more heavily policed and censored than that of others on the political spectrum. The recommendations call for removing the word “objectionable” from the types of material a provider may remove or restrict in good faith and adding “unlawful” and “promotes terrorism.” The recommendations also call for a statutory definition of “good faith,” which dovetails with the directive in the EO for an Administration agency to petition the FCC to conduct a rulemaking to better define this term.

Some consider the Department’s focus on Section 230 liability a proxy for its interest in having technology companies drop default end-to-end encryption and securing their assistance in accessing communications on such platforms. If this is true, the calculation seems to be that technology companies would prefer to be shielded from financial liability over ensuring users’ communications and transactions are secured via encryption.

© Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.

Image by Gerd Altmann from Pixabay
