United States: Now is the time to evaluate your online content moderation program

In brief

US laws have traditionally given online services significant leeway to moderate user-generated content however they see fit. In particular, there is a long history of US courts relying on Section 230 of the Communications Decency Act (CDA 230) to reject a wide range of claims seeking to hold online service providers liable for hosting, displaying, removing or blocking third-party content, including claims under contract, defamation, tort and civil rights laws. CDA 230 does not protect online service providers from all claims related to third-party content; for example, there are statutory exceptions for intellectual property infringement and criminal violations. But many commentators credit CDA 230 as one of the most important laws in the development of the internet because it allowed online service providers to focus on growing their user bases without having to discharge unduly burdensome duties to continuously review, assess and moderate user-generated content.



In recent years, CDA 230 has come under scrutiny for its alleged impact on free speech, online safety and the spread of misinformation. Below, we describe recent examples of lawmakers and authorities seeking to impose more onerous content moderation restrictions or obligations on digital platforms. While some of these efforts have targeted specific social media companies or business models, these developments will affect any company offering a service in which users can communicate with one another, including multiplayer video games, dating websites, social streaming platforms, virtual hangout spaces, social e-commerce platforms, and more.

We therefore recommend that online service providers:

  • Review their policies, terms of service, content moderation mechanisms and reporting channels in light of the new content moderation laws in Florida, Texas, California, New York and other jurisdictions, as applicable
  • Closely monitor content moderation developments in the US in state and federal legislatures and court dockets at all levels
  • Continuously evaluate their content moderation policies to balance the protection of free speech with the shared objective of protecting users

Trump Administration Executive Order on Preventing Online Censorship. In May 2020, after a social media platform labeled a Trump post “potentially misleading”, President Trump issued Executive Order 13925, entitled “Preventing Online Censorship”, which claimed that digital platforms were relying on CDA 230 immunity to engage in deceptive or arbitrary actions to censor user-generated content expressing viewpoints with which they disagreed. Pursuant to the Executive Order, the Department of Justice released proposed amendments to CDA 230 that would have, among other things, only allowed an online service provider to restrict access to user-generated content where the provider had terms of use that clearly prohibited such content, the action was consistent with those terms, and the provider gave affected users a reasonable explanation of the action and a meaningful opportunity to respond.

Biden Administration’s Calls to Reform Section 230. In May 2021, soon after assuming office, President Biden revoked Trump’s Executive Order 13925. But President Biden has also called for CDA 230 to be pared back. Rather than taking issue with the perception that digital platforms wrongfully restrict access to political content, President Biden has taken issue with the perception that digital platforms do not sufficiently restrict access to certain objectionable content. For example, in September 2022, the Biden Administration released a readout of its listening session on “Tech Platform Accountability” calling for legislators to “Remove special legal protections for large tech platforms: Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials. The President has long called for fundamental reforms to Section 230”.

Florida Transparency in Technology Act (SB 7072). In May 2021, Governor DeSantis signed SB 7072 into law with the stated intention of preventing social media platforms from engaging in unfair censorship of Floridians. Much of the law is currently enjoined from taking effect, but if the US Supreme Court reviews this law and finds it to be constitutional, the statute would impose significant content moderation obligations on qualifying social media platforms. For example, the law generally requires a social media platform to provide a precise and thorough explanation of why it censored a user whenever it does so, and to apply its censorship standards in a consistent manner. The law establishes a private right of action for inconsistent content moderation practices, which could entitle claimants to statutory damages of up to USD 100,000. The law also prohibits social media platforms from changing their content moderation policies more than once every 30 days, and gives users a right to opt out of certain content recommendation or de-prioritization algorithms.

Texas’ Act relating to censorship of digital expression (HB 20). In September 2021, Governor Abbott signed HB 20 into law with the stated intention of preventing social media platforms from engaging in wrongful censorship of Texans. Much of the law is currently enjoined from taking effect, but if the US Supreme Court takes up a case involving this law and finds it to be constitutional, the statute would require social media platforms to publish a biannual transparency report containing numerous categories of details and statistics about the platform’s content moderation operations over the preceding six-month period, such as how many user complaints of potential terms of service violations it received, how those complaints were handled, and their results. The law also generally prohibits a social media platform from censoring a user based on the user’s viewpoint, the viewpoint represented in the user’s expression, or the user’s geographic location, and prevents email service providers from filtering email messages except in certain narrow circumstances, such as where a message is reasonably believed to include malicious computer code or obscene material. The law establishes a private right of action for contraventions of its anti-spam-filtering provisions, and generally enables users to sue for declaratory and injunctive relief.

New York’s Act requiring social media networks to maintain hateful conduct reporting mechanisms (S 4511A). In June 2022, Governor Hochul signed S 4511A into law with the stated intention of combating the proliferation of hate on social media. The law defines a social media network as a for-profit provider or operator of an internet platform designed to enable users to share any content with other users or to make such content available to the public. Unlike the content moderation statutes of Florida, Texas and California, New York’s law does not include size-based thresholds in its definition of qualifying social media networks. The law requires a social media network conducting business in New York to provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct, as well as a clear and concise policy on how the network will respond. “Hateful conduct” means the use of a social media network to vilify, humiliate, or incite violence against a group or class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression. The New York Attorney General may seek civil penalties of up to USD 1,000 for each day on which a violation takes place.

California’s Social Media Content Moderation Law (AB 587). In September 2022, Governor Newsom signed AB 587 into law with the stated intention of protecting Californians from hate and disinformation spread online. Qualifying social media companies must post terms of service that meet certain format, language and content requirements. For example, the terms of service must include a description of the process that users must follow to flag violating content or users, a list of potential actions the social media company may take against violating content or users, and the social media company’s commitments on response and resolution times. The law takes aim at five categories of problematic content: (1) hate speech or racism; (2) extremism or radicalization; (3) disinformation or misinformation; (4) harassment; and (5) foreign political interference. Social media companies must submit lengthy and detailed reports twice a year to the California Attorney General describing their content moderation practices and breaking down how often content in each of the five categories was flagged, actioned, or resulted in disciplinary action against users. The California Attorney General may seek civil penalties of up to USD 15,000 for each day on which a violation takes place.
