By Beatriz Sampaio, LL.M. 2025
In early January 2025, Meta announced a major change to its content moderation policy: the end of its third-party fact-checking program in the United States. Whereas the previous strategy for addressing misinformation on the company’s platforms relied on centralized moderation by independent fact-checkers certified by the non-partisan International Fact-Checking Network (IFCN), the new one relies on “collaborative moderation.” Users themselves add context to community notes displayed next to the content; once enough other users validate a note, it becomes public.
This raises several questions: did the previous model constitute “too much censorship,” as Meta claims? Is the new model more consistent with the First Amendment’s commitment to freedom of expression? While the shift aims to increase users’ freedom of speech, it arguably poses risks to users, such as greater exposure to misinformation.
As providers of interactive computer services, platforms are subject to Section 230 of the Communications Decency Act, codified at 47 U.S.C. § 230. Under Section 230, platforms are not treated as the publisher or speaker of information provided by another information content provider (for instance, users). The statute further provides that no provider or user of an interactive computer service shall be held liable for voluntarily restricting, in good faith, access to or availability of material it considers obscene, excessively violent, harassing, or otherwise objectionable.
Ultimately, Section 230 translated into law the political will to safeguard free speech as the World Wide Web took off in the 1990s. On the one hand, the law aimed to shield online platforms from liability for unlawful third-party content, avoiding overburdening them with the task of monitoring user-generated content. On the other hand, it was meant to encourage self-policing of offensive material. Because courts have interpreted the law broadly, many argue that online platforms have abused this immunity and hold too much power.
Content moderation plays a role in protecting vulnerable communities online and offline, especially children and teenagers, people with disabilities, immigrants, and members of the LGBTQIA+ community, from harms like disinformation, obscenity, harassment, gender-based abuse, and psychological and physical violence. After all, social media can also be used for vicious ends, such as inciting violence and self-harm, spreading terrorist propaganda, cyberbullying, and cyberstalking. Hate speech on social media can have real-world impacts: research has repeatedly linked cyberbullying, for one, to suicide among minors.
As Kate Klonick states in The New Governors: The People, Rules, and Processes Governing Online Speech, one of the main motivations for content moderation is corporate responsibility: platforms self-police content out of a sense of social and ethical obligation. In this sense, content moderation through fact-checking policies does not equal censorship. After all, the First Amendment protects against government restrictions on speech, not restrictions carried out by private entities.
Courts have repeatedly ruled that private companies, including social media platforms, are not state actors and thus are not bound by the First Amendment in the way the government and public entities are. In Manhattan Community Access Corp. v. Halleck (2019), the Supreme Court held that a private operator of a public access cable channel was not a state actor, reasoning that private entities become state actors only when they perform a function that has traditionally been exclusive to the government; merely providing a forum for speech to the public does not qualify. Accordingly, social media platforms, as private providers of forums for speech, can set their own content rules without violating the First Amendment. Likewise, in Prager University v. Google (9th Cir. 2020), Prager University sued YouTube, claiming that its content moderation policies violated the First Amendment. Applying Halleck, the Ninth Circuit rejected the argument that YouTube was a state actor simply because it hosts speech to the public at large, ruling that YouTube, as a private company, had no obligation to host all speech.
Additionally, courts have found that platforms have their own First Amendment rights when they make editorial decisions about the content on their sites. In Moody v. NetChoice, LLC (2024), NetChoice, a trade association for internet companies like Meta, challenged Florida Senate Bill 7072, which, among other provisions, prohibited large social media platforms from banning political candidates or “journalistic enterprises” and from deleting or reducing the visibility of posts by or about them. According to the plaintiffs’ own amended complaint, their “members have invested extensive resources into developing policies and standards for editing, curating, arranging, displaying, and disseminating content in ways that reflect their unique values and the distinctive communities they hope to foster.” The Court held that websites make “expressive choices” when they decide how to display and organize third-party content, that platforms’ content moderation is a form of protected expression, and that forcing platforms to host speech they disagree with would constitute unconstitutional “compelled speech,” an attempt to “control the expression of ideas.”
As a private entity, Meta may moderate user content without censoring speech, because moderation decisions made by social media companies constitute protected editorial discretion under the First Amendment. Additionally, Meta’s function of hosting public speech on a publicly accessible platform is not a governmental one, meaning that its content moderation policies do not, in themselves, violate users’ First Amendment rights. On the contrary, social media platforms are private entities with First Amendment rights of their own.
While the concern with protecting free speech is entirely legitimate, community notes do not appear to be the optimal way to ensure users’ welfare. In fact, the experience on X shows they have performed poorly in this respect. Since X laid off its content moderation team, adopted the community notes model, and withdrew from the EU Code of Practice on Disinformation, levels of hate speech on the platform have increased considerably.
In sum, both statutory and case law support the conclusion that social media platforms like Meta are free to adopt their own content moderation practices and community guidelines without implicating constitutional rights. Even though the previous, centralized fact-checking model faced real challenges, such as scale, that sort of content moderation, especially when carried out by specialized, independent fact-checkers, works better as a safety-enhancing, corporately responsible practice that jeopardizes neither users’ nor platforms’ speech.
Only time will tell whether Meta’s new content moderation format will lead to a surge in existing and new harms. In the meantime, society and the courts must keep a close eye on the policy’s outcomes and developments in order to ensure that statutes and precedent are enforced.