By Barbara Rasin, J.D. Candidate, 2027
In Anderson v. TikTok, decided in late summer 2024, the Third Circuit Court of Appeals held that TikTok’s “For You Page” algorithm was sufficiently expressive to place it beyond the protection of §230 of the Communications Decency Act (CDA). This is a significant step toward limiting the protection that Internet platforms receive under §230, which has generally expanded along with our reliance on those platforms.
The CDA was enacted in 1996 as part of an effort by Congress to address the spread of pornography online. Nonetheless, the Act’s impact has been most acutely felt nearly three decades later via a last-minute addition, §230. Section 230 aimed to promote free speech and innovation by shielding “Interactive Computer Services” (ICSs) from liability for potentially illegal content posted on their platforms by third parties. These ICSs would remain liable for their own original, first-party content, but no longer had to worry as much about the legality of the third-party content they hosted. Legislators worried that increased policing of online content without adequate safeguards would dissuade nascent tech companies from pursuing growth, cause them to over-censor user-generated content as a precaution, or both.
Since the passage of the CDA, internet platforms have grown from niche sites into a dominant method of communication in modern society. We now predominantly consume third-party content via intermediary platforms, all of which use §230 to shield themselves from almost any legal liability that might result from that content. This has had an enormous impact on our modern media ecosystem, and its influence is only growing. As AI threatens to facilitate the dissemination of misinformation and damage the integrity of news media, Internet platforms are unlikely to prioritize a workable solution. Even if proposed legislation imposes liability on those who knowingly disseminate false information, under §230 the responsibility to mitigate the spread of such fabrications would fall on platforms’ individual users.
This may change with Anderson v. TikTok. In late 2021, videos began circulating on TikTok of users participating in a “Blackout Challenge,” in which participants would record themselves engaging in self-asphyxiation. After TikTok promoted these videos to her “For You Page,” ten-year-old Nylah Anderson attempted the challenge and unintentionally hanged herself. Nylah’s mother sued TikTok and its parent company, ByteDance, for strict product liability and negligence, among other claims. Her case was initially dismissed by the District Court, which upheld TikTok’s claim of immunity under §230. The Third Circuit, however, ruled that TikTok’s algorithmically generated “For You Page” involved sufficient editorial decision-making to constitute first-party speech not immunized under the CDA.
In support of its reasoning, the court cited Moody v. NetChoice, LLC, in which the Supreme Court recently held that social media platforms’ algorithms are protected by the First Amendment insofar as they constitute first-party speech. Unlike speech generated by third parties, first-party speech must involve sufficient “expressive activity” to establish originality. According to the Third Circuit, internet platforms like TikTok cannot claim the constitutional protections given to first-party speech while simultaneously enjoying §230’s protections for hosts of third-party speech. The Third Circuit described TikTok’s algorithm as “not based solely on a user’s online inputs,” and therefore sufficiently expressive and original to qualify as first-party speech.
Treating algorithms as first-party speech may also cost internet platforms other protections they have long relied on. For example, §512 of the Digital Millennium Copyright Act (DMCA) provides a “safe harbor” for Internet platforms hosting third-party content that infringes another’s copyright. Under its “notice and takedown” regime, platforms cannot be held liable for hosting infringing content so long as they remove that content after receiving notice from the copyright holder. This has been instrumental in enabling platforms to host vast quantities of content. Yet it has also given rise to a Sisyphean game of ‘whack-a-mole,’ in which infringing content is simply reuploaded, and copyright holders must continually survey the never-ending pool of user-uploaded content for new infringing material. Given that §512 protection applies only to third-party speech, it seems logical that algorithms henceforth considered first-party speech would no longer qualify, losing both DMCA and §230 protection. This would have significant repercussions for platforms like YouTube, whose business model depends on promoting significant amounts of copyright-infringing works to users via algorithms. Indeed, algorithms’ potential loss of DMCA protection may be a welcome change for the majority of copyright owners, especially those without the resources to constantly police these platforms and submit the requisite notices.
Nonetheless, the partial concurrence in Anderson warned that construing §230 to permit liability for platforms’ editorial decisions would merely thwart the statute’s essential purpose by deterring platforms from exercising the discretion necessary to prevent the spread of harmful content. Instead, it argued that TikTok already fell outside §230’s protection because it continued to distribute and promote the dangerous videos despite knowing of their existence. By this logic, a distinction between first- and third-party speech is unnecessary.
Should the Third Circuit’s logic prevail, we will likely see heightened scrutiny of third-party content on platforms like TikTok that depend on editorial algorithms to deliver it. This should invite further skepticism about the ability of private platforms to adequately protect individuals’ right to free speech online, especially as we become increasingly reliant on them as a stand-in for the public commons. Yet ultimately, Anderson v. TikTok is a win for those hoping to hold dominant tech companies more responsible for their massive cultural impact.