[MATT]
You’re listening to the Berkeley Technology Law Journal Podcast. I’m Matt Sardo.
[IBRAHIM]
And I’m Ibrahim Hinds. Today our podcast is about Section 230 of the 1996 Communications Decency Act. We will briefly explain Section 230 and its history, and then speak with Professor Pamela Samuelson of Berkeley Law.
[MUSICAL INTERLUDE]
[MATT]
Suppose a malevolent ex-boyfriend sets up a fake dating profile on Grindr on your behalf. It contains entirely false statements about you—specifically claiming that you’re interested in a variety of lewd and lascivious acts in which you have no interest whatsoever. Outraged, you decide to sue. You not only take your ex-boyfriend to court for defamation, but you also sue Grindr for failing to take down the false and reputation-damaging profile. Does Grindr face any liability for allowing this fake profile to stay up on its site? What about for failing to protect its users from fake profiles? In 2018, the U.S. District Court for the Southern District of New York considered this exact situation in Herrick v. Grindr.[1]
Relying on Section 230, the court held that Grindr was “under no obligation to search for and remove the impersonating profiles.”[2] Why not? What is it about Section 230 that gives Grindr protection from liability in this case? And what is the significance of Section 230 on how internet content is regulated at the federal level?
The intent of the authors of Section 230—then-Representatives Ron Wyden, a Democrat, and Christopher Cox, a Republican—was to “protect online speech, allow innovation, and encourage companies to develop their own content moderation processes.”[3] Section 230 has since been dubbed “[t]he Twenty-Six Words That Created the Internet” by legal scholar Jeff Kosseff.[4] The statute reads in relevant part: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[5] In a nutshell, the law protects tech companies from legal liability for user content on internet-based platforms.[6]
At the time of Section 230’s adoption, dial-up modems brought AOL instant messaging into American homes.[7] Now, in 2021, technology is dramatically different, and politicians are considering updating the law. Both Democrats and Republicans across the United States express interest in amending internet platform content moderation laws.[8] Republicans argue that conservative voices are silenced by “Big Tech,” while Democrats decry online hate speech that fuels real-life violence but does not result in liability for the hosting platforms.[9] Recently, this law has become a topic of political controversy.[10]
This single statute has consistently been lauded as underpinning the modern internet economy.[11] But how does the law actually work? How does it prevent platform liability? How does it impact individuals, and how does it give private companies the power to moderate even public figures?
Today, we have the pleasure of welcoming Professor Pamela Samuelson to the podcast to answer these questions and discuss Section 230 with us. Professor Samuelson is the Richard M. Sherman Distinguished Professor of Law and Information at the University of California, Berkeley, School of Law, and the co-director of the internationally-renowned Berkeley Center for Law & Technology.[12]
Our Meg Sullivan and Seth Bertolucci dig into Section 230 with Professor Samuelson, discussing the intent of the initial legislation, judicial interpretation of the statute over the last twenty-five years, how Section 230 has encouraged the growth of the internet economy, and the unintended consequences and controversy of shielding platforms from liability.[13] We hope you enjoy the conversation.
[MUSICAL INTERLUDE]
[MEG]
So, could you tell us a bit about what Section 230 actually is, and could you explain it to us as if we’re in high school?
[PROFESSOR SAMUELSON]
So, Section 230 is part of the telecommunications and communications law title of the U.S. Code. And although it’s often known as Section 230 of the Communications Decency Act, that’s not really accurate. But it is what it’s very commonly known by. What happened was that the bill that was originally known as the Internet Freedom and Family Empowerment Act got bundled with the Communications Decency Act. And then that got bundled into the 1996 Telecommunications Act revision of the Communications Act. So, it’s like a little law, inside a bigger law, inside of a bigger law, when it was adopted in the mid 1990s.
So, here’s my example—anybody can understand this. So, suppose that Seth is the webmaster for a club and the club has several dozen members, and one of the members posts something saying, “Oh, come to my event, to celebrate something or other,” and then somebody else who’s a member of the club gets on that same website and says, “Joe’s a thief, you shouldn’t go to his event. He’s awful.” And so if it’s not true—that Joe’s a thief—then that’s defamation. And the person who said the defamatory thing can be held liable for it. But neither Seth nor the club can be held liable for the defamatory statement because he wasn’t the speaker or the publisher. So, it’s basically a rule that says that content that you didn’t publish, or that you didn’t utter yourself, is not speech for which you can be held liable, even if the content, in fact, is illegal. Does that help?
[SETH]
Yeah, thank you so much, seriously. It’s reassuring to know, as the webmaster of my own club online, that I won’t be held liable for that content. Now, I know that might be changing soon, because, Professor, you know, a lot of people are talking about repealing 230 lately. And, intuitively, it seems that if online platforms could be held liable for what gets posted on their platforms, they might actually be more aggressive at censoring content. But is that an accurate consequence of repealing 230? Or is that more of a misconception that people might have?
[PROFESSOR SAMUELSON]
Well, if Section 230 is repealed, platforms will still have a First Amendment right to take down information that they object to for one reason or another. So, the First Amendment is kind of a backup protection. But what’s different is that Section 230 enables a company that is challenged over some user infringement or some user activity that’s illegal to get out of a lawsuit on a motion to dismiss. Right? So if Section 230 is repealed, then the lawsuits could go ahead.
Now, you still have to prove that somebody actually sort of contributed to or knew about or did something wrong, but the expectation of people who’ve been studying this is that it’s pretty likely that platforms—especially the small and medium-sized ones—will decide to take down content that somebody complains about, because it’s risky to keep it up. Right? The platform will reason, “If somebody is right that that stuff is defamatory, then I could be held liable for it. So I don’t want to do that. So I’ll take it down.” Unfortunately, that means that probably more people who want to silence critics will make complaints, whether they’re valid complaints or not. That would then mean that some of the content that would be truthful, and actually something that you’d want to have up, would be taken down. So that’s one of the risks.
[SETH]
Why was Section 230 passed? What were the fears around the internet at that time and kind of the motivation to shield these companies from liability for content that users post?
[PROFESSOR SAMUELSON]
So the big concern, actually, in the mid 1990s, was about children getting exposed to indecent things on the internet. And there were two different ideas about how to deal with that. One was Senator Exon’s bill, which would make anybody criminally liable for essentially saying indecent things on the internet. And you could go to prison for two years. Okay. So that’s like, penalize anybody who’s saying indecent things that might harm children, and then, you know, send them to jail. And Representatives Cox and Wyden had a different idea.
So the reason that the bill is called the Internet Freedom and Family Empowerment Act was that they had the idea that families could basically buy software that would protect the children, and that some parents are going to be more liberal and some more conservative, and so there would be, like, competition in the market for software to protect children. Okay. And so they actually wanted to encourage content moderation.
The statute is actually a response to the Stratton Oakmont v. Prodigy[14] case, which was decided by a New York judge not long before this bill got proposed, where Prodigy said, “Hey, we’re family friendly, and we’re making sure that nobody’s going to say anything bad.” And so, on one of their chat sites, there was a criticism and a charge that Stratton Oakmont was committing fraud. And Stratton Oakmont sued Prodigy saying, you know, you published this and you say that you are, you know, controlling what’s going on in your platform. And Prodigy couldn’t get out of the case, even though no one at Prodigy actually knew about the defamatory statement that had been made, because they exercised editorial control over what went on their site. And they said that they had these policies that would protect people from defamation and other kinds of wrongful content.
If you exercised any editorial control at all, did any kind of content moderation, the Stratton Oakmont case suggested that you’d be liable if you didn’t find something. So, you know, the content moderation at scale problem is really huge. Even back then Prodigy had, you know, tens of thousands, maybe hundreds of thousands, of subscribers. And you just can’t actually see everything that’s getting posted before it goes online. So what Wyden and Cox wanted to do is to make sure that if the companies, the platforms, actually took stuff down, they wouldn’t be held liable. They could keep content up or take it down, but Wyden and Cox wanted people to get engaged in content moderation and not be held liable if they missed something.
[SETH]
Right, so if I’m understanding it right, a lot of the motivation behind Cox and Wyden’s Section 230 was to empower private companies to make the internet safer—to make it better—without facing liability.
[PROFESSOR SAMUELSON]
That’s exactly right. They also wanted to encourage the growth of the internet economy, right? At the time, Prodigy was one of the few companies providing internet access and content, including user-posted content. But they saw the potential for that to grow into a major industry, which of course it has.
[MEG]
How have the aims of Section 230, or the way that it’s been used, changed in the twenty-plus years since it was passed?
[PROFESSOR SAMUELSON]
Well, I think that the people who are upset about Section 230 right now are not upset about it because of children being exposed to indecent things. So, that’s not the main driving concern. And the Communications Decency Act rule that would create criminal liability if you knowingly transmitted indecent content over the internet to an audience that included children—that got overturned by the U.S. Supreme Court, which struck it down as unconstitutional because it was too much of a penalty on lawful speech by adults. So, the CDA actually got essentially repealed or overturned by the Supreme Court. And what was left was Section 230.
So I think that Cox and Wyden have actually come out still very supportive of the law. They say, “Take a look at the Internet economy now.” Right? Ten percent of GDP is actually internet commerce. And so, one of the aspirations, to enable the internet companies to thrive, has been achieved, and, in a recent exchange, former Representative Cox said two hundred million websites in the United States depend on Section 230.
So it really isn’t just about big tech companies. It’s actually about everyone, and you want people to engage in content moderation. So that second aspiration for Section 230, which is to encourage companies to engage in content moderation, is also happening. Maybe it’s not happening perfectly. Maybe some people and some companies are taking a little bit too much advantage of Section 230. But nonetheless, the law encouraged content moderation—every site that posts user content engages in content moderation. They have to.
[SETH]
How has the judicial interpretation of Section 230 over the past twenty or so years affected the way in which it applies to the internet today, and also how has that interpretation changed over time?
[PROFESSOR SAMUELSON]
So what Section 230 says is that interactive computer services will not be treated as speakers or as publishers of information provided by third-party content providers. Okay, now, if you defame somebody, because you call them a thief, you’re the speaker and you can be held liable. Somebody who is, let’s say, a newspaper that published “Oh, Joe’s a thief,” that would be a publisher who can be held liable, because the publisher basically disseminates the defamatory statement. So the interpretation that’s been given to it is that you’re not a publisher, and you’re not a speaker of the defamatory or otherwise unlawful content that’s provided by this third party. So if you get sued for it as if you are a speaker or publisher, you can basically move to dismiss the complaint for failure to state a claim and get out of the case that way. And the overwhelming majority of the courts—and there have been hundreds and hundreds of Section 230 cases—they just throw it out.
Now, there is a third kind of liability that’s possible, and that’s distributor liability. So if you are a distributor of, let’s say, obscene books, and you know that there are obscene books in your store, you can be held liable for obscenity, like the publisher and like the author. But you have to know. And, you know, one of the things that could happen is that courts could basically decide that these platforms, once they get notice about some unlawful content, could be held liable as distributors. Once they know, then they might have an obligation to take the stuff down. That’s a theory that Justice Clarence Thomas issued as a statement in relation to the denial of a cert petition in a case involving Section 230, but of a different sort. So he’s basically put back into the conversation the possibility of distributor liability for wrongful content on a site that the platform could have taken down, but chose not to.
[MEG]
Thank you for that context, and I think that that segues really well into our next question. I was hoping that you could talk to us a little bit about the Zeran[15] case and how that fits into our conversation.
[PROFESSOR SAMUELSON]
Well, poor Ken Zeran, who wasn’t even on AOL as a user. Some person decided to post something as if Ken Zeran was selling t-shirts that glorified the Oklahoma City bombing by Timothy McVeigh. And they didn’t use Zeran’s last name, but they said, “If you want some of these t-shirts, call Ken at this number,” which was his number. So all of a sudden, he starts getting all these calls and death threats and people, you know, just treating him terribly. And he’s very, very upset, as you can understand. So he contacts people at AOL several times and says, “Look, this is up on your site. It’s not accurate. Somebody is doing something really terrible, and they’re ruining my life, and I feel at risk of losing my life. Please take it down.” And he was given some reassurances that they would take it down. But they didn’t. So he kept getting this and, finally, he talked to a lawyer and the lawyer basically then brought a lawsuit.
Now the cause of action was negligence, right? The theory was that, once AOL had gotten notice about this and had reassured him, it was negligent for them not to have taken it down. And the Fourth Circuit Court of Appeals basically said, “Look, the reason Congress passed Section 230 was to protect AOL-like companies from lawsuits for content that was provided by a third party. And it doesn’t make any difference what the cause of action is in your complaint. You’re trying to evade the very limitation on liability that Congress intended for entities like AOL.” And the Zeran court had a good analysis, and almost all the cases since then have followed Zeran and endorsed its ruling.
[SETH]
Can you tell us maybe one more notable example—going beyond Zeran—of Section 230 protecting a company that chose not to take down content that was really controversial, obscene, or defamatory? And I guess this isn’t much of a question because, specifically, I’m wondering about the Grindr case.
[PROFESSOR SAMUELSON]
Yeah, that’s one I was going to talk about. So there was a young man named Herrick, and he had been in a gay relationship with this other person. And the other person, after they broke up, posted a profile purporting to be a profile of Herrick on Grindr, a gay dating site. And it said a bunch of things about his willingness to engage in certain kinds of sexual conduct. And basically, it gave information about how to contact him and where he was employed. And so he was really, you know, being besieged, not exactly the same as Zeran, but the point is that people were like, “Oh, I wanna,” you know, “I would get together with you. I like that kind of sex too.” And Herrick basically said to Grindr, “Look, that isn’t a profile of me, that’s basically my vengeful boyfriend—ex-boyfriend—who’s done this to me. Take that down.” And Grindr chose not to. So that’s an example of something where, you know, somebody was really harmed.
I think a responsible way to deal with that would have been to basically take down the profile, or basically say that, you know, this is inaccurate, and it’s not really about this guy, you know, put some sort of warning on it. But of course, taking it down would have been the better thing to do, and they just didn’t. So Herrick brought a lawsuit against Grindr and charged that Grindr should be held strictly liable for the harm that had been done to Herrick, because it designed its platform in a way that really encouraged or enabled this kind of activity to happen. The court threw it out.
[MEG]
It’s really incredible that in both of those examples, even after being directly notified, the platforms still faced no legal liability. But moving on from that, one more question that we have sort of relates to the politics around Section 230. So Democrats are talking about changing Section 230, Republicans are talking about changing it, but would reforming Section 230 be a bipartisan initiative, and do both of those parties want the same things out of reform?
[PROFESSOR SAMUELSON]
The answer is they don’t want the same thing. In general, the Republican critics of Section 230 think that these platforms are taking down too much. A lot of the hate speech and disinformation that the platforms have been taking down lately are things that they think are constitutionally protected speech and that shouldn’t be taken down because of the lawfulness of that speech. And they don’t want content moderation to take down anything that’s constitutionally protected speech. The Democrats, for the most part, say “you’re not taking down enough.” So they want, sort of, more due process, and they want to encourage more of that takedown activity. So that makes them at loggerheads.
Now, maybe they could agree on repeal, but I doubt it. I think there is going to be enough Democratic support for the law, especially given that it affects two hundred million websites, to maybe say, “Maybe we should tweak it, but maybe repeal is not the right thing to do.” So, I think Senator Warner’s bill, which was introduced recently, kind of tries to aim at fraud and extremism and hate speech, and kind of identifies what harmful speech is being promoted on some of these platforms, especially the social media platforms, and then says, you know, they need to do more to take it down. So we’ll see whether that gets any traction. But there’s not going to be Republican support for that because they have a different point of view.
[MEG]
Of course, and, well, I’d be curious to hear: Do you agree with Senator Warner’s policy objectives? Or perhaps a broader question would be: What do you view as the best way to fix Section 230?
[PROFESSOR SAMUELSON]
I think that the best thing that the Biden administration could do is basically set up a commission to study, kind of, the pros and cons and to come up with a recommendation for change. I think that, if you go through this kind of like “oh, there’s just terrible stuff out there” and you don’t, kind of, take a look at how the whole ecosystem is operating, you can make a bad decision by, you know, trying to sort of tweak this and have all kinds of unintended consequences, particularly for the small and medium platforms. The big platforms are going to be able to survive no matter what, right? So Facebook, for example, has come out and said, “Oh, we’re in favor of government regulation.” Well, you know, the thing is that they benefited tremendously from the existence of Section 230 when they were little guys. So now they want to pull the ladder up behind them and make it harder for the small and medium-sized firms to compete and to grow into real competitors with them.
So that’s why I’m concerned about, sort of, you know, too many reactions like, “Oh, there’s too much, you know, hate speech. So let’s take away this immunity.” I just think that’s the wrong attitude. Something that could be done apart from a commission: I’ve suggested that maybe the Federal Trade Commission (FTC) could start a new bureau of platform regulation, and could create some best practices for content moderation, or work with the industry to come up with some best practices. That would be a way of encouraging more really meaningful self-regulation. The FTC has done that to a substantial degree for privacy and for cybersecurity, so I think that they could also try to do some oversight. And then if they decide that additional legislation is needed, they can recommend it. But I don’t want these senators who really don’t know enough about, sort of, the internet ecosystem to be making these big broad-brush changes that actually will have negative effects.
[SETH]
It’s fascinating to hear how, you know, even a little tweak to the law could have huge and often unintended consequences that Congress misses. And the policy around this whole issue is fascinating, and I could talk about it all day. We’re pushing up against the time limit. And so I want to ask you just one kind of final follow-up question.
You’re a very influential scholar in copyright law, and that’s, you know, kind of your bread and butter and a huge part of your background. Do you feel that Section 230 kind of relates to copyright or is it its own thing? You know, and if it does relate to copyright, how so and what’s kind of led you down this new scholarly path?
[PROFESSOR SAMUELSON]
Well, the reason that I chose to teach this class, “On Regulating Internet Platforms,” with a focus on Section 230 was because of the pandemic and the desire to do something that would be helpful for 1L students [i.e., first year law students] coming in, to make them feel like, you know, the school really cares about them. I knew that there was a little bit of controversy about Section 230. But I hadn’t taught—I used to teach cyber law, but I hadn’t taught it in a long time, so there was a lot to catch up on. So what I look at, though, is that both the Section 230 internet intermediary rules and the Section 512 copyright rules [i.e., the safe harbors of the Digital Millennium Copyright Act] have been shielding these platforms from liability. So if they didn’t contribute to it, they didn’t know about it, they had no control over it, I don’t see why they should be held liable for it. And that’s a norm that underpins both of them. Now, I think that, generally speaking, copyright infringement is easier to detect than what’s defamatory or what’s a privacy violation. So, I think that the Section 230 immunity makes more sense for the kind of stuff where it is not so easy to detect whether or not it’s illegal.
[MUSICAL INTERLUDE]
[IBRAHIM]
In sum, Section 230 and what to do about it is a complex issue that implicates differing interpretations of the right to free speech and the duties of the social media industry, the government, and the legislature.[16] Section 230 has been incredibly important to the growth of the internet market. Arguably, social media giants, such as Facebook and Twitter, are partially the result of this legal shield.[17]
Efficiency and utilitarian perspectives support different substantive and procedural solutions with regard to altering or repealing Section 230. For instance, efficiency arguments can be made to support keeping Section 230 as is. The obvious argument is that policing all of the content produced by such enormous user bases is a mammoth task that requires significant resources—resources that up-and-coming companies may not possess. However, technological solutions such as machine learning may offer some remedy.
On the other hand, from a utilitarian perspective, it may be best to incentivize the platforms to regulate speech themselves. Although Section 230 has provided stable conditions and supported the growth of the internet, it is by no means perfect and has the potential to result in serious harm, as discussed above with respect to the Grindr and Zeran cases.
Currently, social media platforms have no legal duty to take down defamatory content, false representations of fact and identity, or even incitements of violence.[18] As Professor Samuelson explained, lawsuits against such platforms for user content have largely been dismissed under Section 230, preventing plaintiffs from seeking a remedy for economic, emotional, and other harms.[19]
So, what is the solution? As Professor Samuelson stated, the complexity of a solution requires iterative research before implementation because of the market’s sensitivity to regulation and the socio-economic importance of the industry.[20] Furthermore, the dilemma is complicated by the need to balance sustaining small and medium-sized firms against protecting users from harmful content on extremely large platforms. Large platforms, such as Facebook, may support increased government regulation.[21] However, such regulation benefits them because it hampers competition by introducing liabilities that smaller companies may not have the financial safeguards to withstand, entrenching the large platforms’ market dominance.[22]
Ultimately, the legislature, the judiciary, and members of the social media industry all have key roles to play. And a decision to replace or significantly alter Section 230 should be made only after due diligence and extensive study to understand the likelihood of possible outcomes.
[MUSICAL INTERLUDE]
[MATT]
Thank you for listening! The BTLJ Podcast is brought to you by Podcast Editors Andy Zachrich and Haley Broughton. Our Executive Producers are BTLJ Senior Online Content Editors Allan Holder and Karnik Hajjar. BTLJ’s Editor-in-Chief is Emma Lee.
[IBRAHIM]
If you have enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. If you have any questions, comments, or suggestions, write us at btljpodcast@gmail.com. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.
[ANDY]
The information in this podcast is up-to-date as of March 28, 2021. The interview with Professor Samuelson took place on February 23, 2021.
[1] 306 F. Supp. 3d 579, 584 (S.D.N.Y. 2018).
[2] Id. at 594.
[3] Jeff Kosseff, The Twenty-Six Words That Created the Internet 67 (2019).
[4] Stephen Engelberg, Twenty-Six Words Created the Internet. What Will It Take to Save It?, ProPublica (Feb. 9, 2021), https://www.propublica.org/article/nsu-section-230.
[5] 47 U.S.C. § 230(c)(1).
[6] See id.
[7] Engelberg, supra note 4.
[8] Bryan Pietsch & Isobel Asher Hamilton, The Key Facts You Need to Know About Section 230, the Controversial Internet Law That Trump Hated and Biden Might Reform, Bus. Insider (Feb. 20, 2021), https://markets.businessinsider.com/news/stocks/what-is-section-230-internet-law-communications-decency-act-explained-2020-5-1030104009.
[9] Id.
[10] See generally id.
[11] Interview with Pamela Samuelson, Co-Director, Berkeley Center for Law and Technology (Feb. 23, 2021) [hereinafter “Samuelson Interview”].
[12] Pamela Samuelson Biography, Univ. Cal. Berkeley L. Sch., https://www.law.berkeley.edu/our-faculty/faculty-profiles/pamela-samuelson/.
[13] Samuelson Interview, supra note 11.
[14] Stratton Oakmont v. Prodigy Serv. Co., No. 31063/94, 1995 WL 805178, at *1–3 (N.Y. Sup. Ct. Dec. 11, 1995).
[15] Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997).
[16] Samuelson Interview, supra note 11.
[17] Id.
[18] Id.
[19] Id.
[20] Id.
[21] Id.
[22] Id.