[Meg O’Neill] 00:08
Hello and a warm welcome to all our listeners tuning in to the Berkeley Technology Law Journal podcast. My name is Meg O’Neill, and I’m one of the editors at the podcast. In today’s episode, we dive into the contentious world of content moderation on the internet. The governing legislation on this topic is Section 230 of the Communications Decency Act of 1996,1 one of the most important and impactful pieces of legislation in the tech world. Under Section 230, online platforms cannot be held liable for user-generated content. This immunity is important for allowing online platforms to operate without fear of legal repercussion, which consequently encourages companies to be less restrictive with content moderation. In that way, you could say that Section 230 is responsible for the internet existing in its current form, and for the free flow of information and expression online. However, you could also say that Section 230 is responsible for the spread of hate speech, misinformation, and toxicity online. States are dissatisfied with the current level of content moderation by the likes of Google, Facebook, Twitter, and YouTube, and so they are trying to pass their own legislation to stem the bleeding. But should states have a role in regulating the global phenomenon that is the internet? Today, our colleague Eric Miranda will dive into the contentious and personal world of content moderation with our guest expert, Dean Erwin Chemerinsky. Erwin Chemerinsky is the dean of our very own law school, Berkeley Law. He is one of the most cited authorities on the US Constitution in the world, and the author of 19 books, including leading casebooks and treatises on constitutional law, criminal procedure, and federal jurisdiction. Dean Chemerinsky has also argued several appellate cases, including in the United States Supreme Court. We are thrilled to have Dean Chemerinsky on for today’s episode of the BTLJ podcast, and we hope you enjoy the conversation.
[Eric Miranda] 02:13
Welcome to the BTLJ podcast. I’m your host, Eric, and I’m here today with Erwin Chemerinsky, Dean of Berkeley Law, and one of the most cited authorities on US constitutional law in the world. Thank you for talking to me today.
[Dean Chemerinsky] 02:29
Truly my pleasure.
[Eric Miranda] 02:30
I’d love to begin today’s episode by asking: if someone listening to this podcast knew nothing about you, the law, free speech, internet regulation, and so on, how would you introduce what you do and why you do it?
[Dean Chemerinsky] 02:47
As you mentioned, I’m the Dean of Berkeley Law. This is my eighth year as dean here. I’m in my 45th year as a law professor. I’ve been teaching constitutional law, including First Amendment law, during all of this time. I do it because I believe that the Constitution is the most powerful tool for protecting our rights and for social change, and I focus on the First Amendment because I believe that if we are going to have a democracy, it’s absolutely essential that we protect speech.
[Eric Miranda] 03:19
And so on that idea of protecting speech, there’s really been this recent push for internet and content regulation that has taken hold of the public mind. It’s come up in topics like a push for a TikTok ban and Brazil’s recent troubles with Twitter, also known as X. But regulation is also a battle that’s being taken up at the state level. And so why have states taken up this call for regulation?
[Dean Chemerinsky] 03:47
There’s a widespread sense among both liberals and conservatives that the internet is broken. From the perspective of liberals, perhaps it’s because there’s too much hate expressed there. From the perspective of conservatives, perhaps it’s a belief that social media platforms are disproportionately taking down conservative speakers. There’s a sense that too much sexual material is accessible to children, that the internet has real harms, and that social media causes real harms to children. All of this leads to a sense of a serious problem. There’s no agreement as to the solution, and in the absence of congressional action, many states are stepping into that breach.
[Eric Miranda] 04:36
Of course, but there’s not necessarily a total absence of congressional action. There is Section 230 of the Communications Decency Act, which really governs this whole area and battleground for content regulation. So, I was wondering if you could speak on exactly what Section 230 is, which has been described as the 26 words that created the internet as we know it today.
[Dean Chemerinsky] 05:03
That’s the title of a book by Jeff Kosseff that’s quite important in explaining why Section 230 is crucial for the internet.2 Section 230, which was adopted as part of the Communications Decency Act of 1996, says that social media and internet platforms cannot be held liable for that which they allow to be posted or that which they choose to take down. So if somebody posts awful things on a social media platform, the platform isn’t liable. Conversely, if a social media platform chooses to take down really awful things, it can’t be liable for that. There is such an enormous amount posted on social media platforms every day that if a platform were liable for anything that’s posted, it would dramatically change the nature of the internet.
[Eric Miranda] 06:03
So without Section 230, the nature of the internet would be different. But do you believe we would even be able to have the internet as we know it today without Section 230?
[Dean Chemerinsky] 06:14
I think there undoubtedly will be an internet and social media going forward, whether or not there is a Section 230. But if there’s no Section 230 and social media companies are liable for anything posted there, they would need to do much more aggressive screening before anything could be posted. That would take time, and it would lead to the exclusion of a lot more content. That said, keep in mind that social media companies are already constantly engaging in content moderation. Facebook says it takes down 5,000 hateful messages per hour.
[Eric Miranda] 06:53
Right, and so you really touched on why internet platforms care about Section 230 protections. Is there a perspective from users on why they should care about Section 230?
[Dean Chemerinsky] 07:06
Section 230 is what allows there to be the almost infinite amount of speech on the internet. The internet has dramatically changed the nature of speech. I’d say the internet is the most important development for free speech since the invention of the printing press. It’s democratized the ability to reach a mass audience. It used to be that in order to reach a large number of people, you had to be rich enough to own a newspaper or get a broadcast license. Now anyone with a smartphone, or even just a modem in a library, can reach a mass audience. It gives us access to infinite amounts of information. I think if there weren’t Section 230, all of that would be changed dramatically and for the worse.
[Eric Miranda] 07:54
Definitely. It’s allowed us to have this powerful tool as a people and to use the internet in new ways that we haven’t even really begun to touch. And so I wanted to now dive into your article, Misguided Federalism: State Regulation of the Internet and Social Media.3 In that article, you argue that states and state legislatures really aren’t the best forum for content regulation. So, my first question on this topic is: why did you, and I believe you wrote it with your son, choose to write about this topic?
[Dean Chemerinsky] 08:30
Yes, I did write it with Alex Chemerinsky, who’s a lawyer in Los Angeles, and it was a pleasure to have the chance to write an article with my son. It was a response especially to the laws adopted in Florida and Texas that prohibited large social media platforms from engaging in content moderation.4 Those laws were based on the perception of the Florida and Texas legislatures that social media platforms were disproportionately taking down conservative speech. Among other things, they required that individual content moderation decisions be justified. But we also saw these two laws as part of a wave of state regulation of the internet. California, New York, Utah, we can go on, all adopting very different laws. Part of the thesis of our article is that it doesn’t make sense to have the internet and social media regulated at the state level. The internet and social media are national, even international, media; to allow each state to have its own regulation is a real threat to the functioning of the internet and social media.
[Eric Miranda] 09:45
How would you address the argument, though, that by passing their own regulations, states could be acting as laboratories of democracy and figuring out how we should regulate the internet?
[Dean Chemerinsky] 09:58
States certainly can be laboratories for experimentation, but the question is: are they good experiments that we should encourage or even allow, or are they undesirable experiments? I think that the Florida and Texas laws prohibiting content moderation are terrible ideas. We want social media platforms to engage in content moderation. We want them to take down child sexual exploitation material even if it’s protected by the First Amendment; we want them to take down material that encourages terrorism, even if it’s protected by the First Amendment. I think overall, the states’ regulations of social media and the internet have been undesirable. It’s for this reason that, if there’s going to be regulation, it’s much better done at the federal level than state by state.
[Eric Miranda] 10:50
So speaking of that interplay between federal and state-by-state regulation, how does Section 230 come into play here?
[Dean Chemerinsky] 10:59
Well, Section 230 says that social media companies can’t be held liable for the content they allow to be posted or for the content moderation decisions that they do make. There might be an argument that Section 230 preempts laws like Florida’s and Texas’s. Florida and Texas say social media companies can’t engage in content moderation, whereas Section 230, I think, preserves the right of social media companies to engage in content moderation.5 If that’s right, then there’s a conflict between federal and state law, and the state law would be preempted. That was not one of the issues presented to the Supreme Court in the NetChoice cases.6
[Eric Miranda] 11:45
Could you dive a little further into exactly what preempting a state law means for our listeners?
[Dean Chemerinsky] 11:51
If there’s a conflict between a federal law and a state law, the state law is deemed preempted. Federal law is supreme over state law. There’s a famous old case where a federal agency required that all maple syrup bottles have labels stating their ingredients. Wisconsin had a law that prohibited maple syrup bottles sold in that state from listing their ingredients.7 No seller of maple syrup could simultaneously comply with what the federal law required and what the state law prohibited, so the state law was deemed to be preempted.8
[Eric Miranda] 12:30
Perfect. That’s exactly what I was looking for. So, on this idea of states not being ideal regulators, do you have a proposal, or is there an alternative to state-level regulation for addressing these pressing concerns of internet governance and social media accountability?
[Dean Chemerinsky] 12:48
I don’t think that anyone has yet come up with a very good proposal for how to deal with this. I think overall that Jeff Kosseff is right: Section 230 is the words that created the internet. I’d be enormously concerned about the repeal of Section 230. I think that if we’re going to have solutions, they’re going to be much more specific than a general repeal or overhaul of Section 230. For example, California adopted a law9 prohibiting deepfakes before political elections and prohibiting sexually explicit deepfakes that put somebody in a false light by showing them engaging in sexual activity that didn’t occur. I would support those regulations of the internet and social media.
[Eric Miranda] 13:39
What exactly makes that different than the laws that you previously spoke about?
[Dean Chemerinsky] 13:45
They’re narrowly tailored. Let’s take the example of the law that says it is impermissible to show somebody in a sexually explicit situation that didn’t occur. We know this is happening a good deal, where people will take sexually explicit pornography and superimpose somebody’s face on it. That puts somebody in a false light, invades their privacy, and it involves sexual speech. So, I would say that narrow regulation is permissible. I also think prohibiting deepfakes before a political election should be constitutional. What makes these different from an overall regulation is that both of these examples are quite focused. They’re narrowly tailored to solving specific problems. They’re not an overall regulation of the internet.
[Eric Miranda] 14:42
Thank you. So I guess a question I have is: you seem to be really focused on this idea of narrowly tailoring policies to target internet regulation. But does that run up against the concern that the government is targeting specific content and regulating the content of speech on the internet?
[Dean Chemerinsky] 15:04
If the government is regulating the content of speech on the internet, the government is going to have to meet strict scrutiny. As to my examples: take the former, that you can’t show somebody in a sexually explicit situation without their consent. I think that would be constitutional even if strict scrutiny is the test, and there’s an argument that, since it’s sexually explicit speech, it wouldn’t have to meet strict scrutiny. In terms of the restriction on deepfakes before elections, I would say it does meet strict scrutiny. I’d say there’s a compelling interest in preventing the ability to show somebody, through a deepfake, saying something they didn’t say, especially before an election when there’s not time for more speech to counter it.
[Eric Miranda] 15:53
I completely agree. Do you think that this topic of internet regulation has especially come to the forefront because of the elections that are so relevant today?
[Dean Chemerinsky] 16:08
I think the reason these issues have come to the forefront is that deepfakes are relatively recent, and because of AI, deepfakes are now becoming easier to make and also more realistic. There is something different about hearing someone apparently say something than just being told that they said it or reading about it. If, on the eve of an election, a candidate for, say, local office is seen in a video saying, “I committed this horrible crime, but elect me anyway,” and it’s all false, hearing somebody, seeing somebody say something, is very powerful. And I think that’s why deepfakes are a particular threat to democracy.
[Eric Miranda] 16:54
I completely agree. And so I guess my question to you would be, how should the government, or how should we as people, really balance this innovation that we’re seeing with AI and the internet with this idea of regulating harmful content and misinformation on digital platforms?
[Dean Chemerinsky] 17:15
I think it’s going to take a step-by-step approach. I think that the best approaches will be narrowly focused on solving particular problems, rather than an overall overhaul of Section 230.
[Eric Miranda] 17:30
Thank you. And so we’ve talked about this interplay between the federal and state governments. I was wondering, is there a consensus on the Supreme Court about what this content moderation should look like and how we should proceed?
[Dean Chemerinsky] 17:48
There’s not a consensus on the Supreme Court. There have only been a handful of Supreme Court cases dealing with the internet and social media. On the other hand, this past July 1, the Supreme Court handed down a case called Moody v. NetChoice.10 It involved the Florida and Texas laws that I mentioned, which prohibited social media platforms from engaging in content moderation. The Supreme Court didn’t decide the First Amendment question. It sent the cases back to the lower courts, but the Supreme Court gave a clear indication of what it’s going to do. Justice Kagan, writing for the Court, said social media companies are private platforms; they get to decide what to include or exclude.11 She relied especially on a 1974 Supreme Court case, Miami Herald v. Tornillo.12 There, the Supreme Court declared unconstitutional a Florida right-to-reply law. The Florida law said that anytime somebody is attacked in a newspaper, they have the right to reply in that newspaper. The Supreme Court unanimously declared that unconstitutional. The Court said publishers of newspapers get to determine the content of what’s in their newspapers. Likewise, in Moody v. NetChoice, the Supreme Court said social media platforms are private companies; they get to decide for themselves what to include or exclude.
[Eric Miranda] 19:13
So is that because of the idea that these social media platforms are hosts to their own content, or is there a difference if they start creating their own content?
[Dean Chemerinsky] 19:24
I think the answer depends on whether you’re talking about Section 230 or the First Amendment. Section 230 applies if the internet company is the passive receiver of content or is making choices to take things down. But if the social media company is creating its own content, then it’s outside the protection of Section 230. With regard to the First Amendment, it’s less clear. Certainly, if a social media company creates its own content, it then has First Amendment protection. But what if it’s just the passive receiver of information? Justice Barrett, in a concurring opinion, expressed doubts as to whether there’d be First Amendment protection. I disagree, and I think what Justice Kagan in the majority opinion was saying is that social media companies, like newspapers, get to decide what to include and what to exclude.
[Eric Miranda] 20:20
Thank you. And so I guess I would like to dive a little deeper into that. Is there a difference between the First Amendment free speech protections for companies versus an average person?
[Dean Chemerinsky] 20:34
The answer is no. The Supreme Court in 1978, in First National Bank of Boston v. Bellotti,13 said that corporations have free speech rights. The Supreme Court has continued to adhere to that. Now, there are places where regulations on corporations have been allowed that aren’t permitted against individuals. Corporations can’t contribute money to candidates for federal elective office; obviously, individuals can do so. But generally, corporations have the same speech rights as individuals.
[Eric Miranda] 21:08
Thank you. And so there’s this idea that internet companies and social media platforms have been innovating with AI and their own algorithms. There’s a debate about whether they are hosting others’ content or actually creating their own content by algorithmically suggesting posts to users. In this new world, where social media platforms are taking more of a hand in what content to recommend to their users, would that still fall under Section 230, or are we moving into a world where the First Amendment will come more into play?
[Dean Chemerinsky] 21:49
We don’t know the answer to that question. The issue appeared to be posed before the Supreme Court in 2023, in a case called Gonzalez v. Google LLC,14 but the Supreme Court didn’t reach it. So I think there is an interesting question: to the extent that social media platforms are using algorithms, does that make them much more of a controller of the content, and outside the realm of Section 230, than if they’re just the passive receiver of information? We just don’t know yet.
[Eric Miranda] 22:23
Thank you. And so, taking a step back and broadening our scope, I was wondering if you could talk about how Section 230 protections apply across companies and platforms of different sizes. Does it impact bigger companies the same way it impacts smaller, independent websites?
[Dean Chemerinsky] 22:46
I think what Section 230 does is say that the website has tremendous discretion as to what to include or exclude, and it can’t be held liable for those decisions. And in that sense, it’s not drawing a distinction among websites or social media platforms based on their size.
[Eric Miranda] 23:05
Right, but would you say that that protection matters more for a specific type of company, or do all companies enjoy the fruits of Section 230?
[Dean Chemerinsky] 23:22
I think all companies enjoy the fruits of Section 230. For a small company, liability could well be fatal. It wouldn’t take many lawsuits against it for its content before putting it out of business. For large social media companies, the real concern is that if Section 230 were changed, they would have to engage in far more content moderation. That, in turn, would really change what we can see on the internet and social media.
[Eric Miranda] 23:53
So speaking about those larger private companies, do they themselves have an incentive to regulate their own content? Why does the government need to intervene if the companies themselves have this incentive?
[Dean Chemerinsky] 24:08
The companies do engage in content moderation, at least most of them; we’re talking about the likes of Facebook and Instagram. And I think there’s a strong argument that it is better to let the private entities do content moderation than to have the government mandate it, because the private companies aren’t constrained by the First Amendment. The government, if it imposes a mandate, is limited by the First Amendment. So, I think we can have better content moderation if it’s done privately, assuming the private companies choose to do it.
[Eric Miranda] 24:46
Do those incentives exist right now, or is there a way we can encourage them?
[Dean Chemerinsky] 24:53
The incentives are largely about social pressure. I do think that there’s much more that can be done to encourage content moderation by private platforms and internet companies, and I would hope that we will continue down that path of finding ways to encourage those who run social media platforms and internet companies to engage in responsible content moderation.
[Eric Miranda] 25:20
Do you think that companies regulating their own platforms is too slow, and that’s why states and people are looking to the government to regulate and push those policies on companies?
[Dean Chemerinsky] 25:35
I think there’s a natural human tendency to want to stop the speech we don’t like. People look at the internet and social media and inevitably can find speech they don’t like, and that then creates pressure for government regulation.
[Eric Miranda] 25:52
Has this pressure always been around? Because Section 230 and the internet have been around for a long time. Why are we having these conversations now?
[Dean Chemerinsky] 26:03
I think the pressure to stop the speech we don’t like is always there. It’s interesting: from your perspective, you say the internet and social media have been around a long time. From my perspective, they’re relatively new media. When I began teaching in 1980, put aside when I went to college and law school, there was no internet and social media. When I began teaching First Amendment law in the academic year 1980-81, no one had ever heard of the internet or social media. I taught a class on media and the law, and we were talking about things like regulation of cable television. So I think that we’re dealing with a very new and dramatically different type of communications media, and it’s not surprising new issues are arising. I think also, because of the development of artificial intelligence and algorithms, things are very different than anything we’ve ever had to confront before with regard to media of communication.
[Eric Miranda] 27:08
That’s interesting. So, I want to dive back into your saying that for you, the internet feels like a newer invention. Do you feel that the type of speech that has arisen on the internet, the speech that people don’t like and want to regulate, is fundamentally different from the type of speech we had in the past?
[Dean Chemerinsky] 27:33
I think the internet, as I said, is fundamentally different from any other media. The ability of people to reach a large, mass audience quickly was never before in the hands of individuals like it is now. The ability to access almost infinite information instantaneously is different from anything we’ve ever had before, and those lead to positive results and to negative results.
[Eric Miranda] 28:01
Of course. So, should the federal government have a responsibility to at least set transparency requirements for how these internet companies regulate speech and what exactly is allowed?
[Dean Chemerinsky] 28:18
I think any regulation should be at the federal level, not state by state. We need uniformity in dealing with national media. I also see no problem with requiring that social media platforms disclose how they engage in content moderation. Obviously, required disclosure is compelled speech, and I have no doubt that the social media companies would object to having to disclose this. But there are all sorts of places where the law requires disclosure, and I know there would be interest, both among liberals and conservatives, in better understanding how content moderation decisions are made.
[Eric Miranda] 28:58
So when you speak about disclosure, is that about how they’re deciding their policies, or does that even go to the extent of how and what they are regulating?
[Dean Chemerinsky] 29:08
I think it’s explaining what their policy is. It’s not requiring that they explain each individual content moderation decision.
[Eric Miranda] 29:20
So that requirement that companies explain how they’re regulating, that doesn’t exist currently. Is that correct?
[Dean Chemerinsky] 29:29
Well, some states have adopted rules requiring transparency, an explanation of how they go about content moderation decisions.
[Eric Miranda] 29:39
So that is a part of what you’ve talked about previously with the Florida and the Texas laws?
[Dean Chemerinsky] 29:47
That’s correct.
[Eric Miranda] 29:47
And so I guess I want to change topics a little here. We’ve talked about platforms regulating their own content and being protected, really given a safe haven, by Section 230. Can platforms be sued for biased enforcement of their own moderation policies?
[Dean Chemerinsky] 30:10
Likely no, though a lot would depend on context. If, for example, you could show that a social media platform was consistently discriminating on the basis of race or sex or religion or sexual orientation in its content moderation decisions, then yes, it could be sued, because it would be violating laws that prohibit business establishments from discriminating on those grounds. California has such a law. But in general, because of Section 230, social media platforms can’t be held liable for that which they allow or that which they take down. In other words, they can’t be liable for their content moderation decisions.
[Eric Miranda] 30:49
Thank you. So then, do you see any potential developments in content moderation law? You’ve said that the Supreme Court hasn’t had to speak on the First Amendment issue, right? So where do you see this debate going in the future?
[Dean Chemerinsky] 31:09
I think there’s going to continue to be a tremendous amount of litigation with regard to internet and social media platforms. When can states regulate? When can the federal government regulate? When can there be tort liability against internet and social media platforms? I think that we’re just at the early stages of dealing with the First Amendment and the internet. There are lots more cases to come.
[Eric Miranda] 31:37
Since we’re at such an early stage, do you believe regulation of the internet and social media is a worthwhile goal?
[Dean Chemerinsky] 31:46
I think that the Communications Decency Act and Section 230 are desirable. I think it is good to say to social media platforms: you’re not liable for what’s posted there, and you’re not liable for what you choose to take down. So, I would favor the continuation of Section 230, though I know people on both the left and the right disagree.
[Eric Miranda] 32:08
So you are against removing Section 230, but does it need to be amended, or does there need to be new congressional legislation on this topic of regulation?
[Dean Chemerinsky] 32:24
I don’t think there is any consensus now as to how Section 230 should be changed, if it were to be changed, and Congress has not acted. But I think if there are desirable things to do, it is much better to do them at the federal level than to leave it state by state. It becomes almost impossible to run a national or an international media platform if the rules change state by state.
[Eric Miranda] 32:51
Thank you. And so, zooming back out: we’ve really talked about the debate in the US, and you’ve argued that we should have federal regulation as opposed to state regulation. Are we seeing something similar globally? Are national governments taking the reins and saying, this is how we’re going to regulate the internet?
[Dean Chemerinsky] 33:14
We’ve already seen, and I think we’re going to continue to see, much more effort in Europe than in the United States to regulate the internet and social media. There just isn’t the same commitment to freedom of speech on the continent that there is in the United States.
[Eric Miranda] 33:35
So when you say there isn’t the same level of commitment, how exactly does that come into play?
[Dean Chemerinsky] 33:42
In every European country, there are hate speech laws that would be unconstitutional in the United States.15 In Europe, there’s the ability to recover for things like moral rights that doesn’t exist within the United States. I think that overall, there is much more of a commitment to speech here than in Europe.
[Eric Miranda] 34:09
And so for the listeners of this podcast, how would you explain why it’s important that here in the US, we do have those protections for hate speech and free speech?
[Dean Chemerinsky] 34:21
In part, I think the reason hate speech is protected in the United States, where it’s not in Europe, is that in the United States, in order to regulate speech, the law can’t be vague or overbroad. It has to be clear about what’s prohibited and what’s allowed, and it can’t regulate a lot more speech than necessary. No one has found a way of defining hate speech that’s not vague and overbroad. Most European countries prohibit hate speech that “stigmatizes or demeans on the basis of race, sex, religion, sexual orientation.” But what do stigmatize or demean mean? Given that there isn’t a way to define hate speech that’s precise and not overbroad, there are just real vagueness and overbreadth problems. Some of it, though, is the realization that under the First Amendment, all ideas and views can be expressed. Period. Hate speech is vile, but it is expressing an idea and a viewpoint, so it’s protected by the First Amendment.
[Eric Miranda] 35:29
So I was wondering if you have any take on whether the protection of hate speech extends to speech that is artificially created. Is that an expression we should be protecting, especially with internet companies really moving toward artificially created content?
[Dean Chemerinsky] 35:46
It’s a great question. We don’t know the answer to it yet from the courts, but my answer is yes, and I base that on a couple of things. One is that even speech that’s artificially created is the result of programs that human beings wrote. So if artificial intelligence is creating speech, that’s because human beings have structured it to do so. I also worry about line drawing. I’m sure we all use spell checks and grammar checks and things like that. Do they become impermissible? But most of all, the reason I believe that artificial intelligence-generated speech is protected is that the value of the speech is in its being out there in the marketplace of ideas. When the Supreme Court first considered whether corporations have free speech rights, in the case I mentioned, First National Bank of Boston v. Bellotti,16 the Court said corporations have speech rights because the more speech that’s out there, the better. I think that would apply whether it’s a human being or a computer program that’s generating the speech.
[Eric Miranda] 37:04
So protecting the volume of speech is important, but does it matter if the speech itself is fundamentally different, or the context of that speech?
[Dean Chemerinsky] 37:16
I’m not sure I understand the question. What I’m saying is that we generally don’t care about the identity of the speaker. Corporations have the same speech rights as individuals. Likewise, I’d say it doesn’t matter whether the speech is generated by AI or by a human being; it’s still speech.
[Eric Miranda] 37:37
Got you. No, I think you answered my question perfectly. Kind of switching topics here: how would you say that free speech is important to you and to the idea of the internet as a growing and innovating product?
[Dean Chemerinsky] 37:55
The core of democracy is the ability of people to criticize the government, to push for change, to advocate for candidates to be elected or defeated. The internet and social media obviously are playing an enormous role with regard to elections. I think that the core of free speech is allowing people to hear many different ideas and to choose for themselves. The alternative is to let the government decide which ideas are palatable and which should be prohibited. And I don’t want to give any government that power.
[Eric Miranda] 38:36
That’s amazing. And so, talking about the government and returning to an earlier topic, I was wondering if you could go further into why the Florida and Texas laws that you were talking about are unconstitutional, and why they are not protecting this speech?
[Dean Chemerinsky] 38:54
The Florida and Texas laws say that large social media platforms can’t engage in content moderation; they can’t decide what to include or exclude. But the law is so clear that when it’s a private entity, it gets to decide what speech it will have. So if it’s a newspaper, it gets to decide what to include or exclude from the newspaper. There’s a famous Supreme Court case that involved a parade, and the Court said those who put on the parade get to decide what to include or exclude.17 And so for this reason, I think that the 11th Circuit was right in striking down the Florida law. From the perspective of Instagram, Google, YouTube: they get to make the choice of what to include on their platforms.
[Eric Miranda] 39:46
Do you think that the striking down of these laws will prevent or discourage other states from making similar laws? Or do you think that states will continue to try to pass their own laws?
[Dean Chemerinsky] 40:01
I think if these laws are struck down, we’re not likely to see laws that prohibit content moderation. But whether these laws are struck down or not, I think we’re going to see many other efforts to regulate the internet and social media, and it’ll be interesting to see what the courts do with those challenges.
[Eric Miranda] 40:20
So is the future of this debate in the courts, or is there something the federal government can do that will essentially preempt state efforts and say, no, the federal government has this issue?
[Dean Chemerinsky] 40:34
The federal government could adopt a statute that broadly prohibits state and local regulation of the Internet. I would favor such a statute because I believe any regulation needs to be national, not state or local. I don’t think such a law is likely to happen, but it’d be a good thing.
[Eric Miranda] 40:53
Why do you think it’s not likely to happen?
[Dean Chemerinsky] 40:55
There is such disagreement over the problem with the internet among liberals and conservatives; that’s why there has been no fix of Section 230 in Congress. And to the extent that Republican states are adopting laws imposing more regulation of the internet and social media, like the Florida and Texas laws, I think it is increasingly unlikely that Republicans in Congress would want to keep states from continuing to do so.
[Eric Miranda] 41:26
If there is such disagreement, how did we even get to Section 230? How did consensus arise around it?
[Dean Chemerinsky] 41:33
Remember, it was so early in the history of the internet. We’re talking about 1996, which you could say is either, as you did, a long time ago, or, from my perspective, not very long ago at all.
[Eric Miranda] 41:46
It was before I was born. So…
[Dean Chemerinsky] 41:49
I understand; it’s all about the perspective of when it was adopted. That doesn’t really matter to the underlying issue. But you have made me feel old.
[Eric Miranda] 42:00
I’m sorry about that, that was not my intention.
[Dean Chemerinsky] 42:03
I’m teasing. I’m teasing you, of course.
[Eric Miranda] 42:07
So let’s say that Section 230 didn’t exist. Could that section be passed today, or is it really just a product of its moment, when it was so early and we wanted to protect the internet as it was growing?
[Dean Chemerinsky] 42:23
I think it was very much a product of when it was adopted. It was very early in the history of the internet, and there was a desire to protect the internet so that it would flourish. I think if Section 230 came up for a vote today, it wouldn’t get adopted. But the question is, what would replace it?
[Eric Miranda] 42:41
That’s great. And so, since it was adopted and we have the internet as it is today, do you think the internet is in a good state, with content regulation and free speech, as a place where people can come and express their ideas?
[Dean Chemerinsky] 42:57
Overall, I’m a huge fan of the internet and social media for communication. But is it in a good state? I wish there were a lot less hate on the internet. I wish there were a lot less child sexual exploitation material on the internet. I wish there were a lot less material encouraging terrorism. I wish there were a lot less material that’s harmful to youth on the internet. But I don’t know of a way to regulate that would achieve those goals and still preserve the internet and social media as such special and unique platforms and media for communication.
[Eric Miranda] 43:35
Do you think this is an issue unique to the internet? Why exactly have the hate speech and the sexual predation that you’re talking about really flourished on the internet?
[Dean Chemerinsky] 43:47
This goes to what I said earlier. I do think it’s unique to the internet. The ability of anybody to put anything on the internet and reach a large number of people has no analog. There are no editors for the internet. So, with regard to hate speech, child sexual exploitation material, or material encouraging terrorism, it all goes up unless the internet company screens it. That’s part of content moderation.
[Eric Miranda] 44:16
So, really tying together everything we’ve talked about, would you say the internet should have an editor? And I guess, should that editor be the federal government?
[Dean Chemerinsky] 44:30
Absolutely not. The last thing I want to see is the government making the choices about what we can see and hear. I think social media companies need to be the editors. They should engage in content moderation, and they can probably do a much better job of it. My colleague Hany Farid at Berkeley has talked about how there are much better screening mechanisms for child sexual exploitation material than the ones in use now. I want to see internet companies use them.
[Eric Miranda] 44:59
And so, to really wrap up this conversation, is there anything you would like to address? If there is a listener to this podcast, what would you like them to leave with from today’s discussion?
[Dean Chemerinsky] 45:10
I hope that we’ve illuminated why the internet and social media are so incredibly important for communication. I hope we’ve also illuminated why it’s so difficult to regulate them: if we required internet companies to be liable for any harms they caused, they would then want to screen everything posted. And when you’re dealing with the millions and billions of things posted all the time, that’s just insurmountable, and we’d have a lot less speech. That’s why I agree with Jeff Kosseff when he said that Section 230 is the 26 words that created the internet, and I believe the internet is the most important development for free speech since the invention of the printing press.
[Eric Miranda] 46:54
Thank you, Erwin, it was really nice to have you speak with us today.
[Dean Chemerinsky] 45:58
Truly my pleasure. Thank you for the conversation.
[Meg O’Neill] 46:12
You have been listening to the Berkeley Technology Law Journal podcast. This episode was created by Eric Miranda, Alexa Chavera, Kosha Doshi, and Paul Wood. The BTLJ podcast is brought to you by podcast co-editors Juliet Draper and Meg O’Neill and junior podcast editors Braxton Cannon, Lucy Huang, Paul Wood, and Joy Fu. Our executive producer is BTLJ Senior Online Content Editor Linda Chang. BTLJ’s editors-in-chief are Edlene Miguel and Bani Sapra. If you enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. Write to us at BTLJpodcast@gmail.com with questions or suggestions of who we should interview next. This interview was recorded on October 10, 2024. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.