SPEAKERS Brandie Nonnecke, PhD; Anan Hafez; Ian Smith
Podcast Transcript:
[Anan Hafez] 0:07
Welcome to the Berkeley Technology Law Journal podcast. I’m your host, Anan Hafez. Today’s episode is all about Section 230 of the Communications Decency Act of 1996.[1] Now, you might be asking yourself, what is Section 230? And why should I care? Well, it’s one of the most important and impactful pieces of legislation in the tech world. Section 230 is what enables online platforms like social media websites, blogs, and forums to exist in their current form. And it’s what allows users to post and share content without fear of legal repercussions. Essentially, Section 230 states that online platforms cannot be held liable for user-generated content.[2] So if someone posts something defamatory, for example, the website itself cannot be sued for it. This immunity from legal action has allowed the free flow of information and expression online, but it has also created a whole host of new legal and ethical dilemmas. One of the most pressing issues today is the question of content moderation. With the rise of social media and online communities, platforms like Facebook, Twitter, and YouTube have become some of the most powerful gatekeepers of information in the world. But with great power comes great responsibility. And many argue that these companies have not done enough to prevent the spread of hate speech, disinformation and other harmful content. For example, is Google protected under Section 230 for harmful content posted by terrorist groups? That’s the question at the center of the eagerly awaited Supreme Court case, Gonzalez v. Google.[3]
In 2015, Nohemi Gonzalez, a US citizen, was killed in a terrorist attack in Paris, France, one of several coordinated terrorist attacks that same day. The day afterwards, the Foreign Terrorist Organization ISIS claimed responsibility by issuing a written statement and releasing a YouTube video. Gonzalez’s father filed an action against Google, Twitter, and Facebook claiming, among other things, that Google aided and abetted international terrorism by allowing ISIS to use its platform, specifically YouTube, quote, “to recruit members planning terrorist attacks, issue terrorist threats, instill fear and intimidate civilian populations.” With implications for the future of Section 230, free speech, online platforms, and liability in the digital age, this case has the potential to shape the landscape of the internet as we know it.
Today, our colleague Ian Smith will discuss Gonzalez v. Google and dive into the complex and contentious world of Section 230 with our guest expert, Dr. Brandie Nonnecke. Brandie is an expert in information and communication technology, law, policy and governance, with a focus on responsible AI and platform governance. She is the founding director of the CITRIS Policy Lab at UC Berkeley and an associate research professor at the Goldman School of Public Policy, where she directs the Tech Policy Initiative. She’s also the director of Our Better Web, a program that addresses online harms, and the co-director of the Berkeley Center for Law and Technology at Berkeley Law. We are thrilled to have Brandie on for today’s episode of the BTLJ podcast. We hope you enjoy the conversation.
[Ian Smith] 3:38
Hello, I’m Ian with the BTLJ podcast team. Brandie, we’re excited to have you and welcome.
[Brandie Nonnecke, PhD] 3:44
Yeah, thank you so much for having me today.
[Ian Smith] 3:47
So in broad strokes, what is Section 230? And how does it impact us and the internet?
[Brandie Nonnecke, PhD] 3:55
The internet would not be what it is today if it were not for Section 230. Section 230 was passed in 1996 as part of the Communications Decency Act.[4] Think back to this time when the internet was just starting and all these platforms were growing. There was this overwhelming fear that if platforms were held responsible for all user-generated content, they would essentially over-police content and be overburdened with lawsuits. So Representatives Chris Cox and Ron Wyden introduced this legislation, and here we are today. Now, Section 230 is, somewhat shockingly, all over the news. It’s in the headlines, and a lot of people know about it.
[Ian Smith] 4:50
And so Section 230, even at its inception, did contain certain exceptions. I was wondering if you could talk about these a little bit and how they’ve been implemented in comparison to the general protections of 230.
[Brandie Nonnecke, PhD] 5:06
Alright, so let’s be clear. First I’d like to read the protection part of it, Section 230(c)(1). That’s really the big thing, and many listeners might have heard Section 230 referred to as the 26 words that created the internet. So if you’ll bear with me, I’d like to say those 26 words right now: “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, there are three elements there. You have to be a provider or user of an interactive computer service; you cannot be treated as the publisher or speaker; and the third element is that the information must be provided by another party, a third party. Now, in your question here, you ask, well, are there exceptions? Yes, there are exceptions. Platforms can’t post or knowingly share illegal content, and one of the most important examples is child sexual abuse material. If a platform becomes aware of child sexual abuse material, they must remove it, report it to the National Center for Missing and Exploited Children, and make sure that it is recorded. So it’s not just a carte blanche, free-for-all protection for platforms to allow any content: if there is illegal content and they’re aware of it, they must remove it.
[Ian Smith] 6:47
And so, regarding that illegal content, how have companies managed it? Because generally, from my understanding, Section 230 is a pretty broad protection for internet platforms, but under these specific exceptions they’re held accountable for their users’ content. What are the specific platform or website policies that are directed towards addressing these exceptions?
[Brandie Nonnecke, PhD] 7:14
When you say internet policies, what are you referring to?
[Ian Smith] 7:18
I mean specific platform policies. So what does Google do to really try and address this? And what I’m really curious about is how this might be used to address a potential change in precedent or change in Section 230.
[Brandie Nonnecke, PhD] 7:35
So I think what we’re trying to get at here is, we know that there’s a lot of awful content on platforms that we would seek to mitigate. But oftentimes, that is awful but lawful content. We see harassing language, and we’re going to talk about the Gonzalez case, where you can think about YouTube and how it might be used as a radicalization tool for extremist groups, which of course we don’t want. But right now, there are certain areas where platforms have to remove illegal content. Child sexual abuse material was one that I mentioned; another might be the sale of illicit drugs on the platform. If they see that content, they should be removing it. But again, a lot of this really falls under platforms’ own decisions about what types of awful but lawful content they want to remove themselves. And the big issue here in the United States is we have the First Amendment, and that First Amendment applies to platforms. They have First Amendment rights to editorial control over the content that they put on their platform. And so it’s pretty difficult to put in place any reforms to Section 230 that will not run afoul of platforms’ First Amendment rights.
[Ian Smith] 9:01
So with that reference to Gonzalez v. Google, and the First Amendment, I would like to dive into that case in specifics. This is a recent Supreme Court case that has brought Section 230 into the limelight. Do you mind providing a brief summary of the important topics in the case and the standing?
[Brandie Nonnecke, PhD] 9:20
Yeah, this case has been absolutely fascinating and absolutely heart wrenching. The Gonzalez case is based on events that happened in November 2015, when armed men affiliated with a terrorist organization killed 130 people in six coordinated attacks across Paris. One of the victims was Nohemi Gonzalez, a 23-year-old student, the only American to die in these attacks. Nohemi’s parents wanted to address this problem. Their point was that because these terrorist videos were being shared on YouTube and presented by the recommender system, perhaps to individuals who might be the most susceptible to their manipulative appeal, Google, which owns YouTube, should be held responsible. However, we have Section 230, which says platforms are not liable for third party content. And so that’s really what is at the core of this case: whether or not Google and YouTube should be held responsible, not necessarily for having that content on the platform, but for targeting that content through the recommendation systems, which is, I think, a misunderstood technical feature of the internet. So I’d like to do a deep dive into that, if you’d like to talk a little bit about recommender systems and what they are.
[Ian Smith] 11:04
Yeah, I’d actually love to hear what you have to say on the algorithms, how they can be manipulated, and how Section 230 applies to them and whether it should or should not apply.
[Brandie Nonnecke, PhD] 11:15
Yeah, exactly. So the internet today, as everybody experiences it, I would say 98% of the websites, search engines, and platforms that you go to are using a recommender system. A recommender system will take certain parameters. So for example, when I’m doing a query on YouTube, and let’s use YouTube as the example since that’s the Gonzalez case, maybe I’m searching for funny kitten videos, right? What comes up after that search is driven by the recommender system. It’s going to say, oh look, Brandie really likes funny kitten videos where they’re doing x, y and z. So now when she’s searching for funny kitten videos, we know she’s likely to like these ones, because she’s watched them in their entirety in the past, she’s even liked them, she’s reshared them. So next time she searches, we’re going to give that content back to her. And this really boils down to this idea that platforms are trying to optimize for long-term engagement on their systems. While it seems innocuous at first, when I go to YouTube and want to search for that kitten video, of course I want to see the kitten video that makes me laugh really hard. But it is applied across all types of things that we view. So for example, if I’m an individual who might be starting to get interested in conspiracy theories, and I start to watch more and more of them, and I start to look up on YouTube, well, what about the 2024 presidential election? Well, maybe YouTube would learn that, hey, Brandie really likes conspiracy theories, let’s give her some about the 2024 presidential election. I don’t think that the platforms set out to build their recommender systems in a way that has this fatal flaw. But here we are. And so I think it’s more about making sure that recommender systems are built in a way that tries to mitigate some of these harms, rather than getting rid of them full stop.
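[Editor’s note: To make the engagement-optimization idea concrete, here is a minimal, purely illustrative sketch of the kind of scoring a recommender might perform. It is not YouTube’s actual algorithm; the signals, weights, and names are hypothetical. The point is only the incentive structure: a system that rewards watch completion, likes, and reshares keeps surfacing more of whatever a user already engages with, whether that is kitten videos or conspiracy content.]

```python
# Purely illustrative sketch of an engagement-optimizing recommender score.
# The signals and weights are hypothetical; real systems use learned models
# over far richer data.

from dataclasses import dataclass

@dataclass
class UserHistory:
    watch_completion: float  # fraction of similar videos watched to the end (0 to 1)
    liked: bool              # the user has liked similar videos
    reshared: bool           # the user has reshared similar videos

def engagement_score(topic_match: float, history: UserHistory) -> float:
    """Score one candidate video for one user; higher scores rank higher."""
    score = 2.0 * topic_match                  # relevance to the current search
    score += 1.5 * history.watch_completion    # reward: similar videos watched fully
    score += 1.0 if history.liked else 0.0     # reward: explicit likes
    score += 1.0 if history.reshared else 0.0  # reward: reshares
    return score

# Ranking by this score keeps surfacing more of what the user already engages with.
candidates = {
    "funny_kitten_video": engagement_score(0.9, UserHistory(0.95, True, True)),
    "unrelated_news_clip": engagement_score(0.2, UserHistory(0.10, False, False)),
}
print(sorted(candidates, key=candidates.get, reverse=True))
```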
[Ian Smith] 13:27
Yeah, and there are differences, though, between various recommender systems, right? YouTube has its recommended videos, which is a pretty direct recommendation (I think most people know the suggestions in the top right corner of YouTube), and Google lists websites in a particular order. So there are different degrees of algorithms, I guess you could say. And so, if it’s even possible, how should lawmakers draw a line between the various kinds and levels or strengths of algorithms?
[Brandie Nonnecke, PhD] 14:01
Yeah, this is a really tough question. I actually filed an amicus brief in collaboration with the Center for Democracy and Technology in the Gonzalez case, because the way that Gonzalez framed whether or not YouTube should be held liable for the recommendation of the terrorist content really implied that platforms should be held liable for the recommendation of all content. And if they are, the worry is that platforms won’t use recommender systems. And if you don’t use recommender systems, any website that you go to where you’re searching for something would be completely unusable.
Recommender systems vary drastically across different platforms, and they’re a moving target even on a single platform. The recommender system, the algorithm, might change from day to day based on whatever the platform is trying to optimize for. So maybe they’re optimizing for short-term engagement, or they’re optimizing for long-term engagement where they want you to come back to the platform day after day after day. There has been legislation proposed in the United States that would mandate that platforms provide more transparency on how their recommender systems work, and also give end users greater control over those recommender systems. That legislation has not passed yet in the United States. However, there is a piece of legislation that has passed in the European Union called the Digital Services Act.[5]
The Digital Services Act does mandate that platforms have that transparency around their recommender systems and what parameters, or in other words, what criteria and what data, are used to provide a response to people when they make a query. And users do have control to change their recommender system and actually say, hey, Google, when I query results, I don’t want you to use any of my personally identifiable information. Now, even though the DSA has been passed into law, it won’t actually go into effect until April of next year. And so right now we are in the thick of figuring out how to actually implement these requirements in practice, which is quite the herculean task.
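[Editor’s note: As a rough illustration of what that kind of user control could mean in practice, here is a hypothetical sketch, not any platform’s real API, of a recommender that can rank results without using personal signals when a user opts out of profiling. The catalog, signals, and weights are invented for the example.]

```python
# Hypothetical sketch of a recommender with a "no profiling" option, in the
# spirit of the DSA's transparency and user-choice provisions. All names and
# data here are invented purely for illustration.

CATALOG = [
    {"id": "v1", "topic": "kittens", "popularity": 0.9},
    {"id": "v2", "topic": "elections", "popularity": 0.7},
    {"id": "v3", "topic": "kittens", "popularity": 0.4},
]

def recommend(query_topic, watch_counts=None, use_profiling=True):
    """Rank catalog items; ignore personal watch history when profiling is off."""
    def score(item):
        s = (1.0 if item["topic"] == query_topic else 0.0) + item["popularity"]
        if use_profiling and watch_counts:
            # Personal signal: boost topics this user has watched a lot.
            s += 0.5 * watch_counts.get(item["topic"], 0)
        return s
    return [item["id"] for item in sorted(CATALOG, key=score, reverse=True)]

# The same query can come back in a different order with profiling on vs. off.
print(recommend("kittens", {"elections": 10}, use_profiling=True))   # personal history dominates
print(recommend("kittens", {"elections": 10}, use_profiling=False))  # query relevance dominates
```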
[Ian Smith] 16:23
It definitely sounds like a fraught process, but one that’ll be very interesting and will potentially, hopefully, be helpful to American lawmakers should they want to make changes of their own.
[Brandie Nonnecke, PhD] 16:34
Yeah, I think so. And just one more thing on this: the responsible design idea. I recently published an op-ed in Wired with Professor Hany Farid. He’s a professor at UC Berkeley in the School of Information and the College of Engineering. Because in the United States we have the First Amendment, putting in place laws around harmful content is extremely hard. So in our piece, we don’t focus on harmful content, we focus instead on harmful design.[6] Part of that would be greater transparency by platforms on what they are trying to optimize for in their recommender systems. How are they putting in place appropriate safeguards to ensure that they’ve looked to see if there are harmful spillover effects? And if there are, what do we need to do to mitigate those harmful spillover effects? So I think that the discussion really needs to change and get away from Section 230, which is just fraught with issues, and instead hold platforms accountable using a product liability tort approach: saying, hey, if you have negligently designed a feature of your system, you should be held accountable.
[Ian Smith] 17:51
So turning to the present case, how do you think the Supreme Court will look at this, from a strictly Section 230 standpoint? How do you think they will end up ruling, or what do you think will ultimately go into the opinion they write?
[Brandie Nonnecke, PhD] 18:09
I encourage people to listen to the oral arguments or review the transcript, because even in there, the Justices said, hey, you know, we’re not the nine best people to weigh in on this issue, which is quite interesting, because they took the case. I think that there are probably two things that could happen. Number one, they might not issue a substantive ruling at all; they might pass it back and say, look, we took this case, but having heard the oral arguments, we are not in the best position to weigh in on this, and instead push it back to Congress to actually provide more clarity on how Section 230 should be interpreted now versus back in 1996. The second outcome is we will end up exactly where we were at the beginning, and they will rule in favor of Google, saying the platform should not be held liable for the recommendation of content. But in their writing, I think that they will again push it back to Congress to provide more clarity.
[Ian Smith] 19:19
So turning to Congress, there is talk on both sides of the aisle regarding changes to Section 230. I was wondering if you could lay out where both sides stand and also address concerns over the risks. People fear that changes could be used as a political weapon, in the sense of pushing platforms either to police content or not to police it at all. What are the dangers there, and what are politicians in Congress considering?
[Brandie Nonnecke, PhD] 19:50
Yeah, actually, the CITRIS Policy Lab, which I direct, has a database of all Section 230 related legislation on our website citrispolicylab.org. CITRIS is c-i-t-r-i-s. So I encourage listeners to go and check out that database because it does provide summaries of each of the pieces of legislation. Now, having said that, I honestly think that Section 230 has inadvertently become the boogeyman of all harmful content online. And instead, again, I think we should be focusing on harmful design, not on harmful content. To me, if we design recommender systems that prioritize the spreading of, you know, radical rather than rational content, that’s a problem. And so, instead of trying to address the harmful content directly, I think better ensuring that we’re building systems in a way that is safe is going to address the harmful content. It’s really addressing the cause and not just the symptoms.
[Ian Smith] 20:56
Is there a country or system currently in operation that focuses more on design rather than on something like the First Amendment or Section 230, whether that’s the American version of Section 230 or another country’s equivalent?
[Brandie Nonnecke, PhD] 21:14
Yeah, actually, and the United States is one of them. In the United States, in California, we passed the California Age-Appropriate Design Code Act.[7] That’s getting at this issue of requiring all platforms or websites that people under 18 years old are likely to access to have privacy and security mechanisms in place by default. Now, there is a lawsuit against that law right now. I think it’s being pushed by NetChoice, which is also behind the lawsuit that’s gone to the Supreme Court over the Texas law, where Texas passed essentially a “must carry” rule saying that platforms cannot moderate content. So yes, California is trying to do this. But there is pushback from the platforms, because they are claiming that it violates their First Amendment rights by requiring them to put in place certain features or make changes to their content and their editorial control. I think we’re going to see First Amendment pushback on this more and more frequently in the United States. Now, in other countries, the UK has a very, very similar law. It might actually be a bill at this stage; I don’t know if it’s been passed into law, but it’s extremely similar to the California Age-Appropriate Design Code Act and essentially trying to do exactly the same thing. Now, I mentioned the Digital Services Act earlier. The Digital Services Act does have a mechanism in place where they are trying to facilitate platforms sharing data with independent third parties to do research that will inform oversight. So perhaps that is a mechanism where the European Union could lean on researchers to identify systemic risks. They call out recommender systems as being a feature of platforms that can cause these systemic risks. So perhaps the DSA would allow independent researchers to get access to platform data, do research, and identify that the recommender system, for example, has been prioritizing the sharing of extremely harmful content to teenage girls, let’s say. Then the EU would have that evidence and would be able to hold the platform accountable to mitigate those identified systemic risks.
[Ian Smith] 23:42
And looking at individual households, wouldn’t this necessitate individual user accounts that are tracked so content can be tailored to each person? Because if you have a family of four under the same roof, how do you tailor that content to each individual? Is it something that would track them based on individual user accounts? To me, that seems like an obstacle to this kind of policy.
[Brandie Nonnecke, PhD] 24:11
Yeah, that’s the same pushback that we’re hearing with the California Age-Appropriate Design Code Act, because several people might be sharing one computer, right? And we’re all logging in under one account on Instagram and looking at content, for example. And this is the problem, right? Anything they roll out is just going to be rolled out for everyone on that account: minors are required to be protected under the law, but others on the same account would probably end up with the same experience as a youth. So yeah, it’s a challenge. But also, a lot of people have their own accounts, and you can see that they’re minors. That’s another issue: platforms have claimed that it’s difficult for them to identify who is and who is not a minor, this age verification problem. I don’t know if you’ve heard, but there’s a law proposed in Utah that would require parental consent for any youth or individual under the age of 18 to access and use social media platforms. And of course the platforms are arguing, well, it’s extremely difficult for us to verify age. Which I think is sort of an excuse. I think that they can.
[Ian Smith] 25:30
So you think that it’s pretty easy to identify people by the content they consume, but also through regulatory processes where you are required to identify someone as a youth?
[Brandie Nonnecke, PhD] 25:44
I think so. If they’re going to be targeting advertising to certain people based on their age when they’re over 18, without those users submitting their age, then yeah, there’s a lot of data that they can triangulate to say, hey, it’s probably pretty likely this person is under 18.
[Ian Smith] 26:05
Are there laws that currently prevent companies like Google and Facebook from tracking the age of minors specifically, laws that impede their ability to do this? Or is it really just them saying, oh no, this is hard, don’t regulate this?
[Brandie Nonnecke, PhD] 26:21
I think it’s more them saying, oh, no, no, no, no, we’ve got it under control. It’s okay. Don’t look behind the curtain. And by the way, it’s too difficult for us to do that. Now, there are laws in place like the Children’s Online Privacy Protection Act that require these companies to put in place safeguards.[8] So isn’t this just bolstering the established law to say, hey, let’s actually make sure that we’re protecting our youth online?
[Ian Smith] 26:49
I have some more questions, or at least one more question, on the algorithm. When you’re talking about better design, are you talking about seeing better recommendations, or would there actually be an active component of suppressing certain material based on the individual user? So if there’s particularly harmful content, would they be unable to find it even if they go looking for it, versus a situation where, if they go looking for it, they’ll be able to find it, but it will just never be recommended to them?
[Brandie Nonnecke, PhD] 27:17
Sure, yeah. I actually think that suppression of content may be a better choice than removing content full stop. But in all of this, it’s platforms making the decision about who they allow on their service, what types of content they allow, and what types of content they suppress. Legislation can never touch that, because it would violate the First Amendment. It’s more about the platforms taking responsibility and being transparent about the decisions that they’re making. Platform trust and safety teams are only about 15 years old, and the field is really starting to get more organized and cohesive. Whereas 15 years ago the options were maybe just blocking someone or removing content, there are so many more interventions we can do now. For example, like you said, don’t remove the content, just make sure it doesn’t go viral. Or, you know, on Twitter, if you see a tweet from somebody that’s making a false claim, they now add a clarification underneath from a verified source that says, maybe you should not believe that tweet, look at this research or these facts from this trusted institution. We’re really at that moment right now of experimenting with different types of interventions that can be helpful.
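[Editor’s note: As a concrete illustration of the “suppress or label rather than remove” interventions described above, here is a hypothetical sketch of how a trust and safety pipeline might encode them. The flags, downranking factor, and label text are invented for the example and are not any platform’s actual policy.]

```python
# Hypothetical sketch of "suppress, don't remove" style interventions.
# Flag names, thresholds, and label text are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_rank_score: float
    flagged_as_misleading: bool = False
    flagged_as_borderline: bool = False

def apply_interventions(post: Post) -> dict:
    """Return how the post should be handled instead of a binary remove/keep."""
    rank_score = post.base_rank_score
    label = None
    if post.flagged_as_borderline:
        rank_score *= 0.2  # heavily downrank: still findable, but unlikely to go viral
    if post.flagged_as_misleading:
        label = "See context from a trusted source"  # attach a clarification, keep the post
    return {"post_id": post.post_id, "rank_score": rank_score, "label": label}

print(apply_interventions(Post("p1", 0.8, flagged_as_misleading=True)))
print(apply_interventions(Post("p2", 0.9, flagged_as_borderline=True)))
```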
Now, one of the biggest problems with doing this work is that the platforms are increasingly putting their API access, the application programming interface, which is how most researchers and developers gain access to platform data, behind extremely expensive paywalls. Elon Musk at Twitter just put API access at $45,000 a month, which essentially means that researchers can no longer do research using Twitter data, and that’s a problem. How will we understand what’s happening on the platforms, and how will we actually develop appropriate interventions engaging the larger research community? It’s a very troubling time.
[Ian Smith] 29:38
Yeah. And is that something that the DSA would provide? It would require Twitter, at least in Europe, to provide that data to researchers or at least a few independent researchers?
[Brandie Nonnecke, PhD] 29:49
Yes, that is the goal. We also have a piece of legislation that was introduced in Congress called PATA, the Platform Accountability and Transparency Act, which essentially did the same thing. It said, hey platforms, you can keep operating, but in order for you to receive Section 230 immunity, you need to provide data to independent researchers. That legislation didn’t pass; I’m hopeful it’ll be reintroduced. I did write a piece in Science last year where I compared the Platform Accountability and Transparency Act in the United States with the European Union’s Digital Services Act.[9] So if anybody’s interested, please go check that out. Now, the DSA was passed into law. In there, they will compel the platforms to make data available to independent researchers. But there are all of these caveats. The independent researcher has to be verified. So what does that process look like, and who does the verification? What are the criteria for verifying whether or not an individual is or is not a researcher?
Second, and I think this is actually one of the biggest problems with the Digital Services Act, it requires the researcher to state the research question or hypothesis that they want to investigate and identify the types of data that they think they need. Then it goes to the national entity that will be established out of this act. They’ll review it and say, okay, yes, we agree, this is a valid research question or hypothesis, and it seems like that’s the data you need. Then it goes to Ireland, because all of the companies are headquartered there, and for the final decision Ireland works with the company. If they all agree and everything goes well, then the data will be shared with that researcher, but that researcher has to be in the European Union. Okay, this sounds great, right? But think about the time that it’s going to take to go through that process. If we have an election and something is going terribly wrong, we very likely won’t know until after the fact. That’s my number one problem.
My number two problem is that the researchers have to define the research question and a hypothesis with a pretty tight level of specificity, and there’s a difference between inductive and deductive research. What they’re prioritizing is deductive research. Even though the Digital Services Act says that its goal is to identify new systemic risks, I don’t think it’s actually going to do that, because deductive research means you essentially identify the systemic risks beforehand and then just verify whether or not those risks are present. Inductive research, by contrast, is where you look at the data, the system, the platform in a more exploratory analysis and ask, oh, what are the new systemic risks? So to me, it’s moving us in the right direction, but unfortunately there are some features of the Digital Services Act that I think will actually undermine those goals.
[Ian Smith] 32:58
You said that the companies have to agree to what’s going to be shared. That seems like a conflict of interest.
[Brandie Nonnecke, PhD] 33:07
Yeah, and there’s a little bit of pushback available to them. The platforms can claim that the data is proprietary, but that’ll be reviewed by the nation-state entity, and they’ll verify whether or not it really is proprietary. Also, all of the data has to be compliant with data protection law, and of course the European Union has one of the strictest data protection laws, the General Data Protection Regulation, or GDPR.[10] So at the end of the day, it will be interesting to see what data the researchers actually get. Now, quickly, in the US, PATA I think actually had a bit of a stronger mechanism, where it worked through NSF, the National Science Foundation, to verify and validate the research question. Then it actually went to the Federal Trade Commission, and the Federal Trade Commission was the broker with the platform. They could really push back against the platform: if the platform was saying, no, no, no, it’s proprietary, we can’t share it, the FTC would have the final word, and they could overrule the platform and say, no, we don’t believe you that this is proprietary, you have to share the data.
[Ian Smith] 34:19
Interesting. I did not know that about the DSA. That is the end of the questions that I have. I wanted to leave it open to you, if you had anything else to say about where you would want to see this going in the future or if there are specific policies you wanted to address or talk about before we go, but if not, happy to end it here.
[Brandie Nonnecke, PhD] 34:40
I think platforms have a lot of power. They’re a place of commerce and of public deliberation, and as such, they should be held responsible for that power and for all of the data that they hold. So I would love to see more of a requirement for them to be transparent and to work with independent researchers. One of the big concerns I have right now is that the platforms are going to keep putting the data that you would access through their APIs behind a very, very steep paywall, so that nobody can actually do that research. And I think that maybe, just maybe, Congress could step in there and make them provide access through their APIs to researchers.
[Ian Smith] 35:43
Yeah, that sounds like a great change. Well, Brandie, thank you for your time. I really appreciate it and I had a great time talking with you today.
[Brandie Nonnecke, PhD] 35:52
It was my pleasure. Thank you so much.
[Anan Hafez] 35:53
Thank you for listening. The BTLJ podcast is brought to you by podcast editors Isabel Jones and Eric Ahern. Our executive producers are BTLJ senior online content editors, Katherine Wang and Al Malecha. BTLJ editors in chief are Dylan Houle and Jessica Li. If you enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify or wherever you listen to your podcasts. If you have any questions, comments or suggestions, write us at btljpodcast@gmail.com. This interview was recorded on March 17, 2023. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.
Further reading and references:
[1] 47 U.S.C. § 230 (1996).
[2] Id.
[3] Gonzalez v. Google LLC, No. 21-1333 (2022-2023 Term).
[4] Supra note 1.
[5] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), 2022 O.J. (L 277) 1.
[6] Farid, H., & Nonnecke, B. The Case for Regulating Platform Design. Wired. March 13, 2023.
[7] Cal. Civ. Code § 1798.99.29 (2022).
[8] Children’s Online Privacy Protection Act, 15 U.S.C. §§ 6501-6506.
[9] Carlton, C., & Nonnecke, B. EU and US Legislation Seek To Open Up Digital Platform Data. Science.org. Feb. 10, 2022.
[10] General Data Protection Regulation, Regulation (EU) 2016/679, 2016 O.J. (L 119) 1.