[Meg O’Neill] 00:08
Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Meg O’Neill and I am one of the editors of the podcast.
Today we are excited to share with you a conversation between Berkeley Law LLM student Franco Dellafiori and Professor Bertrall Ross. Professor Ross is a Professor of Law at the University of Virginia and the Director of UVA’s Karsh Center for Law and Democracy. Previously, Professor Ross taught at our very own Berkeley Law, where he received the Rutter Award for Teaching Excellence. Today, Franco and Professor Ross will discuss how artificial intelligence will impact elections like the November 2024 election and the general state of our democracy for years to come.
We hope you enjoy the podcast!
[Franco Dellafiori] 00:58
Hello, I’m Franco Dellafiori, your host for this episode, and today I have with me Professor Ross. Hi, Professor Ross, thank you for joining us today.
[Bertrall Ross] 01:08
Thanks for having me. It’s great to be here.
[Franco Dellafiori] 01:10
Amazing. So today we’re going to discuss very interesting topics regarding democracy, AI, and how they interact with each other in this upcoming election, especially in this crazy year where we have 73 elections happening globally at the same time. So this will be very good to discuss at this exact point in the year. I wanted to start by asking: what brought you to this exciting field?
[Bertrall Ross] 01:43
Yeah, it’s a great question. And, you know, it’s a bit ironic, because I don’t think of myself as all that technologically sophisticated, actually, quite the opposite. But I’ve been a democracy scholar throughout my professional career, and one of the issues I’ve been interested in with respect to democracy is what I refer to as the information problem: the challenge people face in obtaining information about what their elected officials do so that they can hold those officials accountable. You know, the challenge is that there’s just so much information out there. We’re overwhelmed with the amount of information, and we only have so much time to acquire and process it. And so I started to think: can technology play a role in reducing the costs of acquiring and processing information? That’s how I came around to studying the role of technology, the role that artificial intelligence might be able to play in making information more efficient and easier to access. But then, of course, once you get into this space and start to think about technology and its relationship to democracy, you start to think more globally about other aspects of democracy, whether it be accountability, responsiveness, or participation, and you start to reflect on the roles that technology plays, whether it be algorithms or other technological tools that can impact how our democratic system operates. So here I am now in this space, and it’s been a very interesting and exciting run so far.
[Franco Dellafiori] 03:20
Everything sounds very interesting and challenging. We’ll come back to many of those topics later. Also, we know that you’re currently teaching AI and Democracy at UVA Law. Can you tell us more about this course? What’s it been like teaching it during this global election year?
[Bertrall Ross] 03:41
Yeah, it’s been an exciting and angst-ridden time. You know, many people are quite anxious about the election to come, and they’re also anxious about the role of technology in our lives and how it impacts the way we operate as individuals and as a society. This AI and Democracy course is something that I thought of last year, when I was really starting to engage with these questions about the relationship between technology and democracy. And I thought, well, what better way to truly learn a subject than to teach it? There’s a kind of forcing mechanism there: you don’t want to embarrass yourself in front of your students, so you’d better actually know what you’re talking about. So I spent a lot of the summer getting myself up to date, understanding as much as I could about how the technology operates and how it might interact with democracy. The course has several components. It starts off by thinking about the current crisis of American democracy: what are the sources of this crisis? At that point we’re not really thinking about it from a technological perspective; we’re thinking more globally about the causes of democratic backsliding in so many countries throughout the world, including the United States. Then we introduce the role of technology and think about what contributions technology makes to these crises. We also reflect on the role of human psychology, because that’s a component of it, and sociology is a component as well. Then we get into the different aspects and uses of technology that are causing concern. We look at bots. We look at deepfakes. We look at filter bubbles and echo chambers, which are all related to each other. We look at algorithms and how they impact the way legislatures draw district lines, giving them the capacity to gerrymander districts in the United States more efficiently. We look at how campaigns use algorithms to do what political scientists describe as microtargeting: when they canvass and knock on doors to ask for your vote and give you information about their candidate, they don’t canvass everyone. Instead, they score people on the basis of their past voting history and their partisan orientation. That leaves a lot of people out in the cold, in the sense of not having anyone knock on their door and give them information. So we’re studying it from a variety of perspectives to understand the impacts of technology on the crises of democracy. Is it us? Is it technology? Is it both? How does each contribute, and what can we do about it?
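To make the contact-scoring mechanic Professor Ross describes concrete, here is a minimal sketch of the kind of heuristic a campaign tool might apply. The fields, weights, and cutoff logic are hypothetical, not drawn from any real campaign software.

```python
# Minimal sketch of campaign contact scoring (hypothetical fields and weights).
# Voters with richer vote histories and stronger partisan scores rank higher,
# so low-propensity voters are the ones left off the canvassing list.
from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    past_elections_voted: int  # out of the last 4 elections
    partisanship: float        # 0.0 (other party) .. 1.0 (our party)

def contact_score(v: Voter) -> float:
    """Heuristic score of how 'worth it' a door knock looks to a campaign."""
    turnout_propensity = v.past_elections_voted / 4
    return 0.6 * turnout_propensity + 0.4 * v.partisanship

voters = [
    Voter("A", past_elections_voted=4, partisanship=0.9),
    Voter("B", past_elections_voted=0, partisanship=0.5),  # never contacted
    Voter("C", past_elections_voted=2, partisanship=0.7),
]

# The campaign canvasses only the top of the list; everyone below the cutoff
# is "left out in the cold," never receiving a knock or a mailer.
for v in sorted(voters, key=contact_score, reverse=True):
    print(f"{v.name}: {contact_score(v):.2f}")
```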
[Franco Dellafiori] 06:31
Perfect, that’s very interesting, and it’s actually the topic we’re moving on to now, because there could be many threats to the legitimacy of democracy. So, starting with a very basic question: are there any concerns regarding the accuracy of the outputs generated by technology or AI? Is the information trustworthy?
[Bertrall Ross] 06:54
Yeah, that’s a huge concern, right? I came into this, as I mentioned, seeking to use AI tools to make information more easily accessible and efficient to process. But what if that information is faulty? If that information is faulty, it doesn’t really help individuals hold their representatives to account, because the information being conveyed to them is not actually factual. So how can we create reliable information sources, even in my positive account of the use of technology? And of course, we also have many nefarious uses of technology, right? These nefarious uses are actually designed to misinform, designed to create a situation where people get locked into echo chambers in which the information they’re conveying back and forth to each other is faulty, where they get caught down rabbit holes of conspiracy theories. And we lose, as a society, any baseline of facts from which we can actually debate and discuss the issues. It’s hard for a democracy to exist and operate without some baseline of facts to work from, and the extent to which technology is exacerbating that information problem through its delivery of misinformation is a huge problem. Can we counteract that? Can we create AI institutions, AI instruments, that actually advance and resolve information problems? Can we create trusted institutions to employ AI in this way? Those are the really difficult questions we’re wrestling with.
[Franco Dellafiori] 08:34
Yeah, that is exactly right. Following on from the previous question: at the current stage of generative AI models, do you believe they have shown any political tendencies that may influence citizens in their decisions?
[Bertrall Ross] 08:53
Yeah, I mean, the big political tendency, I think, has to do with the way these systems can be personalized, such that the tendencies they have tend to map onto the habits and tendencies of the users of the AI system. And so one of my concerns is that this could create a tendency toward polarization, right? There is this aspect of AI, and generative AI, in which it’s being trained on information that has a tendency to lead to rather polarized results in terms of the information and predictions it produces. Now that’s deeply concerning, because we are already deeply polarized as a society, and if we have AI, in a sense, building off that polarization, being trained on polarized data, it’s only going to exacerbate the problem and make it worse. So that’s a tendency I’ve been particularly concerned about. I don’t think it’s a necessary tendency. I think that if you have the right training materials for generative AI, you could have it used as an instrument that conveys information that is factual and not designed to promote extremism. But, you know, the trick is: who has an incentive to do that? Who has an incentive to build that type of system? Because the extremism these systems deliver and build off is something that attracts more attention, more clicks, more advertising dollars, and all the things that go with it.
[Franco Dellafiori] 10:31
Yeah, that makes me think: if I actually asked an AI, “Who should I vote for?”, would I get a direct answer, or anything like that?
[Bertrall Ross] 10:40
Yeah, it’s a great question. So ideally, you would have a system that would know your preferences, right? Know what you want, and know what the candidates stood for as well, and be able to make that match, right? Ideally. And there’s evidence of uses of AI, very primitive versions, more traditional AI rather than the more sophisticated generative kind, suggesting that this sort of matching can be done and has been done. But it’s unclear whether we can rely on it, right? Because one of the things we struggle with in the more sophisticated generative models is that we just don’t know how the algorithm is working. It’s not explainable to the average person what data the model is being trained on and how the results are being produced. And that’s going to create tremendous distrust. You combine that with the psychological tendencies of motivated reasoning and confirmation bias, right? We tend to want the AI to deliver results that comport with our priors. To the extent that it does, we’re happy. But when it doesn’t, are we willing to even listen to what the AI has produced, even if it better accords with our preferences? So there’s a psychological component and a technological component that we have to think about.
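As a rough illustration of the matching idea, a minimal “voting advice” sketch might score candidates by issue-by-issue agreement with a voter’s stated preferences. The issues, positions, and scoring rule below are all invented; note that, unlike a black-box generative model, every step here is explainable.

```python
# Invented illustration of preference-to-candidate matching.
# Positions are encoded on a -1..+1 scale per issue; the best match is the
# candidate whose platform diverges least from the voter's preferences.

voter = {"healthcare": 0.8, "taxes": -0.2, "climate": 0.9}

candidates = {
    "Candidate X": {"healthcare": 0.7, "taxes": -0.5, "climate": 0.6},
    "Candidate Y": {"healthcare": -0.4, "taxes": 0.8, "climate": -0.3},
}

def mismatch(prefs: dict, platform: dict) -> float:
    """Total absolute disagreement across issues (lower is a better match)."""
    return sum(abs(prefs[issue] - platform[issue]) for issue in prefs)

# Unlike an opaque generative model, this ranking is fully explainable:
# every number feeding the result is visible to the voter.
for name, platform in sorted(candidates.items(),
                             key=lambda kv: mismatch(voter, kv[1])):
    print(f"{name}: disagreement = {mismatch(voter, platform):.2f}")
```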
[Franco Dellafiori] 12:02
Absolutely, an interesting approach. So, going back to the risks that AI may pose in the context of an election: do you believe that AI is creating defamatory statements about politicians, or otherwise creating content that may put their reputations at risk?
[Bertrall Ross] 12:25
Yeah, I do. And we have several examples from this presidential election in the United States to draw from. We have the AI deepfakes of candidate Trump that show him getting arrested and running from the police, all of these things that are, you know, quite humorous, but damaging to his reputation, depending on how you view it. And then we have the famous viral post on X that Elon Musk reposted, or, I think, developed himself, that shows Kamala Harris saying things she did not say. That was very harmful to her reputation. Now, here’s what gets legally difficult: the claims that could be made on the other side. First, I could say it’s a parody, right? If it’s a parody, then it’s protected by the First Amendment, and there’s not really much we can do. Or they could say that even if you think it’s defamatory, under the high standard for defamation of public figures under the First Amendment, you have to prove that it was made with actual malice, and that’s very difficult to prove. So we have AI and the tools available to defame public figures, we see them regularly used to do that through different means, and we really don’t have any clear way to check those uses of AI, due to the constitutional protections of the First Amendment.
[Franco Dellafiori] 13:55
That is absolutely interesting, and I think I agree with that view of how hard legislation here is to enforce. So, do you believe that we need any change to the current legislation, or is this appropriately protected under the First Amendment?
[Bertrall Ross] 14:14
Yeah, it’s difficult. I mean, I think that one of the things legislation probably cannot do is police this directly, regulating and prohibiting or banning the use of AI to make deepfakes of public figures. There are efforts in California and other places to regulate it in some ways, but the most expansive form of that regulation was, of course, vetoed by Governor Newsom.[1] So we’ll see where the legislation that has made it through the California Legislature and been approved by Governor Newsom goes. But there’s always a question with respect to deepfakes: they could be considered a form of speech, and to the extent that a deepfake law is seen as a content-based restriction, there’s a high burden for the government to sustain it, right? Content-based restrictions are subject to the highest form of scrutiny, and laws usually fail under the highest form of scrutiny. So if we think about laws, I think about the role of social media and tech companies, and I’m not so pollyannaish as to think that they have any incentive right now to self-regulate, because they draw a lot of benefits from AI, even AI that’s defamatory and harmful; the profit motive is quite strong, and they’re going to go where the profits lead them. But is there a way we can construct the incentive system so that we incentivize social media companies to do a better job and at least add disclosures or disclaimers to AI content? I think that would be an important first step in notifying people that this is AI. Are there ways in which social media companies could be held liable for not including those disclosures? It wouldn’t necessarily be regulating them for being part of the distribution of deepfakes or other forms of defamatory AI content, but it would subject them to liability for failing to attach disclaimers. So I think those are probably the directions we will ultimately need to go to insulate any laws from First Amendment challenges. But it’s weaker than other tools we might want to use if we didn’t have the First Amendment to think about.
[Franco Dellafiori] 16:41
I see. We’ll see how this topic develops over the course of the year, because this is all very recent.
[Bertrall Ross] 16:47
We will. I think we’re going to see challenges to the California law, and we’re going to see other efforts put forth that might also be challenged. We’ll see where the courts ultimately stand on these questions.
[Franco Dellafiori] 17:01
And now moving on to another subject that poses a threat, one you also mentioned: gerrymandering. Do you think this kind of information can be used to further encourage gerrymandering practices that, of course, can be a threat to our democracy and our elections?
[Bertrall Ross] 17:25
Yeah, I do, right? We’ve learned how algorithms are used to, in a sense, gather information about voters that can be used to predict how they might vote in an election, and those predictions are at the very individualized level. Now, taking a step back: it used to be that legislatures would have to draw districts with only a broad sense of the electorate. When you cast your ballot, nobody knows how you voted, right? The government collects information about whether you vote, but not how you voted. You might know how a precinct or a particular jurisdiction voted, in the sense that it might have voted 60% Democrat, 40% Republican, but you don’t necessarily know who within that district voted and how they voted in that particular election. So gerrymandering occurred before the use of these technological tools and algorithms, but it was a little less efficient. You would perhaps not draw the lines as you would with perfect information; you would draw them with more limited information. What the technology gets you toward is that more perfect information state. And in that more perfect information state, legislatures can gerrymander very efficiently, in a way that allows a party to entrench itself in power in a state no matter how far the electorate shifts in one direction or the other. So you could have a state that is shifting to, say, 55-60% Democrat in which the Republicans can maintain entrenched control. One example, perhaps not as far as what I’ve described, is Wisconsin, which is a state that’s about 50-50, maybe 51-49, but where the Republican Party, at least prior to the most recent rulings out of Wisconsin, was firmly entrenched as the governing party due to the way the lines are drawn. And when you think about how technology can make that even more efficient, the entrenchment problem gets even worse. So that’s a concern I have with the use of technology, and I’m not clear on how it’s going to be policed, given that the Supreme Court, in the recent case of Rucho,[2] said that it considers these issues to be non-justiciable. So any check on this type of gerrymandering will have to happen at the state level with state courts.
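A stylized sketch of why individual-level predictions make gerrymandering more efficient: once a mapmaker has a per-voter support score, any proposed map can be evaluated exactly rather than guessed at from precinct totals. All scores and maps below are invented for illustration.

```python
# Stylized sketch: scoring proposed district maps against per-voter partisan
# predictions (all numbers invented). With only precinct totals, a mapmaker
# guesses; with individual-level scores, every map can be evaluated exactly.
from statistics import mean

# Predicted probability that each of 9 voters supports Party A.
voter_scores = [0.9, 0.8, 0.2, 0.3, 0.85, 0.4, 0.1, 0.95, 0.6]

def seats_won(assignment: list[int], n_districts: int) -> int:
    """Count districts where Party A's expected vote share exceeds 50%."""
    seats = 0
    for d in range(n_districts):
        district = [voter_scores[i] for i, a in enumerate(assignment) if a == d]
        if district and mean(district) > 0.5:
            seats += 1
    return seats

# Two maps over the same voters (Party A holds ~57% support under both).
geographic_map = [0, 0, 0, 1, 1, 1, 2, 2, 2]
packed_map     = [0, 1, 1, 1, 0, 2, 2, 0, 2]  # pack A's voters into district 0

print("geographic map, seats for A:", seats_won(geographic_map, 3))  # 3
print("packed map, seats for A:    ", seats_won(packed_map, 3))      # 1
```

The same electorate yields three seats or one seat for the majority party depending purely on where the lines fall, which is the entrenchment problem Professor Ross describes.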
[Franco Dellafiori] 20:00
Yeah, absolutely, we’ve seen even crazier maps with this practice.
[Bertrall Ross] 20:05
Absolutely.
[Franco Dellafiori] 20:06
But this also leads us to discrimination issues that may arise regarding elections and the use of AI. Do you believe that generative AI presents issues of discrimination or marginalization right now?
[Bertrall Ross] 20:25
Yeah, I do, I do. In one instance, the one I briefly alluded to earlier, AI and algorithmic technology are being used to help campaigns decide who they’re going to engage in outreach with, right? Who they’re going to try to recruit to vote, who they’re going to convey information to, who they’re going to call. And you might think, as a person, that these phone calls are an annoyance. I don’t want people knocking on my door, and I get that. Or I don’t even want their mailer. But political science has found an important function for these types of outreach. Those who receive a knock on their door are 8 to 10% more likely to vote than those who do not. That’s a pretty large margin. It’s a much more important determinant of voting than voter ID laws, for example, which have been a focal point in discussions of vote denial practices. Having someone knock on your door matters much more to your likelihood of voting than a state imposing voter ID laws. And so where does that lead? Well, it leads to a kind of vicious cycle that I find quite troubling, in which those who are not engaged, who are not exposed to information about the election, and whose votes are not sought after tend to become and feel alienated from the process. And if they’re alienated from the process and don’t vote, then the elected officials in charge have little incentive to be responsive to them, and that leads to further alienation of those people who do not vote. You might say, well, who cares? They’re just out of the system. Well, there’s an opening here that deeply concerns me. These individuals are the most prone to being recruited by authoritarian-type figures. If they decide that democracy is not working for them, because nobody’s responding to them, nobody’s engaging them, they don’t participate, and someone else comes along and says, look, when I take power, I’m going to take care of you. I’m going to do so outside the system, because the system doesn’t work. I’m going to do so exercising power alone, in an autocratic, authoritarian way. Those individuals who see that the democratic system is not working may be inclined to support that authoritarian-type figure, and that presents a systemic risk and threat that should not be ignored. So to the extent that we have algorithmic technology that enhances the ability of campaigns to target some individuals and ignore others, in their outreach and engagement during the campaign and ultimately in their decisions about who to be responsive to once in office, that’s a problem for democracy more generally.
[Franco Dellafiori] 23:18
That’s a very interesting approach, and I want to ask a further question. Since this technology can affect the actual development of an election, do you think the information it provides is biased? Does it have the same biases as the humans who program it?
[Bertrall Ross] 23:43
Absolutely. I mean, think about OpenAI: they train their models on all the information contained on the internet, and all of the information contained on the internet is human-produced information, right? And human-produced information is going to come with human biases. It’s just unavoidable, and therefore what you’re going to have is AI that’s trained on biased information, which is going to yield biased results and biased predictions. It’s hard to see how we get around that fact, but one thing we can do is help people understand what information these generative tools are drawing on, right? My colleagues like to describe it as a way for us to cite-check the AI, very academic speak, right? Let’s make sure we know where the AI got its source information from. And then, at a second level, how it is using that information to make the predictions it does. So even if the results that come out are biased, at least we know where the bias is coming in, and maybe we can process that information with that recognition, which would lead us to use it in a way that accounts for the bias contained in the predictions being made and the results being produced by the AI model. But, yeah, I don’t know any other way to get out of this bias problem. I think that at least if we have transparency with respect to the AI tools, we can at least understand the bias.
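One way to picture the “cite-check” idea is a system that never returns an answer without also returning where it came from. The toy retrieval below matches a question to a tiny hand-made corpus by word overlap; it stands in for the transparency layer described here, not for any real model or product.

```python
# Toy sketch of a "cite-checkable" answer: every response carries its sources,
# so a reader can see where the information (and its biases) came from.
# The corpus and the word-overlap retrieval are stand-ins for illustration.

corpus = {
    "county-clerk-faq": "Polling places are open from 7am to 8pm on election day",
    "state-voter-guide": "Mail ballots must be postmarked by election day",
    "blog-post-2019": "Long lines at the polls discouraged some voters",
}

def answer_with_sources(question: str, top_k: int = 2):
    """Return the best-matching passages along with their source IDs."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

for source_id, text in answer_with_sources("when are polling places open"):
    print(f"[{source_id}] {text}")
```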
[Franco Dellafiori] 25:24
Okay, I think I’ve been focusing too much on the negative side of AI, so let’s at least discuss something positive. Do you believe that the fast processing power of AI will enable technology to identify an adequate allocation of resources?
[Bertrall Ross] 25:43
Yeah, oh my gosh. It has such tremendous potential in that respect, right? Think about the information it can derive about the world and how it operates. It can open the door not only to more efficient resource extraction, but to more efficient information access. Imagine this, and this is the kind of hypothetical I often pose: imagine that you as a person, thinking about it from the information perspective, could say to an AI tool, in the form of an app or something like that: look, this is my age, this is where I live. I have two children; they go to public schools. I live in this neighborhood. This is the value of my house. And I need help deciding what the implications of this budgetary proposal are for me. Most people, when they see a state or locality or the federal government pass a budget, it goes over their heads. It goes over my head. I don’t know what its implications are for my life. But suppose you could plug that information into a tool, and that tool, based on background information about how past budgets and different types of budget items have impacted people’s lives in different circumstances, could inform you about the probabilities: what is the probability that this particular budget law will impact you in X or Y way? That could be extraordinarily helpful in starting to understand what government does and how it impacts your life. So when we think about the resource of knowledge, this is something I think AI can contribute to in a positive way. I do think we need to make sure, though, that any system created along those lines is explainable and interpretable by the people who use it, so that we can build trust in the system. We also have to make sure such systems are designed by people who are trustworthy. And when I talk about trust, I always pause a little, because we are in a time when so many institutions are deeply distrusted, and I struggle with what the trustworthy institutions are right now. Historically, the Supreme Court was once an institution of which 60 to 70% of people approved; now 40% of people approve of what it does. Institutions of higher education, of which I’m a part, of course, used to be highly trusted; they are much less so now. So it’s hard to know what those institutions might be, but I think they would have to be critical partners in the process of developing AI systems that allow people to efficiently extract knowledge about politics and the way it impacts their lives.
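A bare-bones sketch of the hypothetical budget app described above: take a household profile, apply stylized rules about budget line items, and report the likely effects. A real system would be statistical rather than rule-based, as Professor Ross suggests; every category, rate, and dollar figure here is an invented placeholder.

```python
# Bare-bones sketch of the hypothetical "what does this budget mean for me?"
# tool. Every line item, rate, and dollar figure below is invented.
from dataclasses import dataclass

@dataclass
class Household:
    children_in_public_school: int
    home_value: float
    uses_transit: bool

def budget_effects(h: Household) -> list[str]:
    effects = []
    # Invented line item: a 5% increase in K-12 school funding.
    if h.children_in_public_school:
        effects.append(f"School funding up 5%: affects your "
                       f"{h.children_in_public_school} enrolled child(ren).")
    # Invented line item: property tax rate rising from 1.1% to 1.2%.
    delta = h.home_value * (0.012 - 0.011)
    effects.append(f"Property tax change: roughly ${delta:,.0f} more per year.")
    # Invented line item: a new transit fare subsidy.
    if h.uses_transit:
        effects.append("Transit subsidy: fares expected to drop about 10%.")
    return effects

me = Household(children_in_public_school=2, home_value=350_000, uses_transit=True)
for line in budget_effects(me):
    print(line)
```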
[Franco Dellafiori] 28:39
Great. So bearing that in mind, in terms of accessibility, what are the benefits that AI technology can provide to citizens in the context of elections, other than resource allocation, of course?
[Bertrall Ross] 28:52
Yeah. I mean, it’s a huge thing, right? Because right now we rely, to a certain extent, on media to deliver information to voters on the basic logistics of voting: that there’s an election day coming up, for example. But often the information is not particularly tailored. If you want to know where your polling place is, or how to vote by mail, how to do these things, you can look it up online and you’ll find perhaps some resource pages that will help you along. But it is sometimes difficult for people to access, particularly those who lack political education or general education. These things can feel very daunting to figure out. Now, if you had tools that could make this information easily accessible with just the ask of a question, you wouldn’t even have to write it in, you could just say the question into the phone, that could be a huge thing, right? In terms of helping people know: what do I need to be able to vote? These places where people vote, we might think, as regular voters, that polling places aren’t scary at all, but for many people they are, and not knowing how the system works makes it daunting. But if there were something that could explain to people how it works, who the people at your polling station are, that these are poll workers, that they’re there to help you, not to police you, all these things that create nerves around voting and elections could be addressed, or at least ameliorated, by tools that provide information in a very efficient way. We just can’t rely on people to put in the effort to obtain this information; we have to make it as effort-free as possible. It’s not that we’re excusing laziness, you might say. No, people are busy. They have a lot going on in their lives. They have families to care for, jobs to work, a lot of things to attend to, and they need time for themselves. Leisure is good. Rest is good. So let’s make this as effort-free as possible with respect to our election system, and then we can perhaps create a more inclusive democracy.
[Franco Dellafiori] 31:08
That is a very interesting take. Let’s move on to the politicians’ side. Do you believe that AI lowers the barriers to entry, so that candidates with fewer resources can step in, given that they have more information or assistance that can help them be more competitive?
[Bertrall Ross] 31:27
Yeah, that’s what I would hope, right? Social media leveled the playing field in some respects, because you could get your message out through a means that is less cost-intensive than advertising on the radio or on television. So that’s a good thing. And then you can think about AI tools. We’ve talked about the negatives of deepfakes and such, but there are positives about deepfakes worth considering. To give you a couple of examples: there’s a politician in India who used deepfakes of herself, or it might have been himself, to deliver their message in a variety of different dialects and languages throughout the country. What they were able to do is, in a sense, replicate themselves and put themselves out there in the different languages and dialects of the different peoples of the country, in a way that would not be viable without the use of AI deepfake technology. There’s the other example of Mayor Adams, who I know is perhaps not a particularly popular figure to talk about, given the legal controversy surrounding him, but he did something during his campaign very similar to what we saw in India. He put out deepfakes of himself speaking the different languages of different communities in New York. New York is a multilingual place, as we all know, and being able to connect with people through their language is one way politicians and those running for office can expand their reach beyond what’s available right now. So deepfake technology can be used for good. It can make it more accessible for candidates to reach a broader audience without relying on, well, I think I learned yesterday that Kamala Harris has raised a billion dollars since she joined the race, right? It may not change the dynamics at the presidential level, because there’s just so much money there. But think about local and state races, and the ability to get your name out there as a person coming from a lower-income, working-class, or middle-class family who seeks to run for office to advance the public good. That’s what democracy is supposed to be about; politics should not just be the domain of the wealthy. AI technology could be helpful in that regard, if developed properly.
[Franco Dellafiori] 33:48
Well, it’s good to hear that we can put deepfakes, which have such a bad reputation, to good use.
[Bertrall Ross] 33:57
Yes. I will say that Mayor Adams got a lot of grief for what he did. But, you know, you can kind of follow Mayor Adams’ reasoning. Recently we’ve seen that he can be very stubborn, that he sticks to his ways of doing things, and he’s like, I’ll do it again if I need to. In some ways, I respect that. I understand the critiques, in the sense that we don’t necessarily want to promote deepfakes, and deepfakes can be harmful, and I agree that they can be. But I don’t know if we’re going to get rid of deepfakes just by saying that they’re bad, and I think one thing we might be able to do is see if we can use them for good.
[Franco Dellafiori] 34:38
Yeah, I’ll probably take a look at that particular deepfake. So, given these possibilities of using this technology either for good or for bad, do you believe that its use favors one particular sector over another? For example, do politicians get more benefit, or voters, or both?
[Bertrall Ross] 35:00
Yeah, that’s a good question. I mean, I think that what it opens the door to, and this is the worrisome aspect of it, is that, to the extent it’s predominantly seen as a tool for nefarious uses, to promote misinformation and disinformation, it favors the sector of conspiratorial actors who want to sow chaos, right? I would hope that we can start to develop a system in which we use this technology for good, but right now we’re not there yet. We’re trying to police the bad and avoid the worst of it. And you can generate images and text through generative AI tools that are quite believable, in a country where conspiracies have been able to attract more attention than they ever have, because we are so deeply polarized and we no longer have a common basis in fact. I think that’s a worrisome trend, and I worry that to the extent this technology becomes the domain of conspiratorial-type actors, it could exacerbate the democratic problems we’re seeing. What I’m hoping is that over time we can get to the good-government actors, those who seek to use the technology to promote democracy in the ways I think it can. But it requires that we shift, or at least divide, some of our attention, devoting it not just to policing the bad uses of AI but also to thinking about the potential good uses of AI in the democracy space. We have brilliant minds in this country, we’re developing a lot of this technology, and we could think about this globally and internationally. We could draw from other places, from their mechanisms for policing AI, in ways that allow us to pay more attention to advancing AI as a positive tool. But I think it will require a devoted effort to take AI technology out of the hands and out of the domain of the conspiratorially minded folks in our country.
[Franco Dellafiori] 37:15
Yeah, absolutely. And that leads us to our final section, which is: what are the next steps? Where do we go from here? The first question is, at what level is governance of AI most effectively addressed? Local, state, national, or international?
[Bertrall Ross] 37:34
Yeah, I mean, I would like to say international, but that’s me being pollyannaish, right? You know, American exceptionalism often limits our ability to think about this in an international way. But even if we’re not going to do it internationally, I think there’s some value in borrowing and experimentation. There are some really interesting things being done in the EU that we could monitor to see how effective they are, and see what modifications might be made at the national level here. I think that’s where the experimentation is. Because information crosses borders so easily, I just feel like local and state regulation is not going to be all that effective as a means of policing these actions. I do respect California and its efforts, but I think that law is going to be challenging to enforce, and it may run into First Amendment challenges as well. So I’m thinking of the solution more at the national level, looking at how other countries are experimenting with different ways of regulating this technology that we could potentially draw from. But there’s a big problem with that, of course, and that problem is called Congress. As we know, Congress doesn’t work all that well. And we have to deal with the fact that the opportunity for agencies to fill in the gaps and take the lead on these regulatory questions has been limited by recent Supreme Court decisions that deny agencies the deference they used to be accorded with respect to their decisions.[3] So working things out at the national, federal level is going to be quite challenging. We may have to accept the suboptimal, the second best, maybe the third best, of state-level regulation, let it reach as far as it can, test the limits of the law, test where the First Amendment comes in, and see if we can get borrowing across state lines until our federal government becomes functional again.
[Franco Dellafiori] 39:31
To follow on, I guess there are some interests at stake here. One of them could be innovation and technological development, and the other would be safeguarding democracy. How do the two interact, in your opinion?
[Bertrall Ross] 39:50
I think they interact in critical ways, right? Safeguarding democracy is going to require technological innovation that’s designed for that purpose. We can’t safeguard democracy just by seeking to block the bad uses of AI. The bad uses of AI are so plentiful and so broad-reaching, and there’s no clear means of regulating them such that we can block all the bad aspects of AI. But we need to acknowledge that AI is a contributor to the problem. So can we use technology, good technology, to counter bad technology? I think that’s how we need to think about it. Just to give an example: we have the deepfakes we’ve been talking about throughout this conversation, but we also have deepfake detection tools, and they’re developing in parallel with each other. So ideally, rather than engage in the perhaps impossible and perhaps constitutionally illegitimate effort of banning AI or deepfakes, we come up with deepfake detection tools and use them to attach disclaimers to uses of deepfakes, so that people have knowledge of their use and can use that information as they wish. I think that kind of technology-counter-technology approach is going to be the best way. We do have people already in the camp of trying to develop these positive uses of AI; we just need more energy and more ideas devoted to this as a means of safeguarding democracy. And just to add a caveat: we can’t innovate ourselves entirely out of this crisis of democracy, because even before AI, we were in crisis. So we also have to look within ourselves, and think about the operation of our polity and our institutions as well. AI and technology are an accelerator; they make the crisis worse, but they’re not the source of it. So we have to look deeply at the sources of these crises, while also using innovative technology to safeguard democracy.
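To make the “good technology to counter bad technology” idea concrete, here is a minimal sketch of a detect-and-disclose pipeline: run a detector over uploaded media and attach a disclaimer rather than removing the post. The detector is a stub; real detection models exist, but their interfaces vary, so only the disclosure logic is shown.

```python
# Minimal sketch of a detect-and-disclose pipeline (the detector is a stub).
# Rather than banning suspected deepfakes, the platform attaches a disclaimer
# and lets viewers weigh the content with that knowledge.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    media_path: str
    label: str | None = None

def detect_synthetic(media_path: str) -> float:
    """Stub for a deepfake detector; a real model would score the media here."""
    return 0.92 if "suspect" in media_path else 0.05  # placeholder logic

def moderate(post: Post, threshold: float = 0.8) -> Post:
    score = detect_synthetic(post.media_path)
    if score >= threshold:
        post.label = (f"Likely AI-generated or manipulated media "
                      f"(detector score {score:.2f}).")
    return post  # the post is never removed, only labeled

for p in [Post("1", "clips/suspect_speech.mp4"), Post("2", "clips/rally.mp4")]:
    moderate(p)
    print(p.post_id, "->", p.label or "no label")
```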
[Franco Dellafiori] 42:04
Yeah, absolutely. Are there any final words of wisdom or warning that you wish to share with our listeners on this growing issue?
[Bertrall Ross] 42:11
Yeah, I mean, I don’t know about wisdom; I don’t really associate wisdom with my name. Maybe someday I will. But I would just say that this is a time in which people are feeling very anxious about our democracy, and it seems that we are being hit with so many different things that are undermining the operation of our democracy, and technology feels like another one of those things, right? But I would encourage the audience to think of technology as a tool: the effect of the tool depends on how we use it. It can have a bad effect if we seek to utilize it that way, but it can also have a good effect if we seek to utilize it that way. And so I would encourage us, as an audience that cares about democracy and cares about technology, not only to think about regulating the bad, but also about advancing the good that technology can be used for. And the final word I would say is just: we’ve got to keep faith with democracy. I know this is a very hard time. It may lead us to question whether democracy is worth preserving, or whether democracy can ever work. And I get that. But I think we also have to reflect on the fact that the alternatives, alternatives that we as a country have not experienced, include authoritarian governments and very oppressive regimes. Those alternatives are so much worse than the system we have. So with all its faults and all its flaws, I think it’s a system worth preserving, and I hope that folks in the audience will see it the same way.
[Franco Dellafiori] 44:00
Thank you for your words of wisdom, of course. And to finish with a silly question, because this is also a technology journal: what would you say is your favorite fictitious technology? I’ll go first: teleportation. I just think the idea of teleportation is great. No need to use a plane or a car; it would save so much time.
[Bertrall Ross] 44:26
I mean, this is just going to date me, but I used to watch this cartoon called The Jetsons back in the day, and, you know, I’m still into the flying car. I thought we’d be here by now. I find driving difficult, especially in the town of Charlottesville, where I am, which has, no offense, a combination of college students and older people, both of whom are kind of difficult drivers, in the sense that they drive too slowly and too erratically. If I could just fly over them in my flying car, life would be beautiful. So that’s my fictitious technology that I hope becomes real someday soon.
[Franco Dellafiori] 45:06
Absolutely, let’s hope that there’s no traffic in the sky, though.
[Bertrall Ross] 45:11
Good point. The sky is large, so hopefully I can at least go above them.
[Franco Dellafiori] 45:16
Yeah. Well, thank you so much, Professor, for your time. This was definitely a very interesting topic, and your insight is very valuable to us.
[Bertrall Ross] 45:26
Thank you so much for having me. It’s great to be on the podcast. And hopefully we’ll do this again sometime.
[Franco Dellafiori] 45:31
Let’s hope so.
[Meg O’Neill] 45:33
You have been listening to the Berkeley Technology Law Journal podcast. This episode was created by Franco Dellafiori, Jeyhun Khalilov, and Joshua McDaniel. The BTLJ podcast is brought to you by Podcast Co-editors Juliette Draper and Meg O’Neill, and Junior Podcast Editors Braxdon Cannon, Lucy Huang, Paul Wood, and Joy Fu. Our Executive Producer is BTLJ Senior Online Content Editor Linda Chang. BTLJ’s Editors-in-Chief are Edlene Miguel and Bani Sapra. If you enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. Write to us at btljpodcast@gmail.com with questions or suggestions of who we should interview next. This interview was recorded on October 10, 2024. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.