SPEAKERS: Professor Andrea Roth, Eric Ahern, Joy Fu
Podcast Transcript:
[Joy Fu] 00:08
Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Joy Fu and I am one of the senior editors of the podcast.
Today, we are excited to share a conversation between Berkeley Law alumnus Eric Ahern and Professor Andrea Roth, Barry Tarlow Chancellor’s Chair in Criminal Justice and Professor of Law at Berkeley Law. Prof. Roth is a leading scholar of evidence and criminal law and one of the co-directors of the Berkeley Center for Law and Technology. Her research focuses on how pedigreed concepts of criminal procedure and evidentiary law work in an era of science-based prosecutions. Her article, entitled “Machine Testimony,”1 offers a coherent framework for conceptualizing and regulating machine evidence.
In today’s episode, we explore one of the most pressing intersections of law and technology: how AI and machine-generated evidence challenge traditional evidentiary doctrines like hearsay and the Confrontation Clause. Professor Roth joins us for a conversation about how courts should handle machine-generated evidence, the challenges posed by facial recognition, predictive algorithms, and AI-assisted forensic tools, and what reforms are most urgently needed to preserve human-rendered justice. This episode is Part II of a two-part series on AI and rules of evidence. Whether you’re a technologist, a lawyer, or simply curious about how the law is adapting to AI, this series offers an accessible and thought-provoking look into the legal system’s evolving relationship with artificial intelligence. We hope you enjoy the podcast!
[Eric Ahern] 02:04
My name is Eric Ahern, and welcome to the Berkeley Technology Law Journal podcast. We are thrilled to have Professor Roth with us today. Welcome, Professor Roth.
[Professor Andrea Roth] 02:14
It is great to be here. Thank you, Eric.
[Eric Ahern] 02:18
So, to begin our discussion, I’d like to do a quick introduction to the legal doctrines and AI we’re discussing today. AI is increasingly used in legal settings, but definitions vary. From an evidentiary perspective, how should we define AI in this context?
[Professor Andrea Roth] 02:39
So, I fear a general use of the term artificial intelligence for evidence purposes. I think for purposes of evidence law, it’s better to focus on the source of the information and the nature of the information generating process. And the key is to identify sources of information that are different from a physical object, like a gun found at a crime scene, let’s say, but also different from a human witness who has a mind, and even different from animals. So I guess, instead of using the term artificial intelligence, I think in evidence law, it might be most helpful to talk about machine or electronically generated conveyances of information offered for their truth, or even software-enhanced images. You know, we could maybe define what we colloquially call a deep fake for certain authentication purposes. But otherwise, it gets really hairy really quick to try to agree on a definition of AI.
[Eric Ahern] 03:53
Well, that makes a lot of sense. My next question is, what evidence law doctrines are most challenged by, you know, any kind of machine-generating software, or any of the categories of technology to which you just referred? Would the hearsay doctrine come up, the Confrontation Clause, or something else?
[Professor Andrea Roth] 04:13
Yeah, I think there are a few that are being affected most, even though I just said I don’t use the term AI, and here I am saying AI. So one would be authentication. I think a lot of judges are worried about so-called deepfake videos and images that go way beyond Photoshop, things like that. Another would be using generative AI for illustrative purposes. Until recently, there wasn’t even a rule of evidence that talked about illustrative exhibits, which themselves are not evidence that’s introduced at trial, but they’re still being used by the parties in front of the fact finder, and they can be highly persuasive to a fact finder. So, you know, a computer-generated video that purports to explain how an incident happened, that’s a pretty new issue, and judges are having to decide what limits to place on those illustrative exhibits. And then, like you said, Eric, there are the issues that typically are dealt with through the hearsay doctrine and the Confrontation Clause, as far as the credibility of witnesses, and, I should say, impeachment rules as well: the reliability of what witnesses say and the ability of the parties to scrutinize the reliability of what they say. Those laws that we have now are really built for human witnesses, and we are increasingly seeing conveyances of information by machines instead. So hearsay and confrontation are huge ones, and they’re also the ones that I have focused on in my own scholarship.
[Eric Ahern] 05:56
And my understanding is, throughout your career, you’ve become an expert in criminal law. So this could be a great time to pivot to a conversation about AI in criminal law. Generally, AI is being used in risk assessments, forensic analysis, and facial recognition. What are the biggest concerns when AI evidence is introduced in criminal trials?
[Professor Andrea Roth] 06:22
So there are concerns, like you just said, at all of those stages of the criminal process. You know, first, with respect to investigations, there’s the use of facial recognition technology, phenotyping, and any number of other high-tech ways to try to find suspects. Those high-tech ways of investigating crime don’t always find their way into the criminal trial itself, except at, let’s say, a Fourth Amendment suppression hearing. They’re really about, you know, how this person ended up being a suspect or a defendant. And I, in my scholarship, have focused on something that I think other people have focused on less, which is the use of these high-tech tools, or machine-generated proof, as evidence of guilt or innocence at trial. And I focus on it not only because I think it had been somewhat ignored by other scholars, even law and tech scholars, but because it’s what has interested me over the years; I’m a criminal procedure adjudication person. But it’s also true that there’s a massive issue that has been written about very much by others with respect to predictive uses of these technologies, both in the pretrial detention space and also in the post-conviction, sentencing, and parole revocation space. And it’s not that those things are not important. They’re incredibly important, and they raise a lot of the same issues as the use of AI at trial. I think a lot of great minds are thinking about it. And I think, with the, you know, ProPublica piece2 a few years back on Northpointe’s COMPAS instrument, the public knows a lot more about the concerns about predictive uses of algorithms for determining risk and dangerousness than they do about the use of algorithms to generate proof of guilt at trial. So all of those things are important, but that’s why I typically focus on the proof of guilt at trial.
[Eric Ahern] 08:37
Understood. Well, it’s certainly a good thing that this is coming up more in discussions. I know I’ve had a few classes where it’s come up, so it’s good that law students are being educated about this as well. It seemed like the incident with COMPAS may have, you know, been an example of what you’ve referred to as automation complacency, where people over-trust AI or automated results. How does this concept of automation complacency play out in criminal law, and what can be done to address it?
[Professor Andrea Roth] 09:14
Yeah, you’re not asking easy questions here, Eric, [my apologies.] I think, no, this is great. So I will say that an algorithm, you know, compass, might not be the best example, but like the Arnold Foundation, has created this public safety assessment, the PSA that they claim actually reduces significantly levels of pretrial incarceration without, you know, increasing risk to public safety, putting aside the empirical question of whether that’s true, I think the the use of of algorithms, some people say it’s just inherently bad. We should not go down that road. Other people say it should be a tool. And then the question is, how do we get judges to recognize the limits of the tool and not just do whatever the tool says? So I don’t think that the use of COMPAS or the PSA at all means that we’re buying into, you know, that we’re committing automation complacency. I think the issue is that if there should be a human in the loop, and the human is simply deferring to whatever the machine says, that’s what automation complacency is, we may want. A particular process to be absolutely automated, in which case there’s no problem with automation complacency, but, but if we want a human in the loop, we have to worry about it. And so here’s some examples. Just to be really concrete for a second. You know, lawyers using ChatGPT, forgetting that they hallucinate and just assuming that whatever citations they give or quotes they give are real and getting burned in the process. So police engage in confirmation bias. So if they see that facial recognition technology suggests that this person is a match, then it affects the suggestivity of regular identification procedures when they engage in it with human witnesses. We’ve seen that in a Detroit case3 judges in deciding pretrial detention or, you know, post conviction. Sonia Starr at Michigan has talked about how judges really do just defer to the algorithm and don’t exercise independent judgment as much as you would want them to.4 And I think really, you know, in forensic examiners confirmation bias, you know, like in the Brandon Mayfield case5, the person who was falsely accused of the Madrid train bombings back in 2004 the algorithm spit out the top 10 matches that were in the FBI database, and it turned out that Brandon Mayfield’s fingerprints were the best match of the 10. But everybody seemed to forget that maybe the actual person who committed the bombings isn’t even in the database. So these, you know, we it’s just like when you’re following Google and forgetting that you actually know how to get to your friend’s house and ending up in a ditch somewhere like that’s metaphorically what’s happening we’re ending up in ditches here. So what to do about it is, I think to have sequential unmasking is what they call it in the forensic examination context, where you don’t let the human know about specific results until the human has engaged in independent analysis, so that you kind of blind them from certain information to make sure that there isn’t confirmation bias. And so there’s that. And you know, you could also, just like human factors engineers, like for NASA, that, like their whole job is to make sure that drivers and pilots and all of these other people that have to deal with automated systems don’t engage in automation, complacency, and so this is not a new problem. It may be a new problem for the legal system, but I think human factors engineers could help us a lot with these sorts of problems.
[Eric Ahern] 13:27
Gotcha. So, pivoting now to a discussion about hearsay and the Confrontation Clause with what we might call a machine witness: I’ve learned in evidence law that hearsay is defined as an out-of-court statement offered for the truth of the matter asserted. Courts have ruled that machine-generated evidence is not hearsay because it’s not a human statement. Do you agree with that reasoning, or do we need a new framework?
[Professor Andrea Roth] 14:01
Yes and yes. I agree with that reasoning: I do think that machine-generated assertions or conveyances of information are not hearsay, and it’s completely incoherent to think of them as hearsay, and I’ll explain why. But I also think that means we need a new framework for dealing with machine conveyances of information and their potential unreliability. So why are they not hearsay? The hearsay rule is a way of forcing people to give assertions, for the most part, in court, subject to physical confrontation, the oath, and cross-examination. And that’s because we think that when a person gives an assertion in court, rather than somebody just repeating what they said out of court, they’re more likely to tell the truth to begin with, and also the fact finder and the parties are more likely to be able to meaningfully scrutinize the assertion in real time and discover what might be incomplete or wrong about it. So the hearsay rule is built to enforce the oath, physical confrontation, and cross-examination, and that’s why it focuses on in-court versus out-of-court statements. All right. Well, now we’ve got a machine, an AI, whatever you want to call it, that has made a statement, an assertion, a claim, if you want to call it that. Well, the thing about machines is, you’re not going to see beads of sweat on their face, even if you could cross-examine them. There’s no benefit to having a machine give its assertion in court as compared to out of court; the oath, physical confrontation, and, to a lesser extent, cross-examination just don’t mean a lot when it comes to scrutinizing machines. That’s not how we would choose to scrutinize machine conveyances of information. And so the hearsay rule: we don’t need it, and we don’t want it as a way of scrutinizing machine conveyances. We need something else, because the safeguards that the hearsay rule is intended to enforce don’t mean very much when it comes to machines. Okay, so if it’s not hearsay, what the heck do we do? Do we just give up? No, no. There are lots of ways out of court to meaningfully scrutinize the reliability of a claim, and you don’t need cross-examination to do it. And by the way, there are lots of ways to meaningfully scrutinize a human assertion other than those safeguards as well, and so we need to focus on those other kinds of ways to meaningfully scrutinize machines and enforce them through a totally different rule that we would not call hearsay. And I’ve made concrete suggestions for what those rules and those sorts of things would be. And I think the Advisory Committee on the Federal Rules of Evidence has also been thinking about this and is really receptive to some of these general ideas. So I think we’re moving in the right direction, even though the current rules aren’t there yet.
[Eric Ahern] 17:23
Excellent. Well, before we hear about your policy reform proposals, I just wanted to say I found it really interesting how you pointed out that you can’t exactly put a computer on the stand and see if it’s sweating or looks nervous or its pupils are dilating. And it’s just so interesting to me that, you know, what I’m inferring from that is that part of the reliability of a human witness is the emotions that make us human in the first place. Is that an accurate reading of what you’re saying?
[Professor Andrea Roth] 18:02
Yeah. I mean, I think the point can be overstated. It is true that using demeanor and courtroom presence as a way to infer the credibility of a witness is a very fraught and culturally determined issue. And there’s been a lot of psychological and sociological literature about how people view witnesses’ credibility differently based on, you know, cultural differences, racial differences. And so it can be a very fraught thing to try to suggest that somebody is not credible because they’re sweating, or not credible because they’re not making eye contact. For example, it could be that in some cultures it’s deemed rude to make eye contact, or if somebody is using a particular vernacular form of English that the jury doesn’t like or is not familiar with, they might see that person as not credible even if they are telling the truth, and it may depend on the jury’s own life experiences. So it is true that demeanor is a fraught way of determining credibility. That said, I mean, let’s be honest, that’s part of why we have live trials: we think there is something important about that. I mean, there’s also the dignitary interest in having somebody accuse you to your face, and having the jurors have to sit there while they’re sitting in moral judgment of somebody; they’re not just, you know, doing it behind a computer screen. So the immediacy of it, what you said just underscores the fact that this whole system of justice we have is a human-rendered system of justice, and it’s got to be that way. And so there are going to be fundamental things that we cannot delegate to machines.
[Eric Ahern] 20:03
That makes sense. I wonder how we will react if AI becomes capable of passing the Turing test. And for our listeners who aren’t familiar with the Turing test, this is kind of a theoretical test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human, and emotional intelligence is factored into this as well. So I wonder what we’re going to do if robots can gain consciousness, because that might create a whole new problem for the courts. But for now, you know, I guess we don’t need to worry too much about that.
[Professor Andrea Roth] 20:47
and we’re already at the point where, you know, people will show an abstract of an article that was written by an AI and an abstract that was written by a human. And in the audience, I can’t tell which one was written by humans for sure, we’re already there, but people still, I think, would freak out if they knew that ChatGPT was deciding whether they were guilty or not guilty. No, we don’t want that. We don’t want that.
[Eric Ahern] 21:19
Okay, so we can’t cross-examine a machine like we can a human. Some courts, however, have suggested that maybe cross-examining a human expert who interprets AI evidence would be a sufficient kind of substitute here. Why is that approach inadequate, in your opinion?
[Professor Andrea Roth] 21:39
Yeah, there are some courts, like the New York Court of Appeals, that have really tried to acknowledge this inherent dilemma that we’ve been talking about, that, you know, machines can convey information, and there should be some way to meaningfully scrutinize them.6 But they said, hey, the programmer’s on the witness stand, and so that’s good enough. And I absolutely think that is obviously inadequate, and it’s going to become even more obviously inadequate as AI becomes more sophisticated. But, you know, however important the programmer was in creating these, you know, tens of thousands of lines of code of this program, at most, by putting that programmer on the witness stand, you’re going to get at what that person knows in broad strokes about the program and why it works and how it works. They don’t know, for example, the machine’s prior statements. They don’t know the precise scope of the validation studies. And in fact, just because one of the programmers is on the stand doesn’t mean that validation studies even exist. They don’t know how the program would deal with particular hypotheticals, the way that you would if you had pretrial access to the machine to feed it different hypotheticals. They don’t know how it would respond to stress testing. Like the IEEE, the, you know, group of electrical engineers that has created standards for commercial software that’s used in banking and elevators and things like that: they require stress testing of programs that are used in high-stakes situations. They would never take the affidavit or testimony of a programmer as being sufficient to start running software in a bank or an elevator. So why would we take it as sufficient to, you know, give somebody the death penalty or put them in jail for 20 years? The last point I guess I would make on that is that, as AI becomes more sophisticated, the programmers themselves, let’s be honest, are not going to know precisely why the program does or says what it does or says.
[Eric Ahern] 23:55
right. And that’s one of the goals, right? I think for exactly they’re trying to exactly this, this level of unpredictability, of, you know,
[Professor Andrea Roth] 24:10
Unsupervised learning, deep neural networks, exactly, exactly. It’s the feature, not the bug. And so, you know, you’ve got to acknowledge it and not think that a programmer’s two-hour testimony on the witness stand is somehow meaningful scrutiny of what the program says.
[Eric Ahern] 24:25
And, you know, when we have this situation where it is a black box, where it’s not transparent, and if we have two different AI systems or AI forensic tools that are producing conflicting results, I imagine this is going to create a lot of problems. So I’m wondering, what do you think? How should courts handle situations where two different AI models reach, you know, conflicting, if not opposite, conclusions?
[Professor Andrea Roth] 24:52
Yeah, and just for the audience, if they don’t know, this has definitely arisen before. There’s a murder case from a few years back in upstate New York, People v. Hillary7, where two different computer programs that purport to interpret DNA mixtures, one called STRmix and one called TrueAllele, came to diametrically opposed conclusions as to whether this person was a contributor to the mixture based on the exact same DNA sample. And, you know, one company said, well, that’s because the other program is unreliable, and the other company said, well, no, it has to do with the parameters that were fed into the machine at the time and the assumptions that were made. And it’s not that that’s wrong; it’s that we need to know what those parameters and assumptions are in order to make sense of it. Like, what if we had never known of the second result? Okay, so this is not a theoretical concern. This has happened. What should we do about it? One thing to do is to have a corroboration rule, where, before you find a key fact in a case based solely on a machine-generated result, you would need to have it corroborated by another machine. And, you know, we have such corroboration rules with respect to things like accomplice testimony and confessions; it’s not unheard of in the law to have these sorts of corroboration rules. I think there would be some challenges to having that sort of corroboration rule with respect to machines, in part because who knows whether there are two programs available on the market for a particular purpose. And, you know, maybe there’s something wrong with the second machine too, if they have the same faulty assumptions. So that would be one way of dealing with it, and the other way to deal with it is just to use the fact that there are machines that come to two different conclusions as a sign that we certainly need more meaningful scrutiny of the machines, and just have better ways of scrutinizing whatever is brought into evidence.
[Eric Ahern] 27:14
And I imagine in this corroboration example, we’d need the second machine to independently verify that, because something that came to mind when you were discussing the corroboration technique is that a lot of these generative AI models are programmed to, in a way, please the users, right? You know, they want to be as helpful as possible. And if you prompted it somehow, “can you verify the accuracy of this?”, it’s like, oh, how can I help? Okay, that’s what you want me to do. Let me find a way to do that.
[Professor Andrea Roth] 27:50
Absolutely yes, yes. You’d have to find a way to ensure that it’s truly independent. Exactly. Yep.
[Eric Ahern] 28:03
All right, so some argue that AI used in court should be fully interpretable, which might be like a glass box model, rather than opaque, which we could call a black box. Should transparency be a legal requirement for AI-generated evidence?
[Professor Andrea Roth] 28:21
I hesitate to say yes, even though I do. Let me say a few things. First of all, Cynthia Rudin, who’s a computer scientist, and Brandon Garrett, who’s a law professor at Duke, recently had an article8 where they called for, you know, all algorithms used in public law, essentially, to be so-called glass box, which they say means, as you said, fully interpretable. They explain why interpretability, in their view, is a different thing than transparency. In other words, interpretability means you understand what the process is doing and how it got to its result, and you understand why it’s saying what it’s saying. And, you know, that helps with dignitary concerns and also reliability concerns. You could have that, in theory, with something that’s not quite fully transparent; you don’t need to know the full source code for it to be, say, interpretable. So, you know, all of these terms, it’s just another example of the terms being a little bit slippery. It’s not that I disagree with Garrett and Rudin. I do think that it’s a little bit more pie in the sky to think that, with all of our currently, you know, commercially generated algorithms, we’re somehow going to have rules passed by legislatures that say you cannot use anything that’s not, quote, fully interpretable. You know, they claim in the article that there’s absolutely no reduction in reliability or accuracy when you make something fully interpretable, and I honestly don’t know enough to be able to say that’s absolutely true. I defer to the computer science experts in the room; there are lots of people who say that’s not true. But, you know, do I think it would be a better system if all complex algorithms used to convict or acquit people were, you know, open source and fully interpretable? Yes, I think that would be a better system than what we have now. But I do think there are also just lots of improvements that we could make, even to so-called black box algorithms, to be able to scrutinize them so much more than we are now. So it’s not that I disagree, but I feel like if that’s the only goal that’s on the table, we are missing the chance to have these other reforms that would, I think, be more likely to pass and dramatically improve what we have. And the other thing that Garrett and Rudin say is that validation should be a requirement for any algorithm. But we have that, essentially: any algorithm that is subject to expert-testimony scrutiny because it’s used by an expert already requires some validation. But the validation studies are not as good as they should be. They don’t show how these programs work under marginal conditions and across, you know, the entire factor space, if you will: different conditions, different circumstances, permutations. So I think a fully interpretable algorithm, like a validation study, isn’t necessarily the gold standard.
[Eric Ahern] 32:20
Right, so the current systems in place fall short of meeting a gold standard, and I think we have a great point here to pivot to your policy and reform proposals, which I cannot wait to hear about. So, you’ve proposed reforms to evidence law for handling AI-generated or machine-generated evidence. What are your key recommendations?
[Professor Andrea Roth] 32:42
So I think there’s a few things that we can and should do right away, one of them, the advisory committee, looks like they are pretty sympathetic to which is, at the very least, to expand what is now Federal Rule of Evidence 702, which applies only to human experts and the opinions of human experts, and apply it to expert systems as well. And so the idea would be, if the conveyance of information from machine, from a machine would be subject to rule 702 if it were conveyed by a human expert, then the machine also needs to meet the requirements, the reliability requirements, the basis of information requirements that a human expert would have to do. Part of that also. So that’s number one. Number two is to allow better impeachment of machines, like if a machine has said two inconsistent things, they should be allowed to be impeached by inconsistency, just like a human witness would, and that would require a very easy fix to current Federal Rule 8069. I think another thing that needs to happen is a change in pretrial disclosure rules. So I think the things that would be key would be to allow pretrial access by the opposing party to the algorithm for a meaningful amount of time before trial number one number two, make it a condition of admissibility under this new rule, you know right now it’s considered a rule 707 which would extend 70210 to machines, make it a condition that the program is available to independent researchers for audits, that you at least give independent researchers licenses and that they are freely available for money. You know, here at Berkeley, we’ve been trying to get a license to work with TrueAllele and STRmix on the DNA stuff, and they TrueAllele rejected it out of hand, and STRmix placed impossible conditions on it. So lots of these programs are just not even available. Not even a license is available to independent researchers to do their own research on it. Number three would be to make sure that the validation studies are similar to the case at hand. So you know if the case at hand involves a really complex DNA mixture with several contributors who are like. Related to each other and a low quantity of DNA, then the software has to be shown to be valid in those circumstances before being admitted. And that’s not what we have right now. And then the final thing would be, you know, right now there are rules in the federal system in most states that say that when you call a human witness to the stand, you have to give the opposing party all of their prior statements on the same subject matter to allow the opposing party to possibly impeach them. And I think we need that rule for machines as well. If this machine has said, has given made prior statements with respect to this case or this sample, then those should be disclosed. And, you know, I think, I think those requirements would would go a long way to massively improving the system without a complete overhaul of the system. So I think they’re realistic reforms.
[Eric Ahern] 36:30
Wonderful. Well, thank you so much for walking us through those. We are lucky to have a mind like yours thinking about these issues.
[Professor Andrea Roth] 36:40
Thank you.
[Eric Ahern] 36:43
No problem. As my final question for our interview today (and this has been so informative, and it’s been such a pleasure to chat with you): as AI continues to evolve and become more sophisticated, what, in your opinion, is the most urgent legal question we need to address?
[Professor Andrea Roth] 37:02
I think there is a more mundane but immediate question, and then a much more profound question. The more immediate question, and it’s not even mundane, it’s just mundane by comparison, is: how do we give parties the tools to meaningfully scrutinize the reliability of machine assertions? Because they’re going to become even more sophisticated, really fast, and they’re going to be making these claims, and we’re going to be deciding cases based on them, and we’ve got to have better tools. Number two, the more profound, broader question is: how do we ensure that algorithms don’t just make all our decisions? You know, how do we ensure that we still have a system of human-rendered justice? And that’s going to be up to us, to make sure that we have humans in the loop, that we don’t allow humans to just defer to machines, and that we make sure that the judgments that humans are good at making, like moral judgments, are never delegated to machines. Machines can tell us whether the light was red, but they can’t tell us whether somebody was driving, you know, wantonly, willfully, and recklessly. They just can’t, and they never will.
[Eric Ahern] 38:25
Understood. Well, thank you so much for your time, Professor Roth. It has been such a pleasure, and I hope our listeners enjoy this wonderful episode.
[Professor Andrea Roth] 38:34
Thank you, Eric, and thanks to the Tech Law Journal for doing this podcast. Appreciate it.
[Joy Fu] 38:44
You have been listening to the Berkeley Technology Law Journal Podcast. This episode was created by Eric Ahern, Cameron Bankes, Quinn Brashares, Joy Fu, Eric Miranda, and Robert Thyberg. The BTLJ podcast is brought to you by Senior Podcast Editors Joy Fu and Lucy Huang, and Junior Podcast Editors Cameron Bankes, Ellen Huang, Robert Thyberg, Paul Wood, and Eliza Zou. Our Executive Producer is BTLJ Senior Online Content Editor, Jesse Wang. BTLJ’s Editors-in-Chief are Yasameen Joulaee and Emily Rehmet. If you enjoyed our podcasts, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. Write to us at btljpodcast@gmail.com with questions or suggestions of who we should interview next. This interview was recorded on April 9, 2025. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.