SPEAKERS Audrey Mitchell, Eric Ahern, Meg O’Neill
Podcast Transcript:
[Meg O’Neill] 0:08
Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Meg O’Neill and I am one of the editors of the podcast.
Today, we are excited to share a conversation between Berkeley Law 3L student Eric Ahern and Audrey Mitchell, a current fellow with Berkeley’s AI Policy Hub and a 2L student here. Audrey brings a unique interdisciplinary perspective to the conversation around AI and the future of litigation, combining her background in STEM with legal experience in patent litigation and public interest tech law. At Berkeley’s AI Policy Hub, Audrey has been leading innovative research on how artificial intelligence is reshaping civil and criminal litigation—and whether our current legal rules are equipped to keep up. In this episode, we will explore the impact of AI-generated evidence on courtroom procedure, how judges are grappling with issues like authentication and bias, and why procedural safeguards—like discovery and transparency—matter more than ever in the age of machine-generated proof. This episode will be Part I of a two-part series on AI and rules of evidence. Whether you’re a technologist, a lawyer, or simply curious about how the law is adapting to AI, this series will offer an accessible and thought-provoking look into the legal system’s evolving relationship with artificial intelligence. We hope you enjoy the podcast!
[Eric Ahern] 1:53
Hi, I’m your host Eric Ahern. Audrey, thank you so much for joining us today.
[Audrey Mitchell] 1:58
Thank you for having me.
[Eric Ahern] 1:59
So today, we’re going to be talking about your work and your research, and specifically how your research connects to AI and AI’s reshaping of litigation. Could you give us an overview of how AI is already being used in civil and criminal cases?
[Audrey Mitchell] 2:19
I would say AI is being used both in the pre-trial stages of litigation and at trial itself. Pre-trial, it's been used for case law research and for drafting filings before the court. I think we've probably all heard of the high-profile cases where attorneys submitted filings with hallucinated citations, which is not a great look. And of course, a lot of law students are using Lexis AI or Westlaw AI, and many law firms have proprietary AI research tools as well. Then at trial itself, we're seeing allegations, and concerns about allegations, of deepfaked evidence being brought into the courtroom purporting to be real evidence, in addition to AI being used as the basis for expert opinions, whether that's an AI-powered tool like facial recognition technology or just a chatbot that an expert witness asks a question of. Something we're also starting to see is judges potentially using AI to draft opinions or actually to come to decisions. I don't know if the latter is happening yet, but there are tools entering the market for that purpose.
[Eric Ahern] 3:45
Fascinating. And you’ve studied judicial standing orders related to AI-generated evidence. Have judges developed any creative ways to authenticate or challenge AI evidence?
[Audrey Mitchell] 3:56
So, I should clarify that the vast majority of judicial standing orders on AI so far pertain only to AI’s pretrial use, that is, how AI is used to draft or research filings before trial. I’ve actually only found one standing order that addresses AI and evidence at trial, from Magistrate Judge Peter Kang of the Northern District of California. And his order doesn’t give any specific instructions on how to authenticate or challenge AI-generated evidence; it just says something to the effect of, do not use AI to fabricate facts, and if you are using AI to generate evidence in some other way, maybe for demonstrative purposes, you have to give notice to the court and the other party. Beyond that, I haven’t seen anything in standing orders, although we do have very early draft amendments to the Federal Rules of Evidence that get at that authentication issue.
[Eric Ahern] 5:05
Excellent. And switching now to a discussion about AI and bias in the legal system: AI tools like COMPAS have been found to have higher false positive rates for minority defendants. What steps should courts take to mitigate these biases?
[Audrey Mitchell] 5:20
Well, I think that judges have to tread carefully if they’re going to consider AI-powered tools that have been shown to have bias. To take your example of COMPAS, my understanding is that it’s a risk assessment tool judges have used in sentencing determinations. We do have some case law on this issue, Loomis v. Wisconsin, which alluded to, but did not find, a due process problem where such a tool is just one factor, perhaps, that judges can consider in reaching a sentencing determination. The problems really occur when judges place too heavy a reliance on an AI-powered tool without that tool being able to make an individualized determination about a particular criminal defendant, and without anyone in the courtroom having much knowledge about how the AI was trained or how it came to the output it produced for that defendant. So I would expect there to be more movement on that going forward, but there are definitely due process concerns.
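To make the false positive rate point concrete: the false positive rate here is the share of defendants who did not reoffend but were nonetheless scored as high risk. The short Python sketch below shows how that metric is computed per group. The records, group labels, and resulting numbers are entirely hypothetical and are only meant to illustrate the kind of disparity that audits of COMPAS have reported, not to reproduce any real data.

```python
# Minimal sketch of a false-positive-rate disparity check.
# All records below are hypothetical and for illustration only.
from collections import defaultdict

# Each record: (group, scored_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

# False positive rate per group: among people who did NOT reoffend,
# the fraction who were nonetheless flagged as high risk.
counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, high_risk, reoffended in records:
    if not reoffended:
        counts[group]["negatives"] += 1
        if high_risk:
            counts[group]["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"{group}: false positive rate = {fpr:.2f}")

# Unequal rates across groups (0.50 vs. 0.67 on this toy data) are the
# kind of disparity at issue in the COMPAS debate.
```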
[Eric Ahern] 6:28
You touched on transparency issues there, which we will get to in just a moment. But first, I want to ask: in your research, have judges expressed concerns about whether criminal defendants can meaningfully challenge AI-generated evidence?
[Audrey Mitchell] 6:44
Yes, I think there are concerns about whether criminal defendants will be able to make meaningful and successful objections to AI-generated evidence. The two rule systems we’ve seen most implicated are the authentication rules and the expert testimony rules. In terms of authentication under the current Federal Rules of Evidence, it’s a very low standard; it’s sufficiency. So if any reasonable juror could find a piece of evidence to be authentic, then it comes in under Rule 901 as written. With deepfakes becoming more and more realistic, how possible is it for a criminal defendant to show that a certain piece of evidence has not met even that low sufficiency standard? In response to that, the Federal Rules of Evidence advisory committee is considering a rule change that would trigger a higher standard for authenticity when there is an allegation that a piece of evidence was generated by AI.
[Eric Ahern] 7:50
Understood. Thanks for that explanation. Now seems like a great time to pivot to transparency and procedural issues. Companies may be able to claim trade secret protections over their AI models, making it difficult for defendants to challenge AI evidence. Should courts restrict these claims in criminal trials? How important is transparency here?
[Audrey Mitchell] 8:13
Transparency is absolutely crucial for criminal defendants to be able to mount a defense when there’s a piece of AI-based evidence against them. So then it becomes a question for the courts about whether they prioritize that transparency, which lets defendants mount their case, or protecting businesses’ trade secrets. We do have some case law on this topic as well. I believe it’s People v. Chubbs, where the court laid out something like a balancing test: how necessary is the information, what specific pieces of information does the defendant actually need to be able to challenge the system, and how did it come up within the context of the case? So I’d say that’s an ongoing question.
[Eric Ahern] 9:02
Some have suggested that courts should develop specialized AI review panels, similar to the Patent Trial and Appeal Board, or PTAB, for patent cases. Do you think this is a viable solution, or would cases involving AI evidence be better handled within the existing court system?
[Audrey Mitchell] 9:22
Yeah, I don’t know how workable that solution is from my perspective. There’s this ongoing debate as AI comes to the forefront of the legal landscape about the law of the horse and how it applies to AI.
[Eric Ahern] 9:40
The law of the horse? Let’s hear more about that.
[Audrey Mitchell] 9:42
Right. So the law of the horse is this idea that you come into law school and look through the courses you could sign up for as electives, and what you don’t see is an elective on horse law. Instead, if you’ve purchased a horse and there’s a problem, contract law steps in; if someone steals your horse, tort law and criminal law step in. So it would be a sort of absurd result to create a whole new body of law just to deal with the subject matter of horses, when we’ve already got all of these bodies of law that can deal with different facets of horses. Similarly, how much do we have to build a whole new body of law, including specialized AI review courts, to deal specifically with AI-and-the-law issues, versus how much will these issues just slot into our existing rule systems, perhaps with some adjustments to deal with AI-specific problems? I know there are already ongoing cases addressing how well AI fits into existing copyright law regimes and products liability regimes, so I would say that’s the more natural solution here, rather than specialized AI review boards. But of course, a necessary prerequisite for existing rule systems to accommodate new AI issues is for judges to have a baseline understanding of the AI technologies involved.
[Eric Ahern] 11:16
The law of the horse is an excellent analogy to apply here, and I know a lot of law professors who will be very happy to hear you bring that up in this context. Looking ahead now to the future of AI and the law: you’re working on a research paper about AI’s role in legal proceedings. Can you tell our audience a little bit about that? What do you hope your work will contribute to this field?
[Audrey Mitchell] 11:42
Absolutely. So, my current research, the paper I’m working toward publishing, looks at how AI fits into existing rule systems governing the litigation process. That covers some of the things we’ve already spoken about, like the Federal Rules of Evidence, the Rules of Civil Procedure, attorneys’ duties, and judicial standing orders. Something I’m thinking about is that, given how quickly AI technology is evolving, how little information we have about how it will really impact the litigation process, and how slowly the amendment process moves for things like the Federal Rules of Evidence, standing orders from individual judges might be a good way to respond more quickly to AI problems that judges are actually seeing in their proceedings, and to see how well a variety of solutions work out for those individual judges before trying to implement anything on a nationwide scale. But of course, these standing orders have to be crafted very carefully: defining AI, which is a bigger problem than it initially seems; staying consistent across jurisdictions, or at least within a given jurisdiction; and maintaining fairness to litigants. So one of the things I hope my paper does is bring awareness to the potential power, but also the potential problems, of these standing orders. As of now, only a minority of federal judges even have AI-related standing orders, and even those who do don’t necessarily understand the implications of those orders. Many haven’t had a chance to enforce their standing orders, so it’s not clear how they’ll play out, and many judges aren’t aware of whether judges in their same courthouse have AI standing orders that contradict their own.
[Eric Ahern] 13:47
You mentioned standing orders, so I completely understand if that’s your answer to this next question, but if you could implement one reform today to ensure AI is used responsibly in legal proceedings, what would it be?
[Audrey Mitchell] 14:04
I’m actually going to go even more basic than standing orders. This is something I think will be foundational to any solution we arrive at for the problems AI could present to the litigation process, whether that ends up being standing orders, amendments to the FRE, or anything else: judges have to have a baseline understanding of the technology underlying AI systems. That understanding has to be ongoing as AI continues to evolve, and they have to understand the limitations of AI. I believe the Federal Judicial Center released a little bit of AI guidance for judges back in 2023. At this point that information may be a bit out of date, and it could also go more deeply into how AI could impact evidentiary battles over expert testimony and authentication.
[Eric Ahern] 15:05
Well, I think it’s becoming abundantly clear that not just judges, but all lawyers need to gain a fundamental understanding of artificial intelligence, how it works, the risks and also the possibilities it opens up for the law. Audrey Mitchell, thank you so much for joining us today.
[Audrey Mitchell] 15:25
Thank you for having me.
[Meg O’Neill] 15:35
You have been listening to the Berkeley Technology Law Journal Podcast. This episode was created by Eric Ahern, Quinn Brashares, Joy Fu, and Eric Miranda. The BTLJ podcast is brought to you by Podcast Co-editors Juliette Draper and Meg O’Neill, and Junior Podcast Editors Braxdon Cannon, Joy Fu, Lucy Huang, and Paul Wood. Our Executive Producer is BTLJ Senior Online Content Editor, Linda Chang. BTLJ’s Editors-in-Chief are Edlene Miguel and Bani Sapra. If you enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. Write to us at btljpodcast@gmail.com with questions or suggestions of who we should interview next. This interview was recorded on March 7, 2025. The information presented here does not constitute legal advice. This podcast is intended for academic entertainment purposes only.