By Audrey Mitchell, J.D. Candidate, 2026
GenAI’s threat to evidence authentication
In June 2023, the story of Mata v. Avianca swept the nation as the first highly publicized case of AI-hallucinated citations in a brief. A multitude of hallucinated-citation cases have followed since, often drawing more publicity than the underlying disputes.
However, the rancor caused by these pretrial hallucinated citations pales in comparison to the concerns surrounding generative AI’s potential use at trial. Generative AI’s (“genAI”) ability to “deepfake” audiovisual evidence presents two concerns: (1) parties could present deepfaked evidence as real, or (2) parties could challenge real evidence as deepfaked, requiring resources for evidence validation on top of an already lengthy and expensive litigation process. In either case, genAI undermines trust in litigation and could render all evidence suspect.
In response, the Federal Rules of Evidence Advisory Committee is considering amending Rule 901. Rule 901 currently provides that evidence is authenticated if there is a sufficient basis to find that it is what the proponent claims it is. The legal field is increasingly concerned that this bar is too low, especially because authenticity is a threshold requirement for admissibility. Even scholars who previously criticized a genAI-specific amendment to Rule 901, such as Berkeley Law’s Professor Wexler, now advocate for a change.
In advocating for change, we must consider the on-the-ground reality: very few cases so far have involved deepfake allegations, but those that have show strikingly inconsistent results. This blog post presents an overview of how judges have responded to allegedly deepfaked evidence in those cases.
What our limited case law tells us
In some cases, judges have displayed strong displeasure at a party trying to pass the buck by crying “deepfake” without basis. In Huang v. Tesla, discovery revealed a video of Elon Musk making statements about Autopilot’s safety. Plaintiff submitted a Request for Admission asking Tesla to admit the video’s authenticity. Tesla refused, citing Elon Musk’s fame and the possibility that he could be targeted by deepfakes. The court rebuked Tesla for its refusal to cooperate and raised a slippery-slope concern that every famous person could “hide behind the potential for their recorded statements being a deepfake to avoid taking ownership of what they did actually say and do.” Similarly, in Valenti v. Dfinity, defense moved to disqualify plaintiff’s attorney over statements caught on video. In response, plaintiff sought to introduce an expert report alleging these videos were deepfaked. The court sided with the defense, finding that plaintiff’s deepfake allegation “underscores the concern that [plaintiff’s counsel] is heavily invested in protecting its own interests, to the detriment of those of the class.”
These cases suggest that deepfake allegations are often an unsuccessful way to challenge authenticity. However, other responses to deepfake allegations show how ill-suited the current rules are to accurately determining authenticity. In USA v. Khalilian, defense moved to exclude a voice recording ostensibly capturing the defendant making threats, on the grounds that it could be deepfaked. In a colloquy with the court, the prosecution argued that a witness familiar with the defendant’s voice could listen to the audio and affirm that it sounds like the defendant. The court responded, “That’s probably enough to get it in.” This level of scrutiny may not be enough to adjudicate deepfake allegations, as traditional markers of authenticity, such as the sound of a person’s voice in a recording, can now easily be falsified.
Other cases, however, demonstrate the second concern: the potency of the deepfake defense, whereby a party can question any evidence’s authenticity, and undermine the jury’s trust in it, without any basis in fact. In the following cases, there was no evidence of genAI involvement, yet the judges allowed counsel to question authenticity.
At the Wisconsin v. Rittenhouse trial, the prosecution attempted to zoom in on an iPad video that had already been admitted into evidence. Defense objected, arguing that Apple’s pinch-to-zoom function uses AI to manipulate video. The court held that the prosecution, as the proponent, had the burden to show via expert testimony that pinch-to-zoom would not alter the underlying footage. Because the prosecution did not have an expert on hand, it was not allowed to zoom in on the video. Finally, during the US v. Reffitt trial, the court similarly entertained a baseless deepfake allegation. On direct examination of an FBI agent, the prosecution entered, without defense objection, a video of the defendant at the January 6th riot. On cross-examination, defense questioned the agent about the possibility that the video was AI-manipulated. The prosecution objected. The court allowed the questioning, even though defense counsel, when asked, could not provide any support for its theory of AI manipulation.
The path forward
The above cases exemplify the dueling concerns that judges will need to balance. A prerequisite to fair rulings that neither over- nor under-include evidence is judicial education on AI. Judges must learn genAI’s capabilities, limitations, and misuses to understand these technological disputes.
Judges can also use existing rule structures to guard against both AI-manipulated evidence and baseless deepfake allegations. For instance, Model Rule of Professional Conduct 3.1 requires that lawyers assert issues only when they have a basis in law and fact. Judges should remind attorneys that, under this rule, they cannot frivolously assert a deepfake defense. In addition, judges can implement standing orders on genAI. Judges have broad discretion to regulate their proceedings through standing orders, and genAI standing orders can provide for pretrial hearings on authenticity when deepfake allegations arise, preclude baseless claims of AI manipulation in front of the jury, or set out any other steps judges deem appropriate in response to parties’ abuse of this evolving technology.
However, the threat of deepfaked evidence must eventually be addressed at a systemic level. A proposed Rule 901(c) amendment to the Federal Rules of Evidence would conditionally raise the burden for authenticity above the current sufficiency standard. Under this amendment, a party could object to the authenticity of evidence on deepfake grounds. The objection itself, however, would need to meet the sufficiency standard: the opponent would need to show a sufficient basis to find that the evidence is deepfaked. The proponent would then need to show by a preponderance standard (more likely than not), rather than under the ordinary sufficiency standard, that the evidence is in fact authentic.
The Advisory Committee should accept this proposal immediately, beginning the three-year process to amend the FRE. In the meantime, the responsibility for preventing genAI misuse in legal proceedings, rather than allowing genAI to control juries’ perception of evidence, will fall to judges.