🔬 Research Summary by Maura R. Grossman, a research professor in the School of Computer Science at the University of Waterloo, an adjunct professor at Osgoode Hall Law School of York University, and an affiliate faculty member at the Vector Institute for Artificial Intelligence, all in Ontario, Canada.
[Original paper by Maura R. Grossman, Paul W. Grimm, Daniel G. Brown, and Molly (Yiming) Xu]
Overview: This article provides a comprehensive yet comprehensible description of Generative Artificial Intelligence (GenAI), how it functions, and its historical development. It explores the evidentiary issues the bench and bar must address to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. It offers practical, step-by-step guidance for judges and attorneys to follow in meeting the evidentiary challenges posed by GenAI in court. Finally, it highlights additional impacts that GenAI evidence may have on the law and the justice system.
Introduction
In the past few months, GenAI has come to the forefront of the news media and captivated the public’s attention. Students are using OpenAI’s ChatGPT to do their schoolwork for them, to the alarm of teachers and school boards. An administrator at Vanderbilt University used ChatGPT to write a message to the university community in response to tragic shootings at Michigan State, which sparked outrage. Websites routinely use images generated by Midjourney and Stable Diffusion, and cover artists and other illustrators are suddenly fearing for their livelihoods. Clarkesworld, a major science fiction magazine, had to close its doors to new submissions after an influx of AI-generated stories prevented it from performing its normal review process for new manuscripts. Increasingly lifelike pornographic videos and still images are being created using AI systems that incorporate the faces and bodies of celebrities and other pop culture figures into the media they generate.
These systems did not come out of nowhere. For decades, systems that simulate creativity or generate text have been a thriving branch of computer science research. But in the past few years, this technology has become increasingly powerful. The quality of these systems is now such that it is challenging to tell computer-generated images from those produced by human illustrators or photographers, or to separate text generated by a computer from that written by a human author. Similarly, evidentiary materials—including documents, videos, audio recordings, and more—that are AI-generated are becoming increasingly difficult to distinguish from non-AI-generated materials. While it may seem that GenAI will not reach the courtroom for years, these cases will come our way much sooner than anyone thinks.
Key Insights
GenAI systems such as ChatGPT have recently developed to the point where they can produce computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos, and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, whether juries will be able to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, vexatious or otherwise. GenAI systems also have the potential to challenge existing substantive intellectual property (IP) law by producing content that is machine generated rather than human generated, yet relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter how lawyers litigate and how judges decide cases.
This article presents several different but highly realistic hypothetical case examples showing how GenAI issues might arise in litigation very soon. It explains where GenAI came from and describes the important developments, both in Generative Adversarial Networks (GANs) and in transformer architecture, that have occurred since 2014 and that have led to the tremendous advancements in GenAI seen in the past year. It describes the different types of GenAI that exist and explains what they can do. It then provides a detailed framework for how lawyers may offer or challenge, and how judges should resolve disputes relating to, evidence offered in court that may come from GenAI applications or may be truly human-generated but challenged as inauthentic. Next, it discusses the enhanced need for forensic experts to distinguish real from fake evidence and the problem of dueling experts, which will only increase the costs and delays of litigation.
The authors question whether juries will be hampered in deciding cases when they can no longer rely on their senses to assess the trustworthiness and weight of the evidence presented to them. They discuss how deepfakes will have a profoundly negative impact on the justice system, both by generating deep skepticism of all digital evidence and by encouraging the rampant use of a "deepfake defense" against evidence that is genuine. The authors welcome the possibility that GenAI will increase access to justice for parties who cannot afford legal counsel, allowing them to generate pleadings on their own, but they also express concern about the additional workload imposed on a judicial system that could be flooded with more AI-generated filings than it can handle.
The article explores whether and how GenAI will impact IP law, in particular, whether GenAI-created work is subject to copyright protection and whether the scraping of the Internet that is integral to training GenAI tools constitutes copyright infringement or "fair use" of published artistic content. The article closes by examining the use of GenAI by judges and cautioning about the risks that such use might entail.
Between the lines
This is the first piece of its kind to provide a thorough overview of GenAI as evidence in court; guidelines on how lawyers and judges can approach such evidence, which is likely to present severe challenges at trial; and a discussion of its implications for IP law and the justice system overall.