AI has taken the legal industry by storm, promising to transform civil litigation for both clients and attorneys by lowering costs, increasing efficiency, identifying helpful factual and legal patterns, and even predicting case outcomes. There are, however, cautionary tales of briefs and expert reports filed with courts that cite fake sources and data or misrepresent the content of legitimate sources, exposing attorneys and parties to sanctions and even dismissal of their claims and defenses. This article discusses potential uses of AI in civil cases and highlights some of the challenges and concerns that must be considered and addressed when using AI in civil lawsuits and arbitration proceedings.
Legal Research
For decades, attorneys have been able to conduct research online, using word searches on legal research platforms to quickly access and cross-check case law, statutes, regulations, administrative rulings, and secondary sources. Legal research platforms have become more robust and sophisticated and now use AI to make research suggestions and guide practitioners to other potentially applicable sources. This can reduce research costs and help attorneys quickly find the strongest legal authority supporting their clients’ cases.
Discovery
With the proliferation of electronic communications, documents, and other electronically stored information across a wide variety of platforms and file types, document review and management can be expensive and time-consuming. Using natural language processing and machine learning algorithms, AI allows attorneys to sort, review, and analyze voluminous and complex data sets more efficiently, accurately, and cost-effectively. Among other things, AI can detect factual patterns, sort and group data, summarize the content of documents and transcripts, and create timelines, all of which are useful for determining case strategy, finding key documents, preparing for depositions, and developing factual evidence in support of clients’ cases.
Court Filings
AI tools can provide drafting and editing support. For instance, AI can be used to check spelling and make grammatical and other editorial suggestions in court filings such as briefs and motions. AI tools can also review and analyze the legal citations in a filing before it is submitted to the court or an arbitrator, and review the opposing party’s filings for accuracy and veracity. These capabilities save time and increase efficiency in preparing and reviewing court filings and in confirming that the law relied upon is still good law and is accurately represented.
Predictive Analytics
AI tools can analyze information such as court filings and reported jury verdicts, settlements, and other data to predict how a judge or arbitrator may rule or to predict the likely settlement value of a case. These predictive analytics may help attorneys evaluate client risk and assist in determining case strategy.
Challenges and Concerns
AI is a rapidly evolving technology and must be used carefully and cautiously in civil litigation. Legal experts agree that “[h]uman insight is essential to the legal process.”[1] AI is a tool to be used by attorneys, judges, and arbitrators, not a replacement for them.
Having a human in the process ensures accuracy and the quality of results, a process artificial intelligence experts call a ‘human in the loop.’ AI-supported, human-led processes allow human users to take over decision making or complex thinking while harnessing artificial intelligence’s capacity for breadth and speed. Human in the loop allows for consistent results in less time—which is why analysts . . . predict that these tools will soon make up 30% of automation in legal tech.[2]
A human in the loop is particularly important because AI can be “biased” and can “hallucinate,” leading to serious consequences in litigation. Because of these risks, many state bar associations have released guidance on the use of AI tools in the legal profession, and a growing number of state and federal courts are requiring attorneys to disclose or monitor the use of AI in their courtrooms.[3] AI also presents confidentiality, security, and privacy concerns. These challenges are discussed in more detail below.
AI Bias and Hallucinations
“AI bias, also called machine learning bias or algorithm bias, refers to the occurrence of biased results due to human biases that skew the original training data or AI algorithm—leading to distorted outputs and potentially harmful outcomes.”[4] In other words, AI can produce distorted and incorrect results, and relying on those results may negatively affect the outcome of a case.
AI hallucinations are “incorrect or misleading results that AI models generate.”[5] Stated otherwise, “AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”[6]
A Stanford University study found that general-purpose chatbots hallucinate “between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice.”[7] Researchers also found that even “bespoke legal AI tools still hallucinate an alarming amount of the time…”[8] A growing body of case law demonstrates just how high the risks are when parties rely on AI tools in civil proceedings without human verification.
For example, a Missouri appeals court held that a self-represented party’s conduct in filing a brief containing more than twenty fictitious case citations generated by AI rose to the level of abuse of the judicial system. The court dismissed the defendant’s appeal as frivolous and awarded the appellee $10,000 in damages.[9] In a recent federal case, the district court imposed monetary sanctions on a plaintiff’s attorneys after they filed motions in limine citing nine cases, eight of which were nonexistent and had been generated by an AI platform.[10] The Colorado Court of Appeals has likewise warned lawyers and self-represented parties that any appellate filing containing AI-generated hallucinations may result in sanctions.[11]
It is also important for clients and experts to use AI responsibly and to disclose its use to legal counsel. In an unpublished federal district court case, the defendant’s expert witness submitted an expert opinion that relied, in part, on hallucinations created by ChatGPT-4o. The expert failed to check all of his citations before submitting the report to ensure they were real and accurate. When the hallucinations were discovered and presented to the court, the court excluded the expert’s testimony, stating that “the Court cannot accept false statements—innocent or not—in an expert’s declaration submitted under penalty of perjury.”[12]
Confidentiality, Security, and Privacy Concerns
Finally, use of AI tools in civil litigation poses confidentiality, security, and privacy concerns. While “[m]any companies offer safe, secure, and encrypted platforms that incorporate generative AI technology to protect client data . . . many publicly available generative AI programs aren’t encrypted and data input to them could become publicly available.”[13] Thus, attorneys, experts, and clients should fully investigate the safety and security protocols of any AI tools they intend to use and avoid using open platforms.
Many AI tools are available to clients, experts, and attorneys in civil lawsuits and arbitration proceedings, and those tools are likely to become more powerful as machine learning improves. These tools are not foolproof, however, and require human verification and oversight to avoid serious court sanctions, loss of credibility, and dismissal of claims or defenses.
This article is informational only. The information provided on this website does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this site are for general informational purposes only. Information on this website may not constitute the most up-to-date legal or other information. Readers of this website should contact their attorney to obtain advice with respect to any particular legal matter. No reader, user, or browser of this site should act or refrain from acting based on information on this site without first seeking legal advice from counsel in the relevant jurisdiction. Only your individual attorney can provide assurances that the information contained herein—and your interpretation of it—is applicable or appropriate to your particular situation. All liability with respect to actions taken or not taken based on the contents of this site is hereby expressly disclaimed. The content of this posting is provided “as is”; no representations are made that the content is error-free.
[1] See, e.g., https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2023/how-artificial-intelligence-in-document-processing-impacts-a-legal-firm/.
[2] Id.
[3] See, e.g., Tracking Federal Judge Orders on Artificial Intelligence at https://www.law360.com/pulse/ai-tracker.
[4] https://www.ibm.com/think/topics/ai-bias.
[5] https://cloud.google.com/discover/what-are-ai-hallucinations.
[6] https://www.ibm.com/think/topics/ai-hallucinations.
[7] https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries.
[8] Id.
[9] Kruse v. Karlen, 692 S.W.3d 43, 52-54 (Mo. App. 2024).
[10] Wadsworth v. Walmart Inc., ___ F.R.D. ___, 2025 WL 608073, at *2, *6-7 (D. Wyo. Feb. 24, 2025).
[11] Al-Hamim v. Star Hearthstone, LLC, 564 P.3d 1118 (Colo. App. 2024).
[12] See Kohls v. Ellison, No. 24-cv-3754 (LMP/DLM) (D. Minn. Jan. 10, 2025), available at https://fingfx.thomsonreuters.com/gfx/legaldocs/lgpdjbnkjpo/Kohls%20v.%20Ellison%20-%20Provinzino%20order.pdf.
[13] https://news.bloomberglaw.com/us-law-week/with-ai-use-lawyers-need-to-ponder-confidentiality-stipulations.
