Expect less from AI in law

I suppose it was only a matter of time before AI hallucinations infiltrated legal writing. Still, I’ve been disappointed to see multiple stories of attorneys submitting briefs containing AI fabrications, the latest example coming from California’s 2nd District Court of Appeal. In Noland v. Land of the Free, L.P., et al., the Court dunked on the plaintiff’s attorney for submitting briefs full of fabricated quotations and nonexistent cases:

This appeal is, in most respects, unremarkable. [ . . . ] What sets this appeal apart—and the reason we have elected to publish this opinion—is that nearly all of the legal quotations in plaintiff’s opening brief, and many of the quotations in plaintiff’s reply brief, are fabricated. That is, the quotes plaintiff attributes to published cases do not appear in those cases or anywhere else. Further, many of the cases plaintiff cites do not discuss the topics for which they are cited, and a few of the cases do not exist at all. These fabricated legal authorities were created by generative artificial intelligence (AI) tools that plaintiff’s counsel used to draft his appellate briefs. The AI tools created fake legal authority—sometimes referred to as AI “hallucinations”—that were undetected by plaintiff’s counsel because he did not read the cases the AI tools cited.

The attorney involved was fined $10,000, which appears to be the highest monetary penalty levied so far against an attorney caught using AI. Check out this database of legal decisions where generative AI produced hallucinations; it has accumulated 390 cases so far.

CalMatters interviewed the attorney responsible for the AI hallucinations in Noland. He stated he “didn't know it would add case citations or make things up.” And honestly, I believe him. One of the biggest problems with generative AI and LLMs (large language models) is that many people do not realize how unreliable they are. An LLM doesn’t look anything up or check facts; it generates text by predicting which words are statistically likely to come next, so a fabricated citation comes out looking just as fluent and confident as a real one. According to a New York Times article from May 2025, hallucination rates of newer AI systems ran as high as 79 percent on one benchmark. On top of that, Meta just had a disastrous AI demo livestream that you can cringe at here.
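To make that concrete, here’s a toy sketch of my own (nothing from the cited articles, and nothing like a real model’s internals): a trivial generator that has only learned the *shape* of a citation will happily produce authority that looks right and does not exist.

```python
import random

# Toy illustration, NOT a real LLM: a generator that has only learned
# the pattern of legal citations. Like an LLM, it emits what is
# statistically plausible; unlike a researcher, it has no way to check
# whether the case it just "cited" exists.
random.seed(7)

PLAINTIFFS = ["Smith", "Garcia", "Whitfield", "Okafor"]
DEFENDANTS = ["Acme Corp.", "City of Fresno", "Pacific Holdings, L.P."]
REPORTERS = ["Cal.App.5th", "Cal.4th", "F.3d"]

def plausible_citation() -> str:
    # Every piece follows a real-looking pattern, but the combination
    # is invented -- the volume and page numbers are pure noise.
    return (
        f"{random.choice(PLAINTIFFS)} v. {random.choice(DEFENDANTS)} "
        f"({random.randint(1995, 2024)}) {random.randint(1, 99)} "
        f"{random.choice(REPORTERS)} {random.randint(100, 999)}"
    )

for _ in range(3):
    print(plausible_citation())  # fluent, citation-shaped, and fake
```

Every line it prints would look at home in a brief, and none of it is real. That’s the failure mode in Noland, just without the expensive model around it.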

Despite these glaring drawbacks, Silicon Valley snake oil salesmen like Sam Altman have done a great job of making AI seem capable of superhuman reasoning. Altman, CEO of OpenAI, is a media darling and has been covered extensively in pieces depicting him as a groundbreaking genius. But we’ve seen this before. Remember when Elon Musk promised we’d all be traveling by hyperloop and living on Mars? Now he just sits on Twitter all day retweeting white supremacists. We have to stop listening to these Silicon Valley weirdos.

Back to Noland: the offending attorney loses me later in the CalMatters article:

He thinks it is unrealistic to expect lawyers to stop using AI. It’s become an important tool just as online databases largely replaced law libraries and, until AI systems stop hallucinating fake information, he suggests lawyers who use AI to proceed with caution.

Oh c’monnn man. Online databases replacing law libraries is not the same thing as relying on an LLM to do your work for you. Also, yes, legal writing can be somewhat taxing, but that’s just the nature of the job. It’s what we went to school for. If you can’t write a brief without an LLM, might be time to hang it up!

Unfortunately, even if you use Westlaw to do your own research, the first thing you see is Westlaw hawking its generative AI tool. Because that tool pulls results from Westlaw’s own database, you may fare a bit better than letting ChatGPT hallucinate your cases outright. But grounding the model in real sources doesn’t make it reliable. It can offer you case summaries and link you to cases it thinks apply to the facts you provided, but as with any LLM, it can misinterpret the legal content it consumes. You will have to double-check anything it offers, so in the end, it’s not really more efficient.
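For the curious, this pull-from-the-database-first design is commonly called retrieval-augmented generation. Here’s a minimal sketch under my own assumptions (the data and function names are hypothetical, not Westlaw’s actual code or API) of why that design keeps the citations real but can still get the law wrong:

```python
# Retrieval-augmented generation in miniature. Step 1 can only return
# documents that actually exist in the curated database, so citations
# stay real. Step 2 is still an LLM doing probabilistic generation,
# so the summary can still misstate what those documents hold.

FAKE_DATABASE = {
    "premises liability": [
        "Hypothetical v. Example (2001) 90 Cal.App.4th 1",
    ],
}

def retrieve(query: str) -> list[str]:
    # Grounding step: look up real documents; nothing is generated here.
    return FAKE_DATABASE.get(query, [])

def summarize_with_llm(docs: list[str], query: str) -> str:
    # Generation step: in the real product this is a model call, and
    # it can misread a holding even though the sources are genuine.
    return f"Summary for '{query}' drawn from {len(docs)} retrieved case(s)."

print(summarize_with_llm(retrieve("premises liability"), "premises liability"))
```

The retrieval step is why a tool like this won’t invent a case out of thin air the way a bare chatbot can; the generation step is why you still have to read the cases yourself.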

And I’m not a TOTAL hater; I’m sure there are certain use cases for AI. But not for lawyers.

I’ll end by recommending Ed Zitron’s newsletter and his podcast Better Offline for comprehensive critiques of the AI industry. 
