Artificial Intelligence & Robotics

Are lawyers ‘extra careful’ about avoiding hallucinated cases? No, insurance coverage attorney says


Dan D. Kohane, left, with his wife, Chris Naples. A Hurwitz Fine insurance coverage partner in practice for more than 45 years, Kohane also tracks and posts about judicial opinions involving lawyers who submitted briefs with hallucinated cases. He insists he is not a Luddite.

Did you hear about how a California federal judge fined a law firm and one of its partners $10,000 for filing court documents with hallucinated cases in a lawsuit against OnlyFans’ parent company?

If not, Buffalo, New York-based insurance coverage attorney Dan D. Kohane can tell you all about this case—and many others—in which lawyers are getting into hot water over how they use generative artificial intelligence. For the past year and a half, he’s been writing LinkedIn posts about judges’ opinions blasting—and often sanctioning—lawyers who filed court briefs replete with AI-hallucinated cases.

Kohane admits that it’s something of an obsession for him at this point. He has more than 6,000 followers tracking his posts.

“For a lawyer to submit something to a court which is hallucinated is hard for me to imagine, and yet that’s what’s happening,” Kohane says. “It boggles my mind.”

It’s been clear for a while, Kohane says, that AI programs can hallucinate cases, faking citations and arguments. But once the legal community realized this was a risk, he adds, one would assume “lawyers would be extra careful before they submitted something.”

Many are not, according to Kohane. And he says once he started reading about cases in which lawyers submitted briefs with AI-created fake citations, he became increasingly committed to spreading the word.

“I keep thinking, at some point, lawyers will get it and stop embarrassing themselves,” he adds.

A senior partner in the Buffalo office of Hurwitz Fine, Kohane has been practicing law for more than 45 years, mostly focused on insurance coverage. He chairs the firm’s Insurance Coverage & Extracontractual Liability Team and its Mediation – Arbitration practice groups.

But he finds a special joy and mission in teaching the next generation of lawyers. For 38 years, he’s worked as an adjunct professor at the University at Buffalo School of Law, teaching insurance law. And he says it’s his interest in education that drives him to highlight the consequences of carelessly submitting AI-generated briefs.

Kohane says that he researches lawyers’ AI troubles and posts about them about twice a week. He insists he’s not a Luddite. In fact, Kohane says he’s “always been a gadget guy and a computer nerd.”

All he’s asking is that lawyers be aware that AI programs “can create these cases out of whole cloth” and that they review their court filings.

“We lawyers build our reputation on the submission of cases and arguments that are genuine and meaningful,” Kohane says. “Any lawyer who doesn’t take the time to review a case is forfeiting the ethical high ground.”

Kohane says he’s hoping lawyers start checking citations so he can stop writing about their mistakes.

“I don’t want to do this,” he says. “I feel an obligation, and yes, I’m passionate about it because I’m passionate about ethics.”