Judge Scott Schlegel of the Louisiana 5th Circuit Court of Appeal has said that when judges misuse generative artificial intelligence, it’s not just a problem but a potential crisis.
“When lawyers make mistakes, they get sanctioned, or the pleading gets dismissed,” says Schlegel, who is on the advisory council of the ABA Task Force on Law and Artificial Intelligence. “When judges do it, it becomes the law. That’s a much different thing.”
In recent months, federal judges in Mississippi and New Jersey withdrew their rulings after litigants pointed out several errors, including nonexistent allegations, misstated case outcomes and made-up quotes. A state appellate court in Georgia also overturned a divorce decree after discovering the trial judge’s order relied on fake caselaw.
To prevent these mishaps, Schlegel, four other judges and a computer science professor developed guidelines for responsible AI use by the judiciary. Collaborating as members of the ABA task force’s Working Group on AI and the Courts, they outlined specific ways judges could enlist AI tools for their work. Schlegel has also come up with his own set of guidelines for his courtroom that he says builds off of those proposed standards.
“There are plenty of guidelines out there for lawyers, but there was nothing that existed for judges,” says Schlegel, who notes that he and others on the bench use AI for legal research.
As an appellate judge, Schlegel also uses AI to search through extensive case records and to summarize testimony. He says this saves time, as the results cite directly to the record. He cautions judges to treat AI like a first-year law clerk and double-check everything.
U.S. Magistrate Judge Allison Goddard of the Southern District of California helped bring in experts to teach her and her colleagues about AI after judges nationwide began issuing standing orders that expressed concerns about lawyers’ use of the tools.
“I felt like, not only do these smell like fear, but they also inadvertently can chill or discourage people from trying things out,” says Goddard, who also helped draft the working group’s AI guidelines.
Goddard began experimenting with OpenAI’s ChatGPT and Anthropic’s Claude to see how litigants might use them. As one example, she and her law clerks uploaded a publicly available complaint and motion to dismiss in a pro se case and asked the tools to draft an opposition to the motion.
Goddard since has tried other tools, including vLex’s Vincent and Thomson Reuters’ CoCounsel, through trials offered to her court’s judges. She often uses AI to create timelines for cases and to summarize transcripts for settlement conferences. She also views AI as a “thought partner,” and in cases involving technical issues, asks what questions a judge should pose to litigants based on their briefs.
“Some of it is junk, right?” Goddard says. “But it helps with your thought process to evaluate what the tool suggests. It’s another angle for you to come at the information.”
Goddard warns judges not to upload confidential documents into AI tools and to check their terms of service to determine whether they use inputs or outputs to train their models. She also suggests judges use AI to detect hallucinated cases before they send out orders, a practice she and her clerks now follow.
The Working Group on AI and the Courts provides in its guidelines more judicial uses for AI, including drafting routine administrative orders, generating standard court notices and translating foreign language documents.
Several states, including California, Illinois and Arizona, also have released AI guidance for their courts.
Meanwhile, in late October, the Thomson Reuters Institute and the National Center for State Courts launched an educational program for judges to teach them about AI and how to use it.
Judge Samuel Thumma, who serves on Division One of the Arizona Court of Appeals, offers other examples of how judges can use AI. He mostly uses it for “law-adjacent” work such as writing articles. He also is working to simplify Arizona’s rules for protective orders for litigants, especially those who are self-represented.
“The other thing I’ve mused about is how we can use generative AI to enhance access to justice,” says Thumma, who chairs his state’s Commission on Access to Justice. “I’m vaguely obsessed with seeing what we can do to use this incredible machine for the forces of good.”
Thumma, another author of the working group’s AI guidelines, speaks about AI at judicial conferences. He generally asks how many judges in the audience have knowingly used AI tools and estimates the response to be about 50%.
“It doesn’t have to be legal stuff,” says Thumma, who suggests that nonusers start with something simple, like asking for the best risotto recipe. “Just get a feel for it, and if you like it, think about how you can implement it into what you do.”
Goddard uses AI to draft award nominations. She uploads the award criteria and her experiences with the nominee and then asks AI to use that information to create a first draft. She removes parts she doesn’t like and adds details she feels are missing.
AI could similarly be used for recommendation letters, says Goddard, who adds that she could upload her past letters into a tool like Google’s NotebookLM and ask it to use her writing style for future letters.
Schlegel doesn’t see any issues with judges using AI tools to proofread or edit their draft opinions. So what shouldn’t judges use AI for? Decision-making, Schlegel says.
“It’s part of the social contract—you want a human being struggling through the difficulties of a case and making a decision, not a bot,” he says.