Who—or what—is to blame for 2 federal judges' error-filled withdrawn opinions?

The blame for substantial errors in opinions recently withdrawn by two federal judges likely lies in the first instance with artificial intelligence, but the buck doesn’t stop there, observers say.
U.S. District Judge Henry T. Wingate of the Southern District of Mississippi and U.S. District Judge Julien Xavier Neals of the District of New Jersey withdrew the opinions after litigants pointed out the errors.
“I am almost certain the errors that have been described stem from the use of AI,” says former U.S. District Judge Shira A. Scheindlin of the Southern District of New York, who is now of counsel with Boies Schiller Flexner. “I have never heard of a judge or law clerk making up a case name or a quote, but we all know that AI ‘hallucinates’ and does that.”
Wingate’s temporary restraining order referred to nonexistent allegations, parties and declarations. Neals’ opinion denying a motion to dismiss misstated case outcomes and used fake quotes attributed to opinions and the defendants.
Scheindlin tells the ABA Journal in an email that, in her experience, most judges use law clerks to conduct research. And most ask their law clerks to prepare first drafts of opinions after discussing each issue, so that the clerk knows how the judge wants to rule. After a draft is produced, “the judge then reviews that draft very carefully, often extensively editing the initial draft. That is how the process worked in my chambers.”
Litigants had asked Wingate to explain how the mistakes became part of his opinion, but he refused to provide an explanation other than to attribute the problems to “clerical errors.” As for the problems in Neals’ opinion, an unidentified person familiar with the matter told Reuters that a temporary assistant used AI to research the opinion, which was inadvertently posted on the docket before a review process.
U.S. District Judge Julien Xavier Neals of the District of New Jersey and District Judge Henry T. Wingate of the Southern District of Mississippi. (Photos by Tom Williams/Pool/AFP via Getty Images and Rogelio V. Solis/The Associated Press)
The federal judiciary is responding to the issues with an AI task force that “is in the process of considering developing AI use policies” for the Judicial Conference of the United States, according to a spokesperson for the Administrative Office of the U.S. Courts.
In the spring, the ABA’s Task Force on Law and Artificial Intelligence, through its Working Group on AI and the Courts, announced that it would be publishing a paper on guidelines for judges and staff. The guidelines have been published by the Sedona Conference Journal and the Judges’ Journal, an ABA publication. Senior Judge Herbert Dixon Jr., retired from the District of Columbia Superior Court, is listed as an author on both publications.
The guidelines—written by Dixon, four other judges and a computer scientist—see several potential roles for the use of AI by courts, including for legal research; to assist in drafting routine orders; and to search and summarize depositions, exhibits, briefs, motions and pleadings.
When AI is used for legal research, the tool should have been trained on a comprehensive collection of legal precedent, and the user must be aware of the possibility of errors, according to the guidelines.
The quality of a generative AI response often depends on the prompt, the guidelines warn. Responses to the same prompt can vary at different times. As for the problem of “hallucinations”—content made up by AI—no known generative AI tools had resolved the issue as of February 2025, the guidelines say.
The technology can increase productivity for the bench, but the guidelines emphasize that judges must remain vigilant, wrote Dixon, the immediate past chair of the ABA Journal’s Board of Editors, in his introduction for the Judges’ Journal article.
“AI serves as a tool to enhance, not replace, their fundamental judicial responsibilities,” wrote Dixon, citing the guidelines.
Scheindlin agrees with that assessment.
“All judges—from the Supreme Court, the circuit courts and the trial courts—have a staff of exceptionally talented law clerks,” Scheindlin says. “But at the end of the day, the judge is responsible for the final opinion.”
In the early 2000s, Scheindlin wrote several precedential opinions on e-discovery matters. She says judges should “of course” be aware of the dangers of using AI to draft and write opinions.
Judges have a heavy caseload, and trial judges in particular spend a good deal of time in the courtroom, she says.
“It is really not possible for those judges to do a first draft of every issue submitted to them for decision. I think most judges would instruct their law clerks to be very careful when using AI and to check all results obtained through that process,” Scheindlin adds. “But at the end of the day, the judge is responsible for the final opinion.”