ChatGPT creator isn't liable for embezzlement hallucinations that journalist didn't publish, judge says

A lawsuit filed by a radio host who alleged that a ChatGPT hallucination defamed him has been tossed by a judge who found no negligence or actual malice by OpenAI, the creator of the artificial intelligence platform.

Judge Tracie H. Cason of Gwinnett County, Georgia, ruled May 19 against nationally syndicated radio host Mark Walters, who bills himself as “the loudest voice in America fighting for gun rights.”

ChatGPT falsely said Walters was “defrauding and embezzling funds” from the Second Amendment Foundation, a nonprofit organization that supports gun rights, when a journalist repeatedly asked the chatbot in 2023 to provide a summary of a suit filed by the group.

The journalist had encountered disclaimers warning that some ChatGPT information may be incorrect. When he first asked ChatGPT to open a link to the suit and describe it, the platform answered that it could not open the link; the false information came only after additional queries.

Within about an hour and a half, the journalist established that the claims were untrue, and he never published the information.

An expert for OpenAI testified that the ChatGPT output “contained clear warnings, contradictions and other red flags that it was not factual.”

No reasonable reader would have understood that the ChatGPT output contained actual facts, so it is not defamatory as a matter of law, Cason said. In addition, Walters did not establish negligence or actual malice, she concluded.

Walters’ lawyer had asserted in oral arguments that “a prudent man would take care not to unleash a system on the public that makes up random false statements about others.” But a publisher is not negligent simply because it knows that it can make a mistake, Cason said.

Cason cited the opinion of OpenAI’s expert that the company leads the industry in efforts to reduce and avoid mistakes. OpenAI has also “taken extensive steps” to warn users about possible inaccuracies, Cason said.

And even if OpenAI had acted negligently, it would still be protected because Walters is a public figure, and OpenAI did not act with actual malice, Cason said.

Finally, Walters cannot recover because he did not incur actual damages, Cason concluded. Nor can he obtain punitive damages because he did not request a correction or a retraction, she said.

Law.com and Reuters are among the publications that covered the decision.

John Monroe, Walters’ lawyer, told Law.com in an email that he and his client “are reviewing the order and considering our options.”