Artificial Intelligence & Robotics

Radio host faces hurdles in ChatGPT defamation suit


A lawsuit filed by a Georgia gun rights radio host alleges that he was defamed when OpenAI’s artificial intelligence platform ChatGPT made up fake accusations against him.

But radio host Mark Walters may face an uphill battle, according to experts interviewed by Bloomberg Law and Ars Technica.

Walters’ June 5 suit says ChatGPT made up the accusations when a journalist was using the chatbot to research a suit filed by the Second Amendment Foundation.

The journalist provided ChatGPT with a link to the Second Amendment Foundation suit and asked the chatbot to summarize it. ChatGPT falsely said the suit accused Walters of “defrauding and embezzling funds” from the Second Amendment Foundation while he was its treasurer and chief financial officer.

The statement was libelous per se, according to the suit, filed in state court in Gwinnett County, Georgia.

According to Walters’ suit, every so-called fact provided by ChatGPT about Walters was false. He was never accused of defrauding and embezzling from the Second Amendment Foundation, and he has never had an employment or other official relationship with the foundation.

ChatGPT also provided the journalist with a supposed copy of the Second Amendment Foundation suit.

It was “a complete fabrication and bears no resemblance to the actual complaint,” according to the suit.

The journalist, Fred Riehl, the editor-in-chief of the magazine AmmoLand, asked the founder of the Second Amendment Foundation about ChatGPT’s accusations, and the founder confirmed that they were false.

Megan Meier, a Clare Locke defamation lawyer, told Bloomberg Law that plaintiffs in Georgia are limited to actual economic losses if they don’t seek a retraction at least seven days before filing suit.

Riehl never published the false claims, which would likely limit economic damages in the case, according to Eugene Volokh, a First Amendment professor at the University of California at Los Angeles School of Law, who spoke with Bloomberg Law about the case.

Volokh blogged about the case at the Volokh Conspiracy. He said defamation liability is available in only two circumstances. The first is when the plaintiff can show that the defendant knew the statement was false, or knew it was likely false and recklessly disregarded that likelihood. The second applies when the plaintiff is a private figure, the defendant was negligent in making a false statement, and the plaintiff can prove actual damages.

The first theory is unavailable because there is no allegation that Walters put OpenAI on notice that ChatGPT was making false statements about him, Volokh said. The second theory is unavailable because there is no allegation of actual damages, he said.

Walters’ lawyer, John Monroe, told Bloomberg Law in an email: “I am not aware of a request for a retraction, nor the legal requirement to make one.”

“Given the nature of AI, I’m not sure there is a way to retract,” Monroe added.

Another issue is whether Section 230 of the Communications Decency Act would shield OpenAI from liability. Section 230 protects technology companies from being held liable for third-party content posted on their platforms.

“Many legal observers,” Bloomberg Law reports, “including the co-authors of Section 230, have argued that a program like ChatGPT falls outside the immunity.”

Volokh told Ars Technica that Section 230 may not shield AI companies because there is no immunity when a defendant materially contributes to alleged unlawfulness.

“An AI company, by making and distributing an AI program that creates false and reputation-damaging accusations out of text that entirely lacks such accusations, is surely ‘materially contribut[ing] to [the] alleged unlawfulness’ of that created material,” Volokh told Ars Technica.
