Court Ruling Shields OpenAI from Libel Claims in Case Over AI-Generated Falsehood About Gun Rights Activist

Lawrenceville, Georgia — Judge Tracie Cason of the Gwinnett County Superior Court has handed down a significant ruling in Mark Walters v. OpenAI. The case arose when Walters, a prominent gun rights advocate, sued OpenAI after ChatGPT falsely told journalist Frederick Riehl that Walters had been accused of embezzlement.

The court granted OpenAI summary judgment, resting its decision on several independent grounds. The judge found that a reasonable reader in Riehl's position would not have understood the ChatGPT output as a statement of actual fact, an essential element of a libel claim. While the court did not hold that disclaimers make companies like OpenAI immune, it found that such language bears on whether a reasonable reader could interpret the output as factual under the circumstances.

According to the court's account of the facts, Riehl initially pasted sections of a civil complaint into ChatGPT and asked for a summary. When he then supplied a URL to the complaint, ChatGPT generated a new, incorrect summary claiming that Walters was the accused embezzler. Riehl testified that he quickly checked the claims and realized within hours that the AI's assertions were entirely fabricated.

The court further noted that Riehl, who was aware of ChatGPT's tendency to produce inaccurate information, would not have accepted the output as true without verifying it. Riehl acknowledged that he had previously received a press release that accurately described the complaint, further supporting the conclusion that he never actually believed the AI's statements about Walters.

The court also determined that Walters failed to show negligence on OpenAI's part, the minimum standard of fault required for any defamation claim. Walters, the ruling noted, offered no evidence of how OpenAI, as a publisher, could have acted more reasonably to ensure the accuracy of its outputs.

Expert testimony cited in the court documents indicated that OpenAI has implemented numerous measures to reduce erroneous outputs, positioning the company as an industry leader in AI reliability. That testimony stood in sharp contrast to the argument advanced by Walters' counsel, which in effect treated the mere release of a technology capable of error as negligence.

The court also found Walters to be a public figure, which placed a higher burden of proof on him: he needed to show that OpenAI acted with "actual malice," meaning that it knew the information was false at the time of publication or acted with reckless disregard for the truth. Walters' claims did not meet this threshold, as he presented no evidence that OpenAI was aware of any inaccuracy in the output it produced.

Finally, the court found that Walters had neither demonstrated actual damages nor made out a case for presumed damages. Under Georgia law, a plaintiff seeking punitive damages must request a correction or retraction before filing a defamation suit, a step Walters did not take.

The case raises important questions about how AI technologies fit into the legal framework governing libel and defamation. The outcome may set a precedent, particularly in cases involving public figures, and could shape how courts assess AI-generated content when factual accuracy is at issue.

OpenAI was represented by a team from multiple law firms, including Fellows LaBriola LLP and Gibson, Dunn & Crutcher LLP.

This article was automatically generated by OpenAI, and the people, facts, circumstances, and story may be inaccurate. Requests to remove, retract, or correct any article may be sent to contact@publiclawlibrary.org.