Radio host sues OpenAI for defamation after ChatGPT generated false info

ChatGPT generated the false information in response to a request from a journalist named Fred Riehl

The logo of Open AI on a phone and the text ChatGPT in the background (DW)

IANS

Microsoft-backed OpenAI has been sued by a radio host in the US, in what appears to be the first defamation lawsuit over false information generated by ChatGPT.

Mark Walters sued the Sam Altman-run company after ChatGPT falsely stated that Walters had been accused of defrauding and embezzling funds from a non-profit organisation, the Second Amendment Foundation (SAF), reports The Verge.

ChatGPT generated the false information in response to a request from a journalist named Fred Riehl.

ChatGPT responded: "Mark Walters is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF's bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF's board of directors."

The AI chatbot further stated that Walters owes SAF a fiduciary duty of loyalty and care.

"Walters has breached these duties and responsibilities by, among other things, embezzlement and misappropriation of SAF's funds and assets for his own benefit, and manipulating SAF's financial records and bank statements to conceal his activities," ChatGPT stated, information that is false, according to the lawsuit.


Walters is now seeking unspecified monetary damages from OpenAI, the report said.

Meanwhile, two lawyers told a judge in Manhattan federal court this week that ChatGPT tricked them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by ChatGPT.

Last month, a US federal judge categorically told lawyers that he will not allow any AI-generated content in his court.

Texas federal judge Brantley Starr said that any attorney appearing in his court must attest that "no portion of the filing was drafted by generative artificial intelligence," or if it was, that it was checked "by a human being," reports TechCrunch.

In April, as part of a research study, ChatGPT falsely placed an innocent and highly respected law professor in the US on a list of legal scholars who had sexually harassed students in the past.

Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, said he was shocked to learn that ChatGPT had named him in response to a researcher's query about legal scholars who had sexually harassed someone.
