Mark Walters sues ChatGPT maker OpenAI over embezzlement ‘hallucination’


OpenAI has been slapped with its first-ever defamation lawsuit after a ChatGPT “hallucination” generated a bogus embezzlement claim against a Georgia radio host.

Mark Walters was shocked to learn ChatGPT had fabricated a claim accusing him of “defrauding and embezzling” funds from the Second Amendment Foundation (SAF) in a suit brought by the pro-gun group’s boss, Alan Gottlieb, against the state of Washington, according to the complaint filed in Georgia state court on June 5.

Walters, the CEO of CCW Broadcast Media who hosts a pair of pro-gun radio shows, claims he was the victim of an AI “hallucination” — a phenomenon in which bots like ChatGPT generate false information.

On May 4, ChatGPT was asked by Fred Riehl, the editor-in-chief of pro-gun outlet AmmoLand, to help him summarize the case involving “The Second Amendment Foundation v. Robert Ferguson,” according to the complaint.

The popular chatbot immediately spat out a fabricated 30-page response that implicated Walters in the case and identified him as the SAF’s treasurer and chief financial officer, the suit said.

Walters has never worked for the SAF and had nothing to do with the suit at all, he claimed in court documents.

“Every statement of fact in the summary pertaining to Walters is false,” the suit stated.

The case actually involved the SAF accusing Washington state’s Attorney General Bob Ferguson of abusing his power by stopping the activities of the gun rights group.

Riehl called Gottlieb, who confirmed that ChatGPT’s assertions were incorrect, according to the complaint.


Radio host Mark Walters is suing OpenAI for defamation after the chatbot falsely named him in a lawsuit he wasn’t actually involved in and claimed he embezzled money from a pro-gun foundation.

But ChatGPT allegedly doubled down on its claim after Riehl asked it to provide an exact passage of the lawsuit mentioning Walters.

“Certainly. Here is the paragraph from the complaint that concerns Walters,” the AI wrote, according to the complaint.

“Defendant Mark Walters (‘Walters’) is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Office of SAF since at least 2012. Walters has access to SAF’s bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF’s board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF’s funds and assets for his own benefit, and manipulating SAF’s financial records and bank statements to conceal his activities.”

The lawsuit called ChatGPT’s output “malicious,” saying it tended “to injure Walters’ reputation and exposing him to public hatred, contempt or ridicule.”

He is seeking financial damages in an amount to be determined at trial.

The Post reached out to OpenAI and Walters’ lawyer for comment.


Walters claims he was the victim of the “hallucination” phenomenon, in which AI-powered chatbots generate seemingly realistic scenarios that do not correspond to real life.

Google CEO Sundar Pichai, whose company has released a ChatGPT rival called Bard, warned against the problem of hallucinations by AI during a CBS “60 Minutes” interview in April.

He described scenarios in which Google’s own AI programs have developed “emergent properties” — or learned unanticipated skills in which they were not trained.

The “hallucination” phenomenon has fueled calls by experts for greater government regulation of the emerging technology.

OpenAI CEO Sam Altman has called on Congress to implement guardrails around artificial intelligence, citing its potential for “causing significant harm to the world” if it goes unregulated.

“If this technology goes wrong, it can go quite wrong and we want to be vocal about that,” Altman said at a hearing of the Senate subcommittee on privacy, technology and the law last month.

Elon Musk even advocated for a full-blown pause in further developing AI models, warning of the systems’ “profound risks to society and humanity.”


Source: New York Post
