
Radio host files defamation lawsuit against OpenAI over false information produced by ChatGPT

This lawsuit marks the first instance of legal action taken against OpenAI in response to defamatory content generated by ChatGPT.

Edited By: Vishal Upadhyay San Francisco Published : Jun 11, 2023 18:42 IST, Updated : Jun 11, 2023 18:42 IST

Radio host Mark Walters has filed a defamation lawsuit against OpenAI, the Microsoft-backed company behind ChatGPT. Walters alleges that false information generated by ChatGPT harmed his reputation. According to The Verge, ChatGPT falsely claimed that Walters had defrauded and embezzled funds from a non-profit organization.

The incident unfolded when journalist Fred Riehl asked ChatGPT for information about Mark Walters. In response, the AI chatbot fabricated details, stating that Walters was responsible for financial misappropriation within an organization he was associated with, among other allegations. Walters, disputing the accuracy of the information, has taken legal action and is seeking unspecified monetary damages from OpenAI.

This lawsuit marks the first instance of legal action taken against OpenAI in response to defamatory content generated by ChatGPT. It raises questions about the accountability and reliability of AI-generated information and its potential impact on individuals' reputations. 


In a separate incident, attorneys Steven A. Schwartz and Peter LoDuca reportedly face potential sanctions after ChatGPT misled them into citing fictitious legal research in a court filing. The attorneys unknowingly included references to non-existent court cases that ChatGPT had generated and that Schwartz believed were genuine.

These events have sparked concern among legal professionals and prompted US federal judge Brantley Starr to issue a strict directive against the use of AI-generated content in his court. Judge Starr now requires attorneys appearing before him to certify either that no portion of their filing was drafted by generative artificial intelligence or, if it was, that it was thoroughly reviewed by a human.

In April, as part of a research study on legal scholars, ChatGPT falsely named Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, on a list of academics who had sexually harassed students. Turley, a highly respected professor, said he was shocked to learn that the chatbot had fabricated the allegation against him.


Inputs from IANS
