ChatGPT Faces Its First Lawsuit For Producing Incorrect Information

ChatGPT, despite all its potential, didn't command a particularly warm reception in Australia. In response to the rising tide of ChatGPT use in classrooms, a consortium of top Australian universities decided to double down on pen-and-paper exams to curb cheating and misuse risks. Now, the ChatGPT creator is staring at a lawsuit over providing inaccurate information about Brian Hood, the current mayor of Hepburn Shire Council, northwest of Melbourne.

According to a report from The Sydney Morning Herald, the conversational AI labeled Hood as one of the perpetrators in a foreign bribery scandal, when in reality, he was a whistleblower. When ChatGPT was asked about Hood's role in the case, the AI reportedly responded that Hood "was involved in the payment of bribes to officials in Indonesia and Malaysia" and that he also went to jail for his involvement in the saga. "I felt a bit numb. Because it was so incorrect, so wildly incorrect, that just staggered me," Hood was quoted as saying by the Australian publication.

A predictable affair for AI woes

According to a report from Reuters, Hood's lawyers sent a letter to OpenAI about the alarming issue, giving the company 28 days to fix the erroneous answer or face a defamation lawsuit in court. OpenAI has yet to issue a public response, but if the matter proceeds to trial, it would be the first defamation lawsuit the Microsoft-backed company has faced over its buzzy AI tech, which has found widespread adoption across the industry. If the legal tussle turns out in Hood's favor, the damages could amount to at least AUD 200,000, according to an expert cited by Reuters.

ChatGPT is no stranger to providing false and misleading information, and the problem persists to this day. OpenAI, for its part, has repeatedly acknowledged that its natural language model can produce inaccurate information. Part of this shortcoming can be attributed to the fact that the training dataset only extends through 2021. Plus, there are inherent technology issues such as AI hallucinations, which have yet to be resolved. Even President Biden said earlier this week that industry stakeholders need to ensure AI products do not compromise safety. Additionally, the likes of Elon Musk signed an open letter this year demanding a six-month pause on the development of AI systems more powerful than OpenAI's GPT-4 model.
