ChatGPT Vulnerability Discloses Chat Details, Goes Offline


ChatGPT remains offline after experiencing its first major privacy lapse. A ChatGPT vulnerability exposed brief descriptions of other users’ conversations to registered users.

ChatGPT’s chat history feature remained offline at the time of publishing this report, and the service itself was unavailable to non-Plus users.

According to a Bloomberg report, the issue was apparently caused by a ChatGPT vulnerability in an unnamed piece of open-source software. An investigation is underway to determine the exact cause of the incident, the report said.

ChatGPT is already in the cybersecurity news because of various instances of cybercriminals using it to generate malicious code.

Even though ChatGPT will not respond if it is directly asked to create a malicious script, it can be coaxed into doing so indirectly through manipulation or confusion, The Cyber Express reported earlier.

ChatGPT vulnerability and privacy risks

OpenAI temporarily took down the ChatGPT service early on 20 March, after users reported a vulnerability that permitted some users to see the titles of other registered ChatGPT users’ chat histories.

The titles were shown in the user-history sidebar, which appears on the left side of the ChatGPT website’s landing page, an OpenAI spokesperson told Bloomberg.

According to the Bloomberg report, the chatbot was briefly deactivated after OpenAI received these complaints about the ChatGPT vulnerability. The content of the other users’ conversations, however, remained hidden.

A Reddit user uploaded a picture showing descriptions of several ChatGPT chats that they claimed were not their own, while another user on Twitter shared a screenshot of the same problem, The Verge reported.

What makes the situation risky is the fact that many users enter confidential work and personal data into the chat window.

ChatGPT vulnerability and user discretion

ChatGPT feeds and grows on the information its users add to the platform. “Please don’t share any sensitive information in your conversations,” reads a section of the FAQ page on OpenAI’s website.

Given that the conversations may be used to train the chatbot, and there is no indication of whether the company is able to delete specific prompts from a person’s chat history, the situation raises serious privacy concerns.

Researchers have repeatedly warned against the practice of entering confidential work and business data into ChatGPT, which puts organisations at risk.

Cybersecurity firm Cyberhaven recently found that, of the 1.6 million workers using the company’s products, 4.9% had attempted at least once to copy and paste company information into ChatGPT.

Global organisations including Verizon, Softbank, JP Morgan, and Hitachi have blocked ChatGPT access on their office devices. In January, an attorney at Amazon warned the company’s employees against feeding confidential details into the chatbot.

ChatGPT vulnerability, data breach, and ramifications

“When a user inputs data of any kind into ChatGPT, they provide that data to be used by the tool provider – the US company OpenAI L.L.C,” wrote Polish lawyers Agnieszka Wachowska and Marcin Ręgorowicz.

The Service Terms and Terms of Use give the company the right to use the content fed to the chatbot for “the purpose of maintaining, developing, and upgrading its technology”, the lawyers noted.

“This does not only apply to the input data, i.e. the content input into ChatGPT by the user to obtain a summary and an abridged version, but the output data as well, i.e. the generated content, which in this case is a summary and abridged version of the text,” they wrote.

“Employees may be inputting sensitive and highly confidential information into ChatGPT, creating the risk that such data may be inadvertently or deliberately revealed to third parties or the general public, including through ChatGPT using such data when responding to other requests or through hacking,” Michael Harmer, Chairman of the Australian Institute of Employment Rights, wrote in February.

According to him, the disclosure or misuse of such data can have serious repercussions for companies, including not only the exposure of personally and/or financially sensitive and secret data, but also exposure to claims for damages and fines for legal violations.

When it comes to enforcing restraints, the exposure of commercially confidential data to ChatGPT may also be a pertinent factor, as courts may find that the data is no longer truly confidential and should not be protected in those circumstances, he noted.
