ChatGPT user data leaked because of software bug


OpenAI's ChatGPT has hit the news headlines after user information was exposed on the web; threat actors claim to have accessed and stolen data from the platform's database by exploiting a software bug.

As a result, payment details of those using the conversational AI, including the last four digits of credit card numbers, card expiration dates, first and last names, and email addresses, were visible to others. OpenAI acknowledged the issue in a statement released on Sunday, announcing that information belonging to about 1.2% of its subscribers was exposed and may have been stolen by hackers.

OpenAI was founded in 2015 by a group of investors led by Sam Altman, who is now its CEO. ChatGPT, its large language model, is trained on massive amounts of data to generate conversational responses, and Microsoft is funding its further development.

The platform currently serves as a general-purpose text generator, capable of producing poems, articles, and other written works, including academic content.

Last Monday, a tech enthusiast on Twitter warned ChatGPT users that personal information was leaking from the platform due to a bug in the open-source Redis client library, redis-py. The bug also exposed data belonging to users who were generating content on various topics, and that content may have reached hackers, though this remains only a possibility.
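To illustrate the general class of failure being described, the Python sketch below is a hypothetical, self-contained model of how a shared connection that answers requests in order can hand one user's reply to another user if a request is cancelled after it was sent but before its reply was read. It does not use redis-py itself, and the names (SharedConnection, fetch) are invented for illustration only.

import asyncio

class SharedConnection:
    """Toy connection: requests are sent and replies are read back in order."""

    def __init__(self):
        self._replies = asyncio.Queue()

    async def send(self, request: str):
        # The pretend "server" immediately queues a reply for this request.
        await self._replies.put(f"reply-to:{request}")

    async def read_reply(self) -> str:
        # Replies come back strictly in the order requests were sent.
        return await self._replies.get()

async def fetch(conn: SharedConnection, request: str, cancel_early: bool = False) -> str:
    await conn.send(request)
    if cancel_early:
        # Simulate a client-side cancellation: the request was already sent,
        # but its reply is never consumed and stays on the connection.
        raise asyncio.CancelledError()
    return await conn.read_reply()

async def main():
    conn = SharedConnection()

    # User A's request is cancelled mid-flight; its reply is left unread.
    try:
        await fetch(conn, "user-A:billing-info", cancel_early=True)
    except asyncio.CancelledError:
        pass

    # User B reuses the same connection and receives user A's stale reply.
    reply_for_b = await fetch(conn, "user-B:chat-history")
    print(reply_for_b)  # prints "reply-to:user-A:billing-info"

asyncio.run(main())

In this toy model, the fix would be to discard or drain a connection whose request was cancelled before reuse, which mirrors the kind of cleanup a pooled client library needs after an interrupted request.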

NOTE: For the past two weeks, some education-focused media outlets have warned that the AI-based chatbot might encourage students to cheat on exams or essays, threatening academic integrity.
