Refuting Group-IB’s Threat Intelligence report alleging a ChatGPT data breach, OpenAI, the company behind ChatGPT, has attributed the credential leak to “existing commodity malware” on ChatGPT users’ devices.
The Cyber Express previously covered the Group-IB report on the alleged ChatGPT data breach, which identified over 100,000 infected devices containing compromised ChatGPT credentials.
Upon learning of the alleged breach, The Cyber Express reached out to OpenAI for an update.
ChatGPT data breach update
In response to the query, the OpenAI communications team said that ChatGPT was not hacked and that the company is currently investigating the exposed accounts.
“The findings from Group-IB’s Threat Intelligence report result from commodity malware on people’s devices and not an OpenAI breach. We are currently investigating the accounts that have been exposed,” an OpenAI spokesperson told The Cyber Express.
“OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.”
On Tuesday, Group-IB reported that 101,134 infected devices contained compromised ChatGPT credentials. According to the report, these credentials were found in the logs of info-stealing malware.
Group-IB observed a peak of 26,802 compromised ChatGPT accounts, with the Asia-Pacific region accounting for the largest number of hacked credentials offered for sale on dark web forums.
OpenAI, for its part, maintains that these credentials were not exposed through a breach of ChatGPT itself but were harvested by third-party malware installed on users’ devices and then offered on the dark web.
This is an ongoing story, and The Cyber Express will closely follow further developments.
ChatGPT data breach: The latest in the list
Concerns about ChatGPT’s implications for cybersecurity have existed since its launch, sparking discussions among security leaders and experts.
Several organizations, including Amazon, JPMorgan Chase & Co., and Bank of America, have restricted or blocked access to the AI software due to security concerns.
Italy became the first country to ban ChatGPT in April 2023, citing the alleged unlawful collection of user data.
A report from Check Point Research (CPR) highlighted an increase in the trade of stolen ChatGPT Premium accounts, allowing cybercriminals to bypass geofencing restrictions and gain unlimited access.
The market for stolen ChatGPT accounts has grown, with cybercriminals using brute-forcing tools and account checkers to break into accounts. Additionally, “ChatGPT Accounts as a Service” offerings have emerged, selling stolen premium accounts that are often registered with stolen payment cards.
The demand for stolen ChatGPT accounts, especially premium ones, has increased due to the ability to bypass restrictions and access premium features.
Cybercriminals exploit this demand on dark web forums, where stolen accounts are sold or even shared for free to promote other malicious services.
ChatGPT accounts store the queries of their owners, providing cybercriminals with access to personal information, corporate details, and more when they steal existing accounts. This poses significant privacy concerns and further fuels the trade of stolen ChatGPT accounts.
Malicious actors take advantage of users’ habit of reusing passwords across multiple platforms. They use account checkers to test leaked email-and-password combinations against login pages, leading to account takeovers. Underground forums have seen increased discussion of leaking or selling compromised ChatGPT Premium accounts.
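The mechanics behind this are simple: a combination list leaked from one breach is replayed against another service, and any reused password yields a takeover. The sketch below illustrates the idea with entirely made-up data and a mock login check; no real service or account is involved.

```python
# Illustration (hypothetical data): why password reuse enables credential stuffing.
# Combos leaked from an unrelated breach, as (email, password) pairs.
leaked_combos = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correct-horse"),
    ("carol@example.com", "p@ssw0rd!"),
]

# Mock of a second, unrelated service's credential store.
# Only Alice reused her breached password here.
service_accounts = {
    "alice@example.com": "hunter2",
    "bob@example.com": "a-unique-password",
}

def login_succeeds(email: str, password: str) -> bool:
    """Simulate the login check that an account checker automates at scale."""
    return service_accounts.get(email) == password

# The attacker's "checker" loop: replay every leaked combo.
takeovers = [email for email, pw in leaked_combos if login_succeeds(email, pw)]
print(takeovers)  # only the reused credential matches
```

Unique passwords (or a second authentication factor) break this replay step entirely, which is why OpenAI’s advice to use strong, unique passwords directly addresses the attack.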
Cybercriminals employ tools like SilverBullet, a web testing suite, for credential stuffing and account checking attacks against various websites, including OpenAI’s platform.
They offer configuration files for automated attacks, allowing for the mass-scale theft of accounts. Some cybercriminals specialize in abusing and defrauding ChatGPT products, offering related services for sale.