Anonymous Sudan, a self-proclaimed hacktivist group, allegedly launched a cyberattack on OpenAI, the artificial intelligence (AI) giant, on February 14, 2024, disrupting its ChatGPT service.
The alleged cyberattack on ChatGPT, carried out through a distributed denial-of-service (DDoS) method, caused outages as evidenced by screenshots shared by the group on Telegram.
In a post by Anonymous Sudan, the hacker collective criticized OpenAI’s security measures, alleging weak protection and blaming Cloudflare’s security services for failing to withstand the attack.
“ChatGPT, you cannot fix your poor protection? Thank you Cloudflare for the worst protection,” the group wrote in a post shared by its leader.
Another screenshot provided by the group displayed error messages on ChatGPT and outage notices on the OpenAI website, indicating the severity of the disruption.
Why is Anonymous Sudan Targeting OpenAI?
This alleged cyberattack on ChatGPT is not merely a random act of hacking but is deeply rooted in political motivations. Anonymous Sudan explicitly stated its rationale for targeting OpenAI, citing the company’s perceived support for Israel amidst the ongoing conflict between Israel and Hamas.
The group’s demands go beyond cyber disruption: it is calling for specific actions from OpenAI, including the removal of Tal Broda, Head of the Research Platform at OpenAI.
“Attacks will continue if above issues weren’t resolved, especially firing Tal Broda,” said Anonymous Sudan in the post.
Furthermore, the group denounced OpenAI’s collaboration with Israel, particularly highlighting CEO Sam Altman’s expressed interest in investing in the country and his meetings with Israeli officials, including Prime Minister Benjamin Netanyahu.
According to Reuters, Altman’s remarks during his visit to Israel in January 2024, where he emphasized the country’s potential role in mitigating risks associated with artificial intelligence, served as a catalyst for Anonymous Sudan’s ire.
The use of AI in the development of weaponry and in intelligence operations by agencies such as Mossad further exacerbated tensions, with Anonymous Sudan condemning what it perceives as OpenAI’s complicity in the oppression of Palestinians.
Moreover, the group’s animosity extends beyond OpenAI’s collaboration with Israel, targeting American companies in general. This broad anti-American sentiment highlights Anonymous Sudan’s larger ideological agenda.
Lastly, Anonymous Sudan pointed out ChatGPT’s alleged bias towards Israel and against Palestine, citing instances of bias observed on platforms like Twitter. The group contends that such bias undermines the model’s credibility and must be addressed.
The true motives behind this alleged attack, whether it serves merely as a tactic to draw attention and convey a message to OpenAI or whether deeper motivations are at play, will only become clear once officials release a statement on the matter. Despite attempts by The Cyber Express to seek clarification from OpenAI officials, no response had been received at the time of this report.
Anonymous Sudan’s Previous Cyberattacks on ChatGPT
This incident is not the first time Anonymous Sudan has targeted OpenAI; in 2023, the group launched multiple attacks on ChatGPT.
In May 2023, Anonymous Sudan claimed responsibility for an assault on the American artificial intelligence company’s website. The pattern suggests that the latest cyberattack on OpenAI is not an isolated event, hinting at further attacks to come and raising questions about whether the hacktivist group and the AI company have been in contact.
In another incident, in November 2023, OpenAI purportedly became the target of a cyberattack by Anonymous Sudan, carried out in collaboration with a partner known as “Skynet.”
Numerous users encountered difficulties logging into their ChatGPT accounts, prompting them to voice concerns on X, the platform formerly known as Twitter.
The group claimed to have executed a DDoS attack against OpenAI’s login portal, but the veracity of these claims remains unverified by official sources.
In December 2023, Anonymous Sudan once again declared a direct cyberattack on OpenAI. In a Telegram post, the collective shared details of the attack and demanded the dismissal of Tal Broda, Head of the Research Platform at OpenAI, accusing him of supporting genocide.
The hackers persist in posing a threat to ChatGPT, pledging to continue their attacks until their demands regarding Tal Broda and alleged dehumanizing views on Palestinians are met.
OpenAI’s Cybersecurity Commitment
In response to the allegations, OpenAI has not issued an official statement. However, the company recently released a blog post discussing the termination of accounts associated with state-affiliated threat actors, emphasizing their commitment to cybersecurity. The blog highlighted collaborative efforts with Microsoft Threat Intelligence to disrupt malicious activities by identified threat actors.
The terminated accounts reportedly belonged to state-affiliated groups, including those from China, Iran, North Korea, and Russia.
These actors allegedly attempted to utilize OpenAI services for various malicious activities, such as researching companies, translating technical papers, and scripting support for phishing campaigns.
The repeated targeting of OpenAI by Anonymous Sudan raises questions about the underlying motives driving these cyberattacks. While the group claims to advocate for various causes, including accountability and justice, their methods and demands remain controversial.
The lack of official confirmation from OpenAI regarding the alleged cyberattacks adds another layer of complexity to the situation, leaving room for speculation and uncertainty.