It has been a whirlwind few months for Peter Steinberger and his creation, OpenClaw. The AI tool, which acts as a personal assistant for developers, exploded in popularity, racking up 100,000 GitHub stars in less than a week. It even caught the eye of OpenAI’s Sam Altman, who recently brought Steinberger on board, calling him a genius. But according to researchers at Oasis Security, that rapid success came with a hidden danger.
The Oasis Security research team has just released details on ClawJacked (CVE-2026-25253), a significant vulnerability chain that effectively allowed any website to take over a person’s AI agent. Notably, this wasn’t a problem with a fancy plugin or a shady download; it was a flaw in the software’s main gateway. Because the tool is designed to trust connections from the user’s own computer, it left a door wide open for attackers.
The Silent Hijack
Oasis’s research revealed a clever trick involving WebSockets. Normally, your web browser is quite good at keeping one website from reaching into services running on your machine, thanks to the same-origin policy. However, WebSockets are an exception: browsers allow a page from any site to attempt a WebSocket connection, including to localhost, leaving it up to the receiving server to decide whom to trust.
According to researchers, the OpenClaw gateway assumed that if a connection was coming from the user’s own machine (localhost), it must be safe. However, this is a dangerous assumption; if a developer running OpenClaw accidentally landed on a malicious website, a hidden script on that page could quietly reach out through a WebSocket and talk directly to the AI tool running in the background. The user wouldn’t see a pop-up or warning.
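The standard defence against this kind of cross-site WebSocket hijacking is for the server to check the Origin header that browsers attach to every WebSocket handshake, rather than trusting the connection simply because it arrives from 127.0.0.1. Below is a minimal sketch of that check; the function name, port, and allowlist are all hypothetical illustrations, not OpenClaw’s actual code:

```python
# Hypothetical sketch: Origin-header allowlisting for a localhost gateway.
# Browsers send an Origin header on WebSocket handshakes, so a gateway can
# reject handshakes initiated by arbitrary websites even though the TCP
# connection itself comes from the local machine.

from typing import Optional
from urllib.parse import urlparse

# Assumed origin of the tool's own local UI (illustrative value).
TRUSTED_ORIGINS = {"http://localhost:8080", "http://127.0.0.1:8080"}

def is_trusted_origin(origin_header: Optional[str]) -> bool:
    """Accept a handshake only if it comes from the gateway's own UI origin."""
    if origin_header is None:
        # No Origin header means it was not a browser-initiated cross-site
        # request, but a cautious gateway can still refuse it.
        return False
    parsed = urlparse(origin_header)
    if parsed.scheme not in ("http", "https"):
        return False
    return origin_header in TRUSTED_ORIGINS
```

With a check like this in place, a script on a malicious page could still open the socket, but the handshake from `https://evil.example` would be rejected before any commands reach the agent.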
Proving the Threat
To show just how serious this was, the team built a proof-of-concept attack. From a completely unrelated website, their script successfully guessed the password, connected with full permissions, and began interacting with the AI agent, “all without the user seeing any indication that anything had happened.”
The speed of the attack was the most alarming part. The software imposed no limit on how many password attempts could be made from the same machine. Researchers noted in the blog post that they could guess hundreds of passwords every second, concluding that “a human-chosen password doesn’t stand a chance” against that kind of speed.
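Some back-of-the-envelope arithmetic shows why an unthrottled login falls so quickly. The guess rate below is an illustrative assumption in line with the researchers’ “hundreds per second” figure, not a measurement from the report:

```python
# Illustrative arithmetic: worst-case time to exhaust a password space
# at a fixed, unthrottled guess rate.

GUESSES_PER_SECOND = 500  # assumed rate, per "hundreds of passwords every second"

def seconds_to_exhaust(keyspace: int, rate: int = GUESSES_PER_SECOND) -> float:
    """Worst-case seconds to try every candidate in the keyspace."""
    return keyspace / rate

# A 4-digit PIN (10,000 candidates) is exhausted in 20 seconds.
pin_seconds = seconds_to_exhaust(10**4)

# Even a 6-character lowercase password (26**6, about 309 million
# candidates) falls in roughly a week at this rate.
word_days = seconds_to_exhaust(26**6) / 86_400
```

The lesson is that rate limiting (or lockout after a few failures) matters even for services that only listen on localhost, since the browser gives any website a path to that interface.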
The Fix
Once the script guessed the password, the attacker gained admin-level permission, and from this position, they could read private Slack messages, steal API keys, and even command the AI to search for and exfiltrate files from the computer.
Thankfully, the OpenClaw team’s response was remarkably fast: after being alerted to the flaw, they released a fix within 24 hours. If you are using this tool, update to version 2026.2.25 or later immediately to stay safe.
This news comes shortly after a separate issue earlier this month, where over 1,000 malicious skills were found in OpenClaw’s community marketplace, showing that hackers are specifically targeting this new technology.
Expert Perspectives
In response to the discovery, several security leaders shared insights with Hackread.com. Diana Kelley, Chief Information Security Officer at Noma Security, notes that this is a vital reminder that AI agents must be treated as highly privileged systems. “The core issue was misplaced trust in local connections. ‘Local’ does not automatically mean ‘safe,’” she explained. Kelley advises organisations to strictly review how their AI tools handle authentication and user approval.
Randolph Barr, Chief Information Security Officer at Cequence Security, points out that this flaw, dubbed “ClawJacked,” highlights a gap where product usefulness grew faster than security. “The design focused on making the developer experience as smooth as possible… this made adoption faster but also made defensive controls less effective,” Barr said. He warns that in the age of AI, a quick patch might not be enough, as these agents often have the authority to act with the full permissions of the user.
Mark McClain, Chief Executive Officer at SailPoint, concludes that this incident should be a wake-up call for identity security. “These agents are no longer just tools for communication. They are powerful, always-on identities embedded in critical workflows,” McClain said. He stresses that organisations must treat AI agents as “first-class citizens” in their security frameworks, applying the same rigour to them as they do to human employees.



