
Clawdbot, the surging open-source AI agent gateway, faces escalating security concerns, with hundreds of unauthenticated instances exposed online and multiple code flaws that enable credential theft and remote code execution.
Clawdbot is an open-source personal AI assistant that integrates with messaging platforms like WhatsApp, Telegram, Slack, Discord, Signal, and iMessage.
It features a Gateway for control plane operations, including WebSocket handling, tool execution, and credential management, and a web-based Control UI for configuration, conversation history, and API key management.
Deployed via npm on Node.js ≥22, it defaults to loopback binding on port 18789 but supports remote access via Tailscale or reverse proxies like nginx/Caddy.
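In a reverse-proxy deployment, a public-facing process terminates the connection and relays it to the loopback-bound gateway, so from the gateway's perspective every request arrives from 127.0.0.1. The sketch below uses a bare Node proxy in place of nginx or Caddy purely to illustrate that traffic path; only the upstream port comes from the defaults above, and everything else is illustrative.

```typescript
// Minimal illustrative reverse proxy (plain HTTP; WebSocket upgrades and TLS omitted).
// In real deployments this role is played by nginx, Caddy, or a tunnel service.
import * as http from "node:http";

const UPSTREAM_HOST = "127.0.0.1"; // loopback-bound Clawdbot gateway
const UPSTREAM_PORT = 18789;       // default gateway port per the report

const proxy = http.createServer((clientReq, clientRes) => {
  // From the gateway's point of view this connection originates from
  // 127.0.0.1 (the proxy itself), not from the real client on the internet.
  const upstreamReq = http.request(
    {
      host: UPSTREAM_HOST,
      port: UPSTREAM_PORT,
      method: clientReq.method,
      path: clientReq.url,
      headers: {
        ...clientReq.headers,
        // Overwrite (never append) the forwarding header so a client cannot
        // spoof it; the gateway only uses this value if it is configured to
        // trust this proxy.
        "x-forwarded-for": clientReq.socket.remoteAddress ?? "unknown",
      },
    },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );

  upstreamReq.on("error", () => {
    clientRes.writeHead(502);
    clientRes.end("upstream error");
  });

  clientReq.pipe(upstreamReq);
});

// Public-facing listener; in practice this sits behind TLS termination.
proxy.listen(8080);
```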
Security researcher Jamieson O’Reilly detailed the issue in a January 23, 2026, X thread, highlighting misconfigurations in this popular open-source AI agent gateway.
O’Reilly used Shodan to query for the Control UI’s unique HTML title tag, “Clawdbot Control,” and found hundreds of public instances shortly after deployment.
Services like Shodan and Censys index HTTP fingerprints, such as favicons or distinctive page titles, enabling rapid discovery of newly deployed instances. Similar scans revealed more than 300 exposed Gateways on port 18789, many of them unauthenticated.

While some instances had authentication enabled, others left configurations, Anthropic API keys, Telegram and Slack tokens, and months of chat histories fully accessible.

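As an illustration of how such fingerprint discovery works (a sketch, not O'Reilly's actual tooling): Shodan's search API accepts filters like http.title, so a single query for the Control UI's title string surfaces every indexed instance. The result fields and error handling below follow Shodan's documented REST interface but should be treated as assumptions.

```typescript
// Sketch: enumerate hosts whose HTTP title matches the Control UI fingerprint.
// Requires a Shodan API key; field names follow Shodan's documented search API.

const SHODAN_KEY = process.env.SHODAN_API_KEY ?? "";
const query = 'http.title:"Clawdbot Control"'; // fingerprint from the report

async function findExposedInstances(): Promise<void> {
  const url =
    "https://api.shodan.io/shodan/host/search" +
    `?key=${encodeURIComponent(SHODAN_KEY)}&query=${encodeURIComponent(query)}`;

  const res = await fetch(url);
  if (!res.ok) throw new Error(`Shodan query failed: ${res.status}`);

  const data = (await res.json()) as {
    total: number;
    matches: { ip_str: string; port: number }[];
  };

  console.log(`Indexed instances matching fingerprint: ${data.total}`);
  for (const m of data.matches) {
    console.log(`${m.ip_str}:${m.port}`);
  }
}

findExposedInstances().catch(console.error);
```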
The issue stems from localhost auto-approval in Clawdbot's auth logic: connections from the local machine are trusted automatically, a convenience intended for local development that becomes exploitable behind reverse proxies. Because a proxy forwards traffic from 127.0.0.1, and gateway.trustedProxies defaults to empty (so X-Forwarded-For headers are ignored), every proxied request appears to originate locally and bypasses the checks.
O'Reilly confirmed this in the source code: socket addresses appear local, granting automatic access to the WebSocket interface and Control UI. A GitHub issue documents the same behavior for Control UI exposure. O'Reilly submitted a hardening PR, and the docs now recommend setting trustedProxies: ["127.0.0.1"] and having the proxy overwrite forwarding headers so clients cannot spoof them.
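The mechanics are easier to see in code. The following is a simplified reconstruction of the pattern described above, not Clawdbot's actual source; function and option names are illustrative.

```typescript
// Simplified reconstruction of the trust decision described in the report.
// Function and option names are illustrative, not Clawdbot's actual API.
import type { IncomingMessage } from "node:http";

interface GatewayConfig {
  trustedProxies: string[]; // defaults to [] per the report
}

function effectiveClientAddress(req: IncomingMessage, cfg: GatewayConfig): string {
  const socketAddr = req.socket.remoteAddress ?? "";

  // Only honor X-Forwarded-For when the directly connected peer is a proxy
  // we explicitly trust; otherwise the header could be spoofed by anyone.
  if (cfg.trustedProxies.includes(socketAddr)) {
    const forwarded = req.headers["x-forwarded-for"];
    const first = Array.isArray(forwarded) ? forwarded[0] : forwarded;
    if (first) return first.split(",")[0].trim();
  }
  return socketAddr;
}

function isAutoApproved(req: IncomingMessage, cfg: GatewayConfig): boolean {
  const addr = effectiveClientAddress(req, cfg);
  // Localhost auto-approval: convenient for local development, but with
  // trustedProxies left empty, every request relayed by a local reverse
  // proxy resolves to 127.0.0.1 here and skips authentication entirely.
  return addr === "127.0.0.1" || addr === "::1" || addr === "::ffff:127.0.0.1";
}
```

With trustedProxies set to ["127.0.0.1"] and the proxy overwriting X-Forwarded-For, the same check resolves to the real client address and authentication applies as intended.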
Attack Impacts
Exposed servers enable severe compromise. Read access alone yields credentials (API keys, OAuth secrets) and full conversation histories with attachments. Attackers also inherit the agent's agency: sending messages, executing tools, or manipulating what users see by filtering responses.
| Access Type | Compromised Assets | Exploitation Examples |
|---|---|---|
| Configuration Read | API keys, bot tokens, signing secrets | Credential theft for Anthropic, Telegram, Slack |
| Conversation History | Private messages, files | Exfiltrate months of data |
| Command Execution | Root shell access | Run arbitrary commands on the host |
| Signal Integration | Device linking URIs | Pair the attacker’s phone for full access |
Some instances ran as root inside containers, allowing arbitrary host commands without authentication.
Clawdbot's documentation urges running clawdbot security audit --deep to flag exposures, and tightening DM/group policies and permissions. For proxy deployments, enable gateway.auth.mode: "password" via the CLAWDBOT_GATEWAY_PASSWORD environment variable and configure trusted proxies. Rotate secrets after any exposure: gateway auth tokens, model API keys, and channel credentials.
Prefer Tailscale Serve/Funnel or Cloudflare Tunnels over binding the gateway directly to a public interface. The latest release (2026.1.14-1, January 15) predates the reports; run clawdbot doctor to apply configuration migrations.
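Beyond the built-in audit, a quick external probe can confirm whether the gateway port answers from the public internet at all. The script below is illustrative and not part of the Clawdbot CLI; the hostname is a placeholder, and the title check reuses the fingerprint from the report.

```typescript
// Illustrative external exposure check; not part of the Clawdbot tooling.
// Set PUBLIC_HOST to your server's public hostname or IP before running.

const PUBLIC_HOST = process.env.PUBLIC_HOST ?? "example.com"; // placeholder
const GATEWAY_PORT = 18789; // default gateway port per the report

async function checkExposure(): Promise<void> {
  const url = `http://${PUBLIC_HOST}:${GATEWAY_PORT}/`;
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    const body = await res.text();
    if (body.includes("Clawdbot Control")) {
      console.warn(`Control UI is reachable from the internet at ${url}; verify auth and restrict access.`);
    } else {
      console.log(`Port responds (${res.status}) but the Control UI title was not found at ${url}.`);
    }
  } catch {
    console.log(`No response on ${url}; the port appears closed or filtered.`);
  }
}

checkExposure();
```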
Users should audit their exposure immediately: AI agents concentrate high-value assets such as API keys, chat histories, and device access in one place, which demands proxy hardening and least-privilege defaults.
