A high-severity vulnerability in PraisonAI is drawing urgent attention after security researchers observed exploitation attempts within hours of public disclosure.
The flaw, tracked as CVE-2026-44338 and documented in the GitHub advisory GHSA-6rmh-7xcm-cpxj, exposes a critical authentication bypass in the platform’s legacy API server, potentially allowing attackers to execute AI workflows without credentials.
PraisonAI Vulnerability
The issue affects PraisonAI versions 2.5.6 through 4.6.33. According to the advisory, the root cause lies in a legacy Flask-based API server that ships with authentication disabled by default.
Specifically, the configuration hard-codes AUTH_ENABLED = False and AUTH_TOKEN = None, effectively removing all access control protections.
This design flaw causes the server’s authentication check to always return true, allowing any remote user who can reach the API to interact with sensitive endpoints. The vulnerability is particularly dangerous because the server binds to 0.0.0.0:8080 by default, exposing the API on every network interface of the host.
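The flawed pattern can be sketched in a few lines. This is an illustrative reconstruction based on the advisory's description, not PraisonAI's actual code; the function name is hypothetical, while AUTH_ENABLED and AUTH_TOKEN mirror the hard-coded values quoted above.

```python
# Hard-coded defaults as described in the advisory.
AUTH_ENABLED = False
AUTH_TOKEN = None

def is_authenticated(request_token):
    """Illustrative sketch: with auth disabled, the token is never checked."""
    if not AUTH_ENABLED:
        return True  # short-circuits: every caller is treated as authenticated
    return request_token is not None and request_token == AUTH_TOKEN

# Both a missing token and a wrong token sail through.
print(is_authenticated(None))           # True
print(is_authenticated("wrong-token"))  # True
```

Because the guard short-circuits before any comparison, setting a token alone would not help; authentication must actually be enabled.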
Attackers can abuse two key endpoints:
- GET /agents – Retrieves metadata about configured AI agents.
- POST /chat – Triggers execution of workflows defined in the agents.yaml file.
Notably, the /chat endpoint only requires a JSON request containing a message field, but the input itself is ignored. Instead, the server executes the predefined workflow directly using PraisonAI(agent_file="agents.yaml").run().
Security researchers confirmed that both endpoints respond successfully without any Authorization header, returning HTTP 200 responses. This confirms a complete authentication bypass rather than a misconfiguration.
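The researchers' probe can be reproduced locally with a minimal stand-in server that, like the vulnerable legacy API, never inspects the Authorization header. The handler below is a simulation for demonstration purposes only, not PraisonAI's code; the endpoint paths and the "body read but ignored" behavior follow the advisory's description.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class NoAuthHandler(BaseHTTPRequestHandler):
    """Simulated vulnerable server: responds 200 with no credential check."""

    def _respond(self, payload):
        body = json.dumps(payload).encode()
        self.send_response(200)  # 200 regardless of any Authorization header
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/agents":
            self._respond({"agents": ["example-agent"]})

    def do_POST(self):
        if self.path == "/chat":
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)  # request body is read but ignored
            self._respond({"status": "workflow executed"})

    def log_message(self, *args):  # silence request logging
        pass

# Bind to loopback on an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), NoAuthHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Probe both endpoints with no Authorization header at all.
agents_status = urlopen(f"http://127.0.0.1:{port}/agents").status
req = Request(
    f"http://127.0.0.1:{port}/chat",
    data=json.dumps({"message": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
chat_status = urlopen(req).status
server.shutdown()

print(agents_status, chat_status)  # both 200: complete bypass
```

Against a real vulnerable deployment, the equivalent unauthenticated GET and POST requests would return HTTP 200 in the same way.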
The vulnerability allows unauthenticated users to:
- Execute AI workflows remotely without permission.
- Enumerate agent configurations and internal metadata.
- Consume API or model usage quotas, potentially leading to financial loss.
- Access outputs generated by backend workflows, which may include sensitive data.
While the flaw does not directly enable prompt injection, its impact depends heavily on how the agents.yaml workflow is configured. In environments where workflows perform privileged actions, the risk escalates significantly.
Further compounding the issue, PraisonAI’s deployment configurations also promote insecure defaults. The API configuration model sets auth_enabled to false by default, and sample deployment templates recommend binding to 0.0.0.0 with authentication disabled.
Although a newer serve agent command offers improved security by binding to localhost (127.0.0.1) and supporting API keys, the vulnerable legacy server remains included in production releases up to version 4.6.33.
The vulnerability has been fixed in version 4.6.34. Users are strongly advised to upgrade immediately.
For those unable to patch right away, mitigation steps include:
- Restrict network access to the API server using firewalls.
- Avoid exposing the service to the public internet.
- Manually enable authentication mechanisms if possible.
- Transition to the newer, more secure serve agent deployment method.
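For the third step, a hardened configuration would look roughly like the following. This is a hedged sketch: AUTH_ENABLED and AUTH_TOKEN mirror the setting names quoted in the advisory, while BIND_HOST and BIND_PORT are illustrative names, not confirmed PraisonAI identifiers.

```python
import secrets

# Enable authentication and require a strong token
# (in practice, load the token from an environment variable or secret store).
AUTH_ENABLED = True
AUTH_TOKEN = secrets.token_urlsafe(32)

# Bind to loopback only, never 0.0.0.0, unless a firewall restricts access.
BIND_HOST = "127.0.0.1"
BIND_PORT = 8080
```

Note that enabling authentication and restricting the bind address are complementary: the token protects the endpoints, while the loopback binding keeps the service off untrusted networks entirely.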
This incident highlights a broader trend where AI platforms ship with insecure defaults, making them attractive targets for opportunistic attackers.
The rapid exploitation observed in this case underscores how quickly threat actors can weaponize newly disclosed vulnerabilities, especially those that require no authentication.
Organizations deploying AI infrastructure should audit exposed services and prioritize secure configurations to prevent similar incidents.