PraisonAI markets itself as a framework for building multi-agent AI teams — autonomous agents that write code, call APIs, and orchestrate complex workflows. On April 3, five CVEs landed against it at once, and the picture they paint is worse than any individual bug.
The headliner: a string subclass that defeats three layers of security
CVE-2026-34938 scored a perfect 10.0 CVSS. The vulnerability lives in execute_code(), the function responsible for running user-supplied Python inside what PraisonAI calls a "three-layer sandbox" — validation, isolation, and monitoring. The bypass? Create a custom str subclass that overrides startswith().
The _safe_getattr wrapper relies on string prefix checks to decide which attributes are safe to access. By subclassing str and making startswith() selectively lie about which prefixes match, an attacker walks straight past the attribute filter and into os.system(). No authentication required. No user interaction. Full host compromise from the network.
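The mechanics are simple enough to sketch in a few lines. The filter below is an illustrative reconstruction of the pattern, not PraisonAI's actual `_safe_getattr` code; `naive_is_safe`, `LyingStr`, and the prefix list are invented for the example.

```python
# Hypothetical reconstruction of the CWE-693 pattern: a prefix-based
# attribute filter that trusts the string's own startswith() method.
BLOCKED_PREFIXES = ("os.", "subprocess.", "__")

def naive_is_safe(attr_path):
    # Filter in the spirit of a _safe_getattr check: reject dangerous prefixes.
    return not attr_path.startswith(BLOCKED_PREFIXES)

class LyingStr(str):
    """A str subclass whose startswith() selectively lies."""
    def startswith(self, prefix, *args):
        return False  # claim no dangerous prefix ever matches

print(naive_is_safe("os.system"))            # honest string: False (blocked)
print(naive_is_safe(LyingStr("os.system")))  # lying subclass: True (allowed)
```

The lesson generalizes: any check that calls a method on attacker-controlled objects is asking the attacker whether the attacker is dangerous. Robust filters compare against coerced values (`str(x)`) or use type-checked, non-overridable operations.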
The CVSS vector tells the whole story: AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H. CWE-693, Protection Mechanism Failure — the classification for when your security control exists but doesn't actually work.
Patched in praisonai-agents version 1.5.90.
The other sandbox escape is worse, honestly
CVE-2026-34938 got the headlines because of the CVSS score. But CVE-2026-34955 (CVSS 8.8) might be more damning.
In STRICT mode — the most restrictive sandbox configuration — an attacker escapes every command restriction by invoking sh -c and passing arbitrary shell commands as its string argument. The sandbox dutifully blocks rm, sudo, chmod, and dangerous patterns like piping and command substitution. It does not block sh itself. So you ask sh nicely, and it obliges.
This is the kind of finding a junior pentester stumbles on in the first 30 minutes. It means nobody with offensive experience reviewed the allowlist before shipping it.
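The escape reproduces against any allowlist built this way. The check below is an illustrative stand-in for the STRICT-mode logic, not PraisonAI's code; `strict_mode_allows` and the blocklist contents are assumptions for the sketch.

```python
import shlex

# Illustrative STRICT-mode style check: block dangerous binaries by name.
BLOCKED_COMMANDS = {"rm", "sudo", "chmod"}

def strict_mode_allows(command: str) -> bool:
    argv = shlex.split(command)
    return bool(argv) and argv[0] not in BLOCKED_COMMANDS

print(strict_mode_allows("rm -rf /important"))          # False: blocked directly
print(strict_mode_allows("sh -c 'rm -rf /important'"))  # True: smuggled via sh
```

Blocking `sh`, `bash`, `dash`, and friends closes this specific hole, but the deeper fix is a denylist-to-allowlist inversion: permit only the exact binaries agents need, and nothing that can interpret further code.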
The rest of the disclosure
Three more CVEs rounded out the week:
| CVE | CVSS | What broke |
|---|---|---|
| CVE-2026-34935 | 9.8 | CLI command injection via --mcp argument. Shell metacharacters in the flag value land in an unquoted subprocess call. |
| CVE-2026-34953 | 9.1 | OAuth token validation accepted any Bearer token. Not a timing attack, not weak entropy — the check was functionally a no-op. |
| CVE-2026-34952 | 9.1 | WebSocket connections to the Gateway server required zero authentication. Connect, send agent commands, control the whole swarm. |
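The --mcp injection belongs to a well-known bug class: interpolating an untrusted value into a `shell=True` string. The sketch below uses `echo` as a stand-in binary to show the general pattern and its fix, not PraisonAI's actual invocation.

```python
import subprocess

untrusted = "config.json; echo INJECTED"  # attacker-controlled flag value

# Vulnerable pattern: the shell parses the semicolon and runs the payload.
vuln = subprocess.run(f"echo {untrusted}", shell=True,
                      capture_output=True, text=True)

# Safe pattern: the value is a single argv element; metacharacters are inert.
safe = subprocess.run(["echo", untrusted], capture_output=True, text=True)

print(vuln.stdout.splitlines())  # ['config.json', 'INJECTED']
print(safe.stdout.strip())       # config.json; echo INJECTED
```

Passing an argument vector instead of a formatted string eliminates the entire class, which is why it is the default recommendation in the `subprocess` documentation.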
Average CVSS across five vulnerabilities: 9.4. That's not a bug cluster. That's a security architecture that was never adversarially tested.
Why AI agent frameworks keep shipping like this
PraisonAI's sandbox documentation describes a thoughtful-sounding architecture: validation (command allowlists), isolation (filesystem and resource constraints), monitoring (logging and policy violation tracking). Three layers. Four configurable modes. It reads well.
Then you notice the default mode is "Disabled."
The --sandbox flag is opt-in. Most developers running agent code — and agent frameworks spend most of their life in development — never flip it on. The security architecture protects exactly the users who went looking for it, which is roughly nobody.
Even when enabled, the model is wrong. Allowlisting shell commands while permitting arbitrary Python execution is like locking the front door and leaving the garage open. The attacker isn't going to type rm -rf /. They're going to import subprocess, or subclass a built-in type, or find whichever escape hatch the allowlist didn't anticipate. In-process Python sandboxes (RestrictedPython, custom attribute wrappers, AST-level restrictions) have a two-decade history of bypasses. Every one of them eventually falls.
And the auth story — an OAuth validator that accepts any token, a WebSocket gateway with no auth at all — suggests authentication wasn't forgotten. It was never part of the design. The framework was built to demonstrate capabilities, then security was stapled on before release.
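The shape of that failure is easy to illustrate. Below is a sketch of the difference between a Bearer check that is functionally a no-op and a real one; the function names and token value are invented for the example, not taken from PraisonAI.

```python
import hmac

EXPECTED_TOKEN = "s3cret-token"  # placeholder secret for the example

def broken_validate(auth_header: str) -> bool:
    # Functionally a no-op: any Bearer token is accepted.
    return auth_header.startswith("Bearer ")

def real_validate(auth_header: str) -> bool:
    if not auth_header.startswith("Bearer "):
        return False
    presented = auth_header[len("Bearer "):]
    # Constant-time comparison against the expected token.
    return hmac.compare_digest(presented, EXPECTED_TOKEN)

print(broken_validate("Bearer literally-anything"))  # True
print(real_validate("Bearer literally-anything"))    # False
print(real_validate("Bearer s3cret-token"))          # True
```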
This pattern keeps repeating. Langflow had a functionally identical exec() RCE just days earlier (CVE-2026-33017). Agenta had its own sandbox escape (CVE-2026-27952). The OWASP Top 10 for Agentic Applications exists because the category needed one.
What to do about it
If you're running PraisonAI, upgrade both components: praisonai-agents to 1.5.90+ (fixes CVE-2026-34938) and the main package to 4.5.97+ (fixes CVE-2026-34955, -34952, -34953). Don't expose the Gateway to untrusted networks — the WebSocket auth patch is a fix, not a guarantee. Treat the in-process sandbox as a speed bump. If your agents run untrusted code, they need container-level or VM-level isolation, full stop.
If you're evaluating any agent framework, ask three questions before you look at the feature list:
1. Is code execution sandboxed by default, or opt-in? Opt-in sandboxing protects nobody in practice.
2. What isolation primitive does the sandbox use? In-process Python filtering is not sufficient for untrusted code. You want process boundaries at minimum — ideally containers or microVMs.
3. Does the agent gateway require authentication? If not, nothing else in the security model matters.
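As a floor for the isolation question: even a plain subprocess boundary is categorically stronger than in-process filtering, because an escape from the child interpreter doesn't land inside your process. A minimal sketch, assuming a POSIX host with Python available — a real deployment would layer containers or microVMs on top, plus resource limits:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    # Fresh interpreter in isolated mode (-I): no user site-packages, no
    # PYTHON* environment variables, no current-directory imports.
    # The timeout bounds runaway execution.
    return subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )

result = run_untrusted("print(2 + 2)")
print(result.stdout.strip())  # 4
```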
The pattern that won't break
Five critical CVEs in one framework sounds extraordinary. It isn't. PraisonAI's security posture is typical for the AI agent category, not exceptional. These frameworks compete on agent count, tool integrations, and orchestration flexibility. Security is a roadmap item, not a design constraint.
The next agent framework disclosure will look the same. The sandbox will have a clever bypass. The auth will be broken or missing. The command injection will hide somewhere obvious. Average CVSS north of 9.
Patch your dependencies. This isn't a "next sprint" situation.