Critical LangChain Vulnerability Lets Attackers Exfiltrate Sensitive Secrets from AI Systems

A critical vulnerability in LangChain’s core library (CVE-2025-68664) allows attackers to exfiltrate sensitive environment variables and potentially execute code through deserialization flaws.

Discovered by a Cyata researcher and patched just before Christmas 2025, the issue affects one of the most popular AI frameworks with hundreds of millions of downloads.​

langchain-core's dumps() and dumpd() functions failed to escape user-controlled dictionaries containing the reserved 'lc' key, which marks internal serialized objects.

This led to deserialization of untrusted data (CWE-502) when LLM outputs or prompt injections influenced fields such as additional_kwargs or response_metadata, triggering serialize-then-deserialize cycles in common flows like event streaming, logging, and caching. The CNA-assigned CVSS score of 9.3 rates the issue Critical, with 12 vulnerable patterns identified, including astream_events (v1) and Runnable.astream_log().
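
The mechanics can be illustrated with a short, hedged sketch. Assuming the pre-patch behavior described in the advisory, an attacker-influenced dictionary carrying the reserved 'lc' marker survives dumpd() unescaped and is treated as a serialized object on the next load(); the target class and keyword arguments below are illustrative only, not the actual exploit payload.

```python
# Minimal sketch of the unsafe round-trip, assuming pre-patch langchain-core
# behavior. The nested payload is illustrative; a real attack would target
# whatever classes the deployment's deserializer is willing to instantiate.
from langchain_core.load import dumpd, load
from langchain_core.messages import AIMessage

# Attacker-influenced data (e.g. copied from an LLM response or a prompt
# injection) that smuggles the reserved 'lc' constructor marker.
payload = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "AIMessage"],  # illustrative target
    "kwargs": {"content": "attacker-controlled object"},
}

# The dict lands in additional_kwargs through ordinary message handling.
msg = AIMessage(content="ok", additional_kwargs={"data": payload})

# Pre-patch, dumpd() did not escape the nested 'lc' dict, so a later load()
# (event streaming, logging, or cache replay) reconstructs it as an object
# instead of keeping it as plain data.
restored = load(dumpd(msg))
print(type(restored.additional_kwargs["data"]))  # dict on patched releases,
                                                 # an instantiated object on vulnerable ones
```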

The Cyata researcher uncovered the flaw during an audit of AI trust boundaries, spotting the missing escape in the serialization code after tracing deserialization sinks.

The flaw was reported via Huntr on December 4, 2025; LangChain acknowledged it the next day and published the advisory on December 24. Patches rolled out in langchain-core versions 0.3.81 and 1.2.5, which wrap 'lc'-containing dicts and disable secrets_from_env by default (it was previously enabled, allowing direct environment-variable leaks). The team awarded a record $4,000 bounty.
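
To see why the secrets_from_env default matters, here is a minimal sketch using the serialized secret-node shape that langchain-core's loader understands; the environment variable name is a stand-in, and the secrets_from_env keyword is assumed to be exposed on load() as the advisory's mitigation suggests.

```python
# Sketch of the secret-resolution path, assuming load() exposes the
# secrets_from_env toggle referenced in the advisory. DEMO_API_KEY is a
# stand-in for a real secret such as OPENAI_API_KEY.
import os
from langchain_core.load import load

os.environ["DEMO_API_KEY"] = "s3cr3t-value"

# A crafted "secret" node: if it reaches the deserializer while env-based
# resolution is enabled, it resolves straight to the variable's value.
crafted = {"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}

print(load(crafted, secrets_from_env=True))  # -> "s3cr3t-value"

# The patched releases flip the default to False, so this lookup no longer
# happens unless the caller explicitly opts in.
```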

Attackers could craft prompts that cause deserialization to instantiate allowlisted classes such as ChatBedrockConverse from langchain_aws, triggering server-side request forgery (SSRF) with environment variables placed in request headers for exfiltration.

A deserialized PromptTemplate also enables Jinja2 rendering, opening the door to possible remote code execution if the template is invoked after deserialization. LangChain's scale amplifies the risk: pepy.tech logs roughly 847 million total downloads, and pypistats shows about 98 million over the last month.
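
The Jinja2 rendering path mentioned above can be demonstrated harmlessly. The sketch below uses a benign arithmetic expression as a stand-in for an attacker-supplied template and shows that template logic executes at render time once a jinja2-format PromptTemplate is invoked (the jinja2 package must be installed).

```python
from langchain_core.prompts import PromptTemplate

# Benign stand-in for an attacker-supplied template string; a real payload
# would abuse Jinja2 features rather than simple arithmetic.
tmpl = PromptTemplate.from_template("{{ 7 * 7 }}", template_format="jinja2")

# Rendering evaluates the template expression, which is why invoking a
# deserialized jinja2 template built from untrusted data is dangerous.
print(tmpl.format())  # -> "49"
```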

Upgrade langchain-core immediately and verify dependencies such as langchain-community. Treat LLM outputs as untrusted, audit deserialization in streaming and logging paths, and disable secret resolution unless inputs are verified. A parallel flaw hit LangChainJS (CVE-2025-68665), underscoring the risks in agentic AI plumbing.
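
As a quick triage aid, the following sketch checks whether the installed langchain-core already includes the fix, using the patched versions named above (0.3.81 on the 0.3 line, 1.2.5 on the 1.x line); it assumes the packaging library is available in the environment.

```python
# Hedged triage check: is the installed langchain-core at or above the
# patched versions named in the advisory?
from importlib.metadata import version
from packaging.version import Version

v = Version(version("langchain-core"))
patched = v >= Version("1.2.5") or Version("0.3.81") <= v < Version("1.0.0")
print(f"langchain-core {v}: {'patched' if patched else 'UPGRADE REQUIRED'}")
```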

Organizations must inventory agent deployments for swift triage amid the booming adoption of LLM applications.
