Headline
GHSA-r399-636x-v7f6: LangChain serialization injection vulnerability enables secret extraction
Context
A serialization injection vulnerability exists in LangChain JS’s toJSON() method (and, by extension, in JSON.stringify() on Serializable objects, which calls it). The method did not escape objects with 'lc' keys when serializing free-form data in kwargs. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than as plain user data.
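For illustration, the collision looks like this (a minimal sketch; the secret-reference shape matches the exploit example later in this advisory):
// A legitimate LangChain serialization marker: a secret reference
const langchainSecretRef = { lc: 1, type: "secret", id: ["OPENAI_API_KEY"] };
// Plain user data that happens to use the same shape. Before the fix,
// load() could not tell it apart from the marker above, so it was resolved
// as a LangChain object instead of being returned as plain data.
const userData = { nested: { lc: 1, type: "secret", id: ["SOME_ENV_VAR"] } };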
Attack surface
The core vulnerability was in Serializable.toJSON(): this method failed to escape user-controlled objects containing 'lc' keys within kwargs (e.g., additional_kwargs, metadata, response_metadata). When this unescaped data was later deserialized via load(), the injected structures were treated as legitimate LangChain objects rather than plain user data.
This escaping bug enabled several attack vectors:
- Injection via user data: Malicious LangChain object structures could be injected through user-controlled fields like metadata, additional_kwargs, or response_metadata
- Secret extraction: Injected secret structures could extract environment variables when secretsFromEnv was enabled (which had no explicit default, effectively defaulting to true behavior)
- Class instantiation via import maps: Injected constructor structures could instantiate any class available in the provided import maps with attacker-controlled parameters (a payload sketch follows the note below)
Note on import maps: Classes must be explicitly included in import maps to be instantiatable. The core import map includes standard types (messages, prompts, documents), and users can extend this via importMap and optionalImportsMap options. This architecture naturally limits the attack surface—an allowedObjects parameter is not necessary because users control which classes are available through the import maps they provide.
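As a rough sketch of the class-instantiation vector, an injected constructor structure could look like the following (the id path and kwargs here are assumptions for illustration; they only resolve if the named class is present in the import maps):
// Hypothetical injected constructor structure. It is only instantiable if
// the class named by `id` is reachable through the provided import maps.
const injectedConstructor = {
  lc: 1,
  type: "constructor",
  id: ["langchain_core", "messages", "HumanMessage"], // assumed path, for illustration
  kwargs: { content: "attacker-chosen arguments" },
};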
Security hardening: This patch fixes the escaping bug in toJSON() and introduces new restrictive defaults in load(): secretsFromEnv now explicitly defaults to false, and a maxDepth parameter protects against DoS via deeply nested structures. JSDoc security warnings have been added to all import map options.
Who is affected?
Applications are vulnerable if they:
- Serialize untrusted data via JSON.stringify() on Serializable objects, then deserialize with load() — Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains 'lc' key structures.
- Deserialize untrusted data with load() — Directly deserializing untrusted data that may contain injected 'lc' structures.
- Use LangGraph checkpoints — Checkpoint serialization/deserialization paths may be affected.
The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.
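A sketch of that round trip as it behaved before the patch (the additional_kwargs payload stands in for data an attacker shaped via prompt injection):
import { AIMessage } from "@langchain/core/messages";
import { load } from "@langchain/core/load";
// The model's response carries attacker-shaped additional_kwargs
const response = new AIMessage({
  content: "...",
  additional_kwargs: { lc: 1, type: "secret", id: ["OPENAI_API_KEY"] },
});
// Pre-patch: toJSON() did not escape the 'lc' key, so the application's own
// serialization output now contains an injected secret reference
const serialized = JSON.stringify(response);
// On deserialization with secretsFromEnv enabled, the injected structure is
// honored and the environment variable is read
const restored = await load(serialized, { secretsFromEnv: true });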
Impact
Attackers who control serialized data can extract environment variable secrets by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}, which loads the named environment variable during deserialization (when secretsFromEnv: true). They can also inject constructor structures to instantiate any class available in the provided import maps with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.
Key severity factors:
- Affects the serialization path—applications trusting their own serialization output are vulnerable
- Enables secret extraction when combined with secretsFromEnv: true
- LLM responses in additional_kwargs can be controlled via prompt injection
Exploit example
import { load } from "@langchain/core/load";
// Attacker injects secret structure into user-controlled data
const attackerPayload = JSON.stringify({
user_data: {
lc: 1,
type: "secret",
id: ["OPENAI_API_KEY"],
},
});
process.env.OPENAI_API_KEY = "sk-secret-key-12345";
// With secretsFromEnv: true, the secret is extracted
const deserialized = await load(attackerPayload, { secretsFromEnv: true });
console.log(deserialized.user_data); // "sk-secret-key-12345" - SECRET LEAKED!
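Under the patched defaults the same payload no longer reaches process.env (a sketch; the exact error message may differ):
// secretsFromEnv now defaults to false and the secret is not in secretsMap,
// so load() throws instead of silently resolving the injected reference
await load(attackerPayload); // throws: secret not found in secretsMap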
Security hardening changes
This patch introduces the following changes to load():
- secretsFromEnv default changed to false: Disables automatic secret loading from environment variables. Secrets not found in secretsMap now throw an error instead of being loaded from process.env. This fail-safe behavior ensures missing secrets are caught immediately rather than silently continuing with null.
- New maxDepth parameter (defaults to 50): Protects against denial-of-service attacks via deeply nested JSON structures that could cause stack overflow.
- Escape mechanism in toJSON(): User-controlled objects containing 'lc' keys are now wrapped in {"__lc_escaped__": {...}} during serialization and unwrapped as plain data during deserialization (see the sketch after this list).
- JSDoc security warnings: All import map options (importMap, optionalImportsMap, optionalImportEntrypoints) now include security warnings about never populating them from user input.
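A sketch of the new escaping behavior, following the wrapping described above (the serialized output is abbreviated and its exact layout may differ):
import { AIMessage } from "@langchain/core/messages";
const msg = new AIMessage({
  content: "hi",
  additional_kwargs: { lc: 1, type: "secret", id: ["OPENAI_API_KEY"] },
});
// The user-controlled object containing an 'lc' key is wrapped on the way out:
JSON.stringify(msg);
// ... "additional_kwargs": { "__lc_escaped__": { "lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"] } } ...
// load() unwraps __lc_escaped__ back into plain data instead of resolving a secret.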
Migration guide
No changes needed for most users
If you’re deserializing standard LangChain types (messages, documents, prompts) using the core import map, your code will work without changes:
import { load } from "@langchain/core/load";
// Works with default settings
const obj = await load(serializedData);
For secrets from environment
secretsFromEnv now defaults to false, and missing secrets throw an error. If you need to load secrets:
import { load } from "@langchain/core/load";
// Provide secrets explicitly (recommended)
const obj = await load(serializedData, {
secretsMap: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },
});
// Or explicitly opt-in to load from env (only use with trusted data)
const obj = await load(serializedData, { secretsFromEnv: true });
Warning: Only enable secretsFromEnv if you trust the serialized data. Untrusted data could extract any environment variable.
Note: If a secret reference is encountered but not found in secretsMap (and secretsFromEnv is false or the secret is not in the environment), an error is thrown. This fail-safe behavior ensures you’re aware of missing secrets rather than silently receiving null values.
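If you want to handle that failure explicitly, a minimal sketch (the exact error class and message are not specified here):
import { load } from "@langchain/core/load";
try {
  const obj = await load(serializedData);
} catch (err) {
  // A missing secret now surfaces as an error instead of a silent null
  console.error("Deserialization failed (possibly a missing secret):", err);
}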
For deeply nested structures
If you have legitimate deeply nested data that exceeds the default depth limit of 50:
import { load } from "@langchain/core/load";
const obj = await load(serializedData, { maxDepth: 100 });
For custom import maps
If you provide custom import maps, ensure they only contain trusted modules:
import { load } from "@langchain/core/load";
import * as myModule from "./my-trusted-module";
// GOOD - explicitly include only trusted modules
const obj = await load(serializedData, {
importMap: { my_module: myModule },
});
// BAD - never populate from user input
const obj = await load(serializedData, {
importMap: userProvidedImports, // DANGEROUS!
});
References
- GHSA-r399-636x-v7f6
- langchain-ai/langchainjs@e5063f9
- https://github.com/langchain-ai/langchainjs/releases/tag/%40langchain%2Fcore%401.1.8
- https://github.com/langchain-ai/langchainjs/releases/tag/langchain%401.2.3