
LangChain Serialization Injection Vulnerabilities Enable Secret Extraction

Tags: ai-platform-security, open-source-dependency-vulnerability, widely-deployed-product-advisory, data-exfiltration-method
Updated March 21, 2026 at 03:01 PM · 6 sources

Two critical serialization injection vulnerabilities were discovered in the LangChain framework, which is widely used for building LLM-powered applications. The first (CVE-2025-68665) affects the `toJSON()` method in LangChain JS and related serialization routines, where user-controlled data containing the reserved `lc` key could be misinterpreted as legitimate LangChain objects during deserialization. The second (CVE-2025-68664) impacts the `dumps()` and `dumpd()` functions in LangChain Core for Python, allowing attacker-supplied dictionaries with the `lc` key to be treated as internal objects, potentially leading to the extraction of secrets or the instantiation of internal classes with attacker-defined parameters. Both vulnerabilities are remotely exploitable and have been patched in recent versions of LangChain and LangChain Core.
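
To make the pattern concrete, below is a minimal sketch of the Python-side issue, assuming `langchain_core.load.dumps` and `loads` and the framework's documented `lc` serialization envelope. The exact pre-patch resolution behavior varies by version, so treat this as an illustration rather than a verified exploit.

```python
# Illustrative sketch of the "lc"-key injection pattern (CVE-2025-68664).
# Assumes langchain_core's serialization helpers; the behavior shown is the
# pre-patch behavior described in the advisories, not a tested exploit.
from langchain_core.load import dumps, loads

# LangChain's serialization format reserves the "lc" key to mark framework
# objects, e.g. a secret reference:
#   {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
# Pre-patch, a plain attacker-supplied dict with that shape was emitted
# verbatim by dumps()/dumpd() ...
attacker_input = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
serialized = dumps({"user_data": attacker_input})

# ... and on the way back in, loads() treated it as a real secret node,
# resolving it from the secrets map (or, per the write-ups, from
# environment variables) instead of returning inert user data.
restored = loads(serialized, secrets_map={"OPENAI_API_KEY": "sk-demo"})
print(restored)  # on vulnerable versions, "user_data" now holds the secret
```

Patched releases escape the reserved key on serialization, so the same input round-trips as inert data.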

Exploitation of these flaws could allow attackers to extract sensitive information such as environment variables or to manipulate application behavior by injecting malicious data structures. Organizations using affected versions of LangChain are strongly advised to upgrade to the patched releases (`@langchain/core` 0.3.80 and 1.1.8 and `langchain` 0.3.37 and 1.2.3 for CVE-2025-68665; `langchain-core` 0.3.81 and 1.2.5 for CVE-2025-68664) to mitigate the risk of exploitation.
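
For quick triage of the Python side, a hedged version-check sketch follows; it assumes the third-party `packaging` library and encodes only the release thresholds named above, so adapt it to your own release lines and lockfiles.

```python
# Rough version triage for the Python fix (illustrative; adjust the
# thresholds if your deployment tracks a different release line).
from importlib.metadata import version
from packaging.version import Version

v = Version(version("langchain-core"))
patched = v >= Version("1.2.5") or (v < Version("1.0.0") and v >= Version("0.3.81"))
print(f"langchain-core {v}: {'patched' if patched else 'vulnerable - upgrade'}")
```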

Timeline

  1. Dec 25, 2025

    Technical details of 'LangGrinch' exploitation are published

    Public write-ups described how prompt-injected or otherwise user-controlled dictionaries containing the reserved 'lc' key could trigger unsafe deserialization, secret extraction, and potentially SSRF, file operations, or code execution. The reporting also attributed the discovery of CVE-2025-68664 to a Cyata researcher and highlighted broad exposure in AI application workflows.

  2. Dec 23, 2025

    CVE-2025-68664 and CVE-2025-68665 are publicly disclosed

    On December 23, 2025, the two LangChain serialization injection vulnerabilities were published in advisories and vulnerability feeds. Public disclosure identified CVE-2025-68664 as a critical LangChain Core issue and CVE-2025-68665 as a high-severity LangChain JS issue.

  3. Dec 23, 2025

LangChain patches CVE-2025-68665 in LangChain JS

A related serialization injection issue in LangChain JS, tracked as CVE-2025-68665, was fixed in `@langchain/core` 0.3.80 and 1.1.8 and in `langchain` 0.3.37 and 1.2.3. The flaw in the `toJSON()` path could let attacker-controlled data be deserialized as legitimate LangChain objects and expose secrets.

  4. Dec 23, 2025

    LangChain patches CVE-2025-68664 in Python packages

LangChain released fixes for CVE-2025-68664 in `langchain-core` 0.3.81 and 1.2.5. The patch escaped reserved keys (sketched after this timeline), restricted unsafe object reconstruction, and disabled secret resolution by default.

  5. Dec 1, 2025

    Cyata researcher reports LangChain Core flaw to maintainers

    A critical serialization injection vulnerability later tracked as CVE-2025-68664 was reported to LangChain in early December 2025. The flaw affected LangChain Core's handling of user-controlled data during serialization and deserialization.
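
The reserved-key escaping mentioned in the December 23 patch entry can be illustrated with a generic sketch. The marker name `__escaped_lc__` and the recursion below are inventions for illustration, not `langchain-core`'s actual implementation.

```python
# Generic sketch of reserved-key escaping (illustrative; the real
# langchain-core patch differs in detail).
RESERVED = "lc"
MARKER = f"__escaped_{RESERVED}__"  # invented name for illustration

def escape_plain(obj):
    """Recursively rename a user-supplied reserved key so the loader can
    no longer mistake plain data for a framework constructor/secret node."""
    if isinstance(obj, dict):
        out = {k: escape_plain(v) for k, v in obj.items()}
        if RESERVED in out:
            out[MARKER] = out.pop(RESERVED)
        return out
    if isinstance(obj, list):
        return [escape_plain(v) for v in obj]
    return obj

print(escape_plain({"user_data": {"lc": 1, "type": "secret", "id": ["X"]}}))
# {'user_data': {'type': 'secret', 'id': ['X'], '__escaped_lc__': 1}}
```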


Related Stories

LangChain Flaws Enable Information Disclosure and Security Bypass

German CERT advisories disclosed two vulnerabilities in **LangChain**, warning that the framework is affected by flaws that can lead to **information disclosure** and the **bypassing of security measures**. The issues were published in separate notices, identified as `2026-0877` and `2026-1010`, indicating multiple security weaknesses affecting the widely used LLM application framework. The advisories provide limited public detail, but the reported impact suggests attackers could expose sensitive data and circumvent protections built into LangChain-based deployments. Organizations using LangChain should review the affected advisories, identify exposed implementations, and prioritize vendor guidance, patching, and compensating controls to reduce the risk of data exposure and weakened application security.

5 days ago
High-Severity Flaws in Langflow and vLLM Expose Secrets and Enable RCE

Two high-severity vulnerabilities were disclosed in widely used AI application components, affecting **Langflow** and **vLLM**. In Langflow, `CVE-2026-33497` impacts versions before **1.7.1** and stems from improper filtering of `folder_name` and `file_name` in the `/profile_pictures/{folder_name}/{file_name}` endpoint. The path traversal flaw (`CWE-22`) allows unauthenticated attackers to read files across directories, including the application's `secret_key`, creating a direct risk of secret exposure and follow-on compromise. The issue is addressed in **Langflow 1.7.1** and tracked in GitHub advisory `GHSA-ph9w-r52h-28p7`. A separate flaw in vLLM, `CVE-2026-27893`, can lead to **remote code execution** by bypassing a user's attempt to disable remote code trust. In versions from **0.10.1** up to but not including **0.18.0**, two model implementation files hardcoded `trust_remote_code=True`, overriding the safer `--trust-remote-code=False` setting and allowing malicious model repositories to run code during model use. The vulnerability, classified as `CWE-693`, was patched in **vLLM 0.18.0**, underscoring supply-chain and configuration-bypass risks in AI infrastructure components.

1 month ago
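
The Langflow issue above is a classic CWE-22 shape; the sketch below shows a generic containment check, with `BASE` and `safe_lookup` invented for illustration rather than taken from Langflow's code.

```python
# Generic containment check against path traversal (CWE-22); illustrative,
# not Langflow's implementation. Requires Python 3.9+ for is_relative_to.
from pathlib import Path

BASE = Path("/app/profile_pictures").resolve()

def safe_lookup(folder_name: str, file_name: str) -> Path:
    candidate = (BASE / folder_name / file_name).resolve()
    if not candidate.is_relative_to(BASE):
        raise PermissionError("path traversal attempt blocked")
    return candidate

# "../"-style inputs resolve outside BASE and are rejected:
# safe_lookup("..", "secret_key")  -> PermissionError
```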
SSRF and XSS Vulnerability Disclosures in LeafKit and LangChain Community

Multiple application-layer vulnerabilities were disclosed across popular developer components, including an **XSS escaping bypass in LeafKit** (Vapor’s Swift templating engine) and an **SSRF bypass in `@langchain/community`**. Vapor released an updated LeafKit version to address an HTML escaping flaw where **Unicode extended grapheme clusters** could bypass escaping in Leaf templates: Swift treats certain sequences as a single character while browsers parse them as multiple characters, enabling attackers to break out of HTML attributes and inject malicious attributes/scripts. The issue was reported by **bawolff** and fixed in a LeafKit release referenced by Vapor’s advisory. Separately, `@langchain/community` was reported vulnerable to **CVE-2026-26019** (also tracked as **GHSA-gf3v-fwqg-4vh7**), affecting versions **≤ 1.1.13** and fixed in **1.1.14**. The flaw sits in `RecursiveUrlLoader`, where a non-semantic `String.startsWith()` check could be bypassed with crafted hostnames (e.g., `https://example.com.attacker.com`), and where insufficient filtering allowed access to **private/reserved IP ranges** and **cloud metadata endpoints** such as `169.254.169.254`, potentially exposing IAM credentials/tokens in cloud-hosted deployments. A separate write-up describes a *different* SSRF scenario involving a **misconfigured Sentry tunnel** endpoint and provides general SSRF background, but it does not appear to be part of the same LeafKit or LangChain disclosure.

1 month ago
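
The `RecursiveUrlLoader` flaw above turns on a prefix check; the sketch below illustrates in Python why `startsWith`-style validation fails and what a stricter host check might look like, with all names and the blocking policy assumed for illustration.

```python
# Why prefix checks fail for URL allow-listing (illustrative sketch).
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOST = "example.com"

def naive_check(url: str) -> bool:
    # Bypassable: "https://example.com.attacker.com" also passes.
    return url.startswith(f"https://{ALLOWED_HOST}")

def stricter_check(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host != ALLOWED_HOST and not host.endswith("." + ALLOWED_HOST):
        return False
    # Also refuse private/reserved targets such as the link-local cloud
    # metadata endpoint 169.254.169.254.
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    return not (addr.is_private or addr.is_link_local or addr.is_reserved)

print(naive_check("https://example.com.attacker.com"))     # True  (bad)
print(stricter_check("https://example.com.attacker.com"))  # False (rejected)
```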
