Mallory

LangChain Flaws Enable Information Disclosure and Security Bypass

ai-platform-security · open-source-dependency-vulnerability · widely-deployed-product-advisory
Updated April 27, 2026 at 12:02 PM · 3 sources


Advisories from the German CERT (dCERT) disclosed vulnerabilities in LangChain, warning that the framework is affected by flaws that can lead to information disclosure and the bypassing of security measures. The first two issues were published in separate notices, identified as 2026-0877 and 2026-1010; a third notice, 2026-1253, later described additional weaknesses affecting the widely used LLM application framework's openai and text-splitters components.

The advisories provide limited public detail, but the reported impact suggests attackers could expose sensitive data and circumvent protections built into LangChain-based deployments. Organizations using LangChain should review the affected advisories, identify exposed implementations, and prioritize vendor guidance, patching, and compensating controls to reduce the risk of data exposure and weakened application security.
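Since the advisories do not name fixed versions, the first practical step is an inventory of which LangChain distributions are installed and at what version. The sketch below uses Python's standard `importlib.metadata`; the package list is an assumption drawn from the components named in the advisories (openai, text-splitters) plus the core packages, not an authoritative list of affected distributions.

```python
from importlib import metadata

def installed_versions(names):
    """Return a mapping of distribution name -> installed version (None if absent)."""
    versions = {}
    for name in names:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

if __name__ == "__main__":
    # Hypothetical watch list; compare against vendor guidance once fixes are published.
    watch = ["langchain", "langchain-core", "langchain-openai", "langchain-text-splitters"]
    for pkg, ver in installed_versions(watch).items():
        print(f"{pkg}: {ver or 'not installed'}")
```

Once dCERT or the LangChain maintainers publish patched version numbers, the same inventory can be compared against them.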

Timeline

  1. Apr 27, 2026

dCERT discloses multiple LangChain vulnerabilities enabling info disclosure and SSRF bypass

    dCERT published advisory 2026-1253 for LangChain components openai and text-splitters, describing multiple vulnerabilities that could allow information disclosure and SSRF bypass. The reference does not provide additional technical details or remediation information.

  2. Apr 9, 2026

    dCERT discloses LangChain security bypass vulnerability

    dCERT published advisory 2026-1010 for a LangChain vulnerability that could allow bypassing security measures. The reference does not provide further details on impact, exploitation, or fixes.

  3. Mar 27, 2026

    dCERT discloses LangChain information disclosure vulnerability

    dCERT published advisory 2026-0877 for a LangChain vulnerability that could allow information disclosure. No additional technical details or remediation information are provided in the reference.



Related Stories

LangChain Serialization Injection Vulnerabilities Enable Secret Extraction


Two critical serialization injection vulnerabilities were discovered in the LangChain framework, which is widely used for building LLM-powered applications. The first vulnerability (CVE-2025-68665) affects the `toJSON()` method in LangChain JS and related serialization routines, where user-controlled data containing the reserved `lc` key could be misinterpreted as legitimate LangChain objects during deserialization. The second vulnerability (CVE-2025-68664) impacts the `dumps()` and `dumpd()` functions, allowing attacker-supplied dictionaries with the `lc` key to be treated as internal objects, potentially leading to the extraction of secrets or the instantiation of internal classes with attacker-defined parameters. Both vulnerabilities are remotely exploitable and have been patched in recent versions of LangChain and LangChain Core. Exploitation of these flaws could allow attackers to extract sensitive information such as environment variables or manipulate application behavior by injecting malicious data structures. Organizations using affected versions of LangChain are strongly advised to upgrade to the patched releases: for CVE-2025-68665, @langchain/core versions 0.3.80 and 1.1.8 and langchain versions 0.3.37 and 1.2.3; for CVE-2025-68664, langchain-core versions 0.3.81 and 1.2.5.
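Beyond upgrading, applications that pass untrusted input into serialization-aware code paths can reject payloads carrying the reserved `lc` marker before they reach the framework. The sketch below is an illustrative defense-in-depth check, not LangChain's actual fix; the function names are hypothetical.

```python
def contains_lc_marker(obj):
    """Recursively detect the reserved 'lc' key anywhere in an untrusted payload."""
    if isinstance(obj, dict):
        if "lc" in obj:
            return True
        return any(contains_lc_marker(v) for v in obj.values())
    if isinstance(obj, list):
        return any(contains_lc_marker(v) for v in obj)
    return False

def sanitize_untrusted(payload):
    """Raise if untrusted input could masquerade as a serialized LangChain object."""
    if contains_lc_marker(payload):
        raise ValueError("untrusted input contains reserved 'lc' serialization marker")
    return payload
```

A check like this only complements patching: the authoritative fix is the updated deserialization logic in the patched releases.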

1 month ago
Multiple Vulnerabilities Reported in Langflow


German security advisories reported **multiple vulnerabilities** in **Langflow**, with separate notices identifying the product as affected and indicating that more than one security issue required attention. The advisories, `2026-0806` and `2026-1154`, both classify the matter as a set of vulnerabilities rather than a single flaw, pointing to an ongoing security issue affecting the Langflow platform. The available notices do not include public technical synopses, but the repeated publication of advisories for the same product indicates continued vulnerability management activity and the likelihood of updated findings or remediation guidance. Organizations using Langflow should review the referenced advisories, validate their deployed versions, and apply vendor or maintainer fixes and mitigations as they become available.

5 days ago
High-Severity Flaws in Langflow and vLLM Expose Secrets and Enable RCE


Two high-severity vulnerabilities were disclosed in widely used AI application components, affecting **Langflow** and **vLLM**. In Langflow, `CVE-2026-33497` impacts versions before **1.7.1** and stems from improper filtering of `folder_name` and `file_name` in the `/profile_pictures/{folder_name}/{file_name}` endpoint. The path traversal flaw (`CWE-22`) allows unauthenticated attackers to read files across directories, including the application's `secret_key`, creating a direct risk of secret exposure and follow-on compromise. The issue is addressed in **Langflow 1.7.1** and tracked in GitHub advisory `GHSA-ph9w-r52h-28p7`. A separate flaw in vLLM, `CVE-2026-27893`, can lead to **remote code execution** by bypassing a user's attempt to disable remote code trust. In versions from **0.10.1** up to but not including **0.18.0**, two model implementation files hardcoded `trust_remote_code=True`, overriding the safer `--trust-remote-code=False` setting and allowing malicious model repositories to run code during model use. The vulnerability, classified as `CWE-693`, was patched in **vLLM 0.18.0**, underscoring supply-chain and configuration-bypass risks in AI infrastructure components.
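The Langflow flaw is a classic path traversal (`CWE-22`): user-supplied path segments are joined to a base directory without checking that the resolved result stays inside it. A minimal sketch of the standard mitigation, assuming a POSIX filesystem and Python 3.9+ (`Path.is_relative_to`); the base directory and function name are hypothetical, not Langflow's code.

```python
from pathlib import Path

def safe_resolve(base: Path, folder_name: str, file_name: str) -> Path:
    """Join untrusted segments to base and reject anything that escapes it."""
    base = base.resolve()
    candidate = (base / folder_name / file_name).resolve()
    # After resolution, '..' components are collapsed; anything outside base
    # (e.g. folder_name = '..') fails this containment check.
    if not candidate.is_relative_to(base):
        raise PermissionError("path traversal attempt blocked")
    return candidate
```

Note that `resolve()` also follows symlinks, so a symlink planted inside the base directory would be caught by the same containment check. The authoritative fix remains upgrading to Langflow 1.7.1, and to vLLM 0.18.0 for the `trust_remote_code` bypass.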

1 month ago
