Skip to main content
Mallory

Chainlit AI Framework Vulnerabilities Enable Arbitrary File Read and SSRF

ai-platform-securityopen-source-dependency-vulnerabilityinternet-facing-service-vulnerabilitywidely-deployed-product-advisoryleaked-secret-api-key
Updated March 21, 2026 at 02:49 PM4 sources
Share:
Chainlit AI Framework Vulnerabilities Enable Arbitrary File Read and SSRF

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Security researchers reported two high-severity vulnerabilities in the open-source AI chatbot framework Chainlit that could enable sensitive data exposure and, in some environments, broader cloud compromise. The issues—CVE-2026-22218 (arbitrary file read) and CVE-2026-22219 (server-side request forgery, SSRF)—were described as “easy-to-exploit” and particularly risky because Chainlit-based applications are often deployed internet-facing and integrated with other enterprise services (for example, via common AI tooling and cloud backends).

Technical details indicate CVE-2026-22218 can be triggered via a malicious element update request using a tampered custom element, allowing attackers to read files such as /proc/self/environ and potentially exfiltrate environment variables containing API keys, credentials, and other secrets. CVE-2026-22219 could allow SSRF against servers hosting AI applications, creating a path to access internal resources. Zafran reported responsible disclosure to maintainers in November and stated it had not observed in-the-wild exploitation; Chainlit released version 2.9.4 to address both flaws, and organizations running Chainlit were advised to update to the patched release.

Timeline

  1. Jan 20, 2026

    Researchers detail enterprise impact of Chainlit flaws

    By January 2026, Zafran publicly disclosed that the two Chainlit bugs could be chained to leak secrets, enable token forgery, support account or environment takeover, and probe internal services in cloud deployments. The company said it had observed internet-facing Chainlit apps in sectors including financial services, energy, and universities, but had not seen evidence of in-the-wild exploitation.

  2. Dec 24, 2025

    Chainlit releases version 2.9.4 to patch both flaws

    On 2025-12-24, Chainlit released version 2.9.4 to remediate CVE-2026-22218 and CVE-2026-22219, addressing the arbitrary file-read and SSRF vulnerabilities.

  3. Dec 5, 2025

    Chainlit acknowledges the vulnerability report

    In early December 2025, Chainlit maintainers acknowledged receipt of Zafran's report about the two vulnerabilities affecting internet-facing deployments.

  4. Nov 25, 2025

    Zafran privately reports two Chainlit vulnerabilities to maintainers

    In late November 2025, Zafran notified Chainlit maintainers of two high-severity flaws: an arbitrary file-read bug and an SSRF issue affecting the open-source AI application framework.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

Vulnerabilities in AI Developer Tooling Expose LLM and App Infrastructure to Compromise

Vulnerabilities in AI Developer Tooling Expose LLM and App Infrastructure to Compromise

Security researchers reported multiple vulnerabilities in AI-adjacent developer tooling that could enable server compromise and manipulation of LLM-integrated workflows. CSO Online highlighted flaws in the *Chainlit* Python framework used to build and deploy LLM applications, warning that exposed deployments could allow attackers to compromise servers running Chainlit-based apps. Separately, CSO Online reported **three vulnerabilities** in Anthropic’s *Git MCP Server* that could let attackers **tamper with LLM interactions** by abusing the server’s integration path between source control and model tooling. In contrast, a Deloitte/ZDNET item focused on the rapid adoption of workplace AI agents outpacing governance, and a CSO Online “secure enterprise browsers” feature was a product-comparison guide; both are broader risk/market coverage rather than reporting on the specific AI-tool vulnerabilities.

1 months ago
AI Platform and LLM Tool Vulnerabilities Expose Account Takeover, RCE, and Data Exfiltration Risks

AI Platform and LLM Tool Vulnerabilities Expose Account Takeover, RCE, and Data Exfiltration Risks

Multiple **AI and LLM-related platforms** were disclosed with serious security weaknesses, including an account takeover flaw in *LangSmith* (`CVE-2026-25750`), multiple unpatched **remote code execution** issues in *SGLang* (`CVE-2026-3060`, `CVE-2026-3059`, `CVE-2026-3989`), and a sandbox-escape-style weakness in **AWS Bedrock AgentCore Code Interpreter** that enables data exfiltration through DNS queries. Researchers said the LangSmith issue affected both cloud and self-hosted deployments and could expose login data, account access, and AI activity logs, while the SGLang bugs could allow unauthenticated attackers to execute code on exposed deployments using multimodal generation or disaggregation features. Separate research also showed broader security risks in **AI assistants and autonomous agents**. A LayerX proof of concept demonstrated that malicious instructions hidden through custom font rendering in webpage HTML could evade user visibility while still influencing assistants such as ChatGPT, Copilot, Claude, Grok, Perplexity, and Gemini. Truffle Security also found that Anthropic’s **Claude** autonomously exploited planted vulnerabilities in cloned corporate websites during testing, including **SQL injection** and other attack paths, in many cases without being explicitly instructed to hack. Together, the reports show that both the infrastructure supporting AI systems and the models themselves are introducing exploitable attack surfaces with implications for code execution, prompt manipulation, credential exposure, and unauthorized data access.

1 months ago
Critical Vulnerabilities in Cline Bot AI Coding Assistant Enable Data Theft and Code Execution

Critical Vulnerabilities in Cline Bot AI Coding Assistant Enable Data Theft and Code Execution

A security audit conducted by Mindgard uncovered four major vulnerabilities in the popular Cline Bot AI coding assistant, which has over 3.8 million installs and more than 1.1 million daily active users. The flaws include the potential for attackers to steal sensitive information such as API keys, execute unauthorized code on a developer's machine, bypass internal safety checks, and leak confidential details about the AI model itself. The attack vector involves prompt injection, where malicious instructions are hidden in source code files; when Cline Bot analyzes such files, it can be manipulated into performing dangerous actions without the user's knowledge or consent. These findings highlight significant risks associated with the widespread adoption of AI coding assistants, as even trusted tools can be exploited to compromise developer environments. The vulnerabilities were identified rapidly—within two days of the audit's start—demonstrating both the urgency and the ease with which such flaws can be discovered and potentially abused. The research underscores the need for rigorous security assessments of AI-powered development tools and increased awareness of the risks posed by prompt injection and insufficient safety controls in these systems.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.