Security Risks and Attacks Targeting Large Language Model (LLM) Services and AI Integration Protocols
Attackers have increasingly targeted exposed large language model (LLM) services and the protocols that enable their integration, such as the Model Context Protocol (MCP). GreyNoise researchers observed nearly 100,000 attack sessions against public LLM endpoints, with campaigns probing for misconfigured proxies and server-side request forgery vulnerabilities to map the expanding AI attack surface. These attacks, which included methodical enumeration of OpenAI-compatible and Google Gemini endpoints, highlight the growing risk as enterprises move LLM deployments from experimental to production environments. Security experts warn that such enumeration efforts are likely precursors to more serious exploitation, emphasizing the need for organizations to secure exposed LLM endpoints and monitor for abnormal access patterns.
The Model Context Protocol (MCP), designed to facilitate seamless integration between LLMs and external tools, has also been identified as a double-edged sword. While MCP enables powerful automation and workflow enhancements, it extends the attack surface by embedding trust in external products and services, making it susceptible to exploitation by adversaries who manipulate context layers and metadata. Security leaders, such as Block's CISO, stress the importance of applying least-privilege principles and rigorous red-teaming to AI agents and integration protocols, recognizing that both human and machine actors can introduce significant risks. As LLMs and AI agents become ubiquitous in enterprise environments, organizations must adapt their security frameworks to address these novel attack vectors and integration challenges.
Timeline
Jan 12, 2026
Security guidance highlights MCP-specific attack patterns
An SC Media analysis published on January 12, 2026 described emerging attack patterns against Model Context Protocol integrations, including prompt injection through tool definitions, cross-server tool shadowing, registry or update rug pulls, and ANSI escape code injection. It recommended a zero-trust approach centered on provenance, signing, isolation, least privilege, auditing, and risk scoring for MCP servers.
Jan 12, 2026
GreyNoise reports 91,403 attack sessions across two LLM campaigns
By January 2026, GreyNoise had recorded 91,403 attack sessions across the two campaigns targeting exposed LLM services. It published defensive guidance including blocking OAST infrastructure, applying egress filtering, detecting rapid multi-endpoint enumeration, rate-limiting suspicious ASNs, and monitoring JA4 fingerprints.
Jan 12, 2026
Block adds safeguards to Goose after red-team findings
Following the red-team exercise, Block implemented protections including recipe installation warnings, alerts for suspicious Unicode, and stripping of invisible Unicode characters. The company also began testing adversarial-AI validation approaches to check prompts and outputs for malicious content.
Jan 12, 2026
Block red team demonstrates prompt-injection path to infostealer infection
Before January 2026, Block's security team red-teamed its Goose AI agent and showed that a phishing lure plus a poisoned workflow recipe containing invisible Unicode characters could lead a developer to execute an information-stealing malware payload on an employee laptop. The test demonstrated a real abuse path through prompt injection rather than compromise of the underlying model itself.
Dec 28, 2025
Second LLM-targeting campaign starts broad endpoint enumeration
On December 28, 2025, a second campaign began in which two IP addresses systematically enumerated more than 73 LLM endpoints. The activity probed OpenAI-compatible and Google Gemini API formats across major model families using innocuous queries to fingerprint responsive models without raising alerts.
Oct 1, 2025
First campaign uses SSRF-style callbacks against LLM endpoints
In the earlier of the two campaigns, attackers used server-side request forgery techniques to trigger outbound callbacks from exposed LLM services to attacker-controlled infrastructure. GreyNoise assessed the behavior as resembling security research or bug bounty activity, though its scale and timing suggested possible gray-hat behavior.
Oct 1, 2025
GreyNoise begins observing attacks on exposed LLM services
GreyNoise observed two separate campaigns against publicly exposed LLM services through its honeypot beginning in October 2025. The activity was aimed at mapping organizations' public AI attack surface and identifying weaknesses for possible follow-on actions.
See the full picture in Mallory
Mallory subscribers get deeper analysis on every story, including:
Who’s affected and how
Deep-dive technical analysis
Actionable next steps for your team
IPs, domains, hashes, and more
Ask questions and take action on every story
Filter by topic, classification, timeframe
Get matching stories delivered automatically
Related Entities
Vulnerabilities
Malware
Organizations
Affected Products
Sources
Related Stories

Security Risks and Operational Challenges in Large Language Model (LLM) Applications
Organizations deploying large language model (LLM) applications face significant security and operational risks, including unbounded resource consumption, novel attack vectors, and the need for advanced anomaly detection. Attackers can exploit LLMs by submitting massive, compute-intensive requests, leading to "denial of wallet" attacks that can drain cloud budgets and disrupt business operations. The OWASP Top 10 for LLMs highlights unbounded consumption as a critical vulnerability, emphasizing the importance of implementing resource controls and monitoring usage patterns to prevent financial and service impacts. Additionally, the Model Context Protocol (MCP) introduces new security challenges, as traditional rule-based and signature-based systems are inadequate for detecting sophisticated, context-dependent threats targeting LLM infrastructure. To address these evolving risks, security teams are adopting AI-driven anomaly detection and exposure management strategies that prioritize real, exploitable risks over alert volume. The shift from reactive monitoring to proactive observability and context-aware security is essential for protecting LLM-powered platforms. As threat actors increasingly leverage LLMs to enhance their campaigns, defenders must invest in specialized, security-focused LLMs and scalable infrastructure to keep pace with adversaries and safeguard critical AI assets.
1 months ago
Security Implications and Implementation of the Model Context Protocol (MCP) for AI Integrations
The Model Context Protocol (MCP) is emerging as a solution to the complex integration challenges faced by organizations deploying large language models (LLMs) with diverse data sources and tools. MCP aims to standardize the way AI systems interact with external resources, reducing the need for custom connectors and improving scalability. Security considerations are central to MCP's adoption, as integrating AI with sensitive infrastructure and data sources increases the risk of misconfigurations and vulnerabilities. Best practices for MCP implementation include secure authentication, robust error handling, and continuous monitoring of integration points. Recent developments highlight the use of MCP in conjunction with tools like Sysdig's MCP server and Amazon Q Developer, enabling security scanning and posture analysis directly within development environments. By shifting security left, organizations can identify vulnerabilities and misconfigurations in infrastructure as code (IaC) before deployment, reducing the attack surface and preventing cloud breaches. Technical professionals are advised to follow comprehensive guides for MCP deployment, understand common pitfalls, and leverage conversational AI workflows to enhance security throughout the software development lifecycle.
3 weeks ago
Security Advancements and Risks in Model Context Protocol (MCP) Server Deployments
The increasing adoption of Model Context Protocol (MCP) servers to facilitate data access for artificial intelligence (AI) applications has introduced both new opportunities and security challenges for organizations. MCP servers, originally developed by Anthropic, have become a de facto standard for connecting AI models to various data sources, enabling more effective and context-aware processing of information. However, as these servers proliferate across IT environments, they have also emerged as a potential attack surface for cybercriminals seeking to exploit vulnerabilities for data exfiltration and unauthorized access. To address these risks, MCPTotal has launched a Secure MCP Platform that provides a centralized approach to managing and securing MCP server deployments. This platform employs a hub-and-gateway architecture, allowing organizations to catalog, authenticate, and monitor MCP servers through a graphical interface, ensuring only vetted servers are deployed. The Secure MCP Platform also functions as an AI-native firewall, capable of monitoring traffic, enforcing security policies in real time, and surfacing supply chain exposures, prompt injection vulnerabilities, rogue server activity, and authentication gaps. Traditional security tools and even some newer solutions designed for large language models (LLMs) are not equipped to monitor or control MCP-specific traffic, highlighting the need for specialized platforms like MCPTotal’s offering. In parallel, security vendors such as Sysdig and Snyk are leveraging AI-powered approaches to integrate static vulnerability findings with real-time cloud context, using MCP servers to bridge the gap between code-level vulnerabilities and live cloud exposures. This integration enables security teams to prioritize risks based on actual exposure and behavior, rather than being overwhelmed by theoretical vulnerabilities. The use of large language models (LLMs) and MCP servers allows for rapid correlation of security signals across domains, reducing manual effort and improving the accuracy of risk assessments. The dynamic nature of cloud workloads, including ephemeral containers and microservices, further complicates the security landscape, making real-time context and automated policy enforcement essential. By combining advanced AI techniques with secure MCP server management, organizations can better defend against both traditional vulnerabilities and emerging threats targeting AI infrastructure. The evolution of MCP server security reflects a broader trend toward context-aware, AI-driven security solutions that can adapt to the complexities of modern cloud environments. As MCP servers become more integral to AI operations, their security will be critical to maintaining data integrity and preventing sophisticated attacks. The industry’s response, as seen in the launch of secure hosting platforms and the integration of AI-powered risk analysis, demonstrates a proactive approach to safeguarding the next generation of AI-enabled systems. Organizations are encouraged to adopt these new security measures to ensure that the benefits of MCP servers and AI applications are not undermined by preventable security lapses. The convergence of AI, cloud, and secure protocol management marks a significant step forward in the ongoing effort to protect digital assets in an increasingly interconnected world.
1 months ago