Skip to main content
Mallory

Research on Defending and Exploiting LLMs via Jailbreak and Prompt-Manipulation Techniques

ai-platform-securityinitial-access-methoddefense-evasion-method
Updated March 21, 2026 at 02:53 PM2 sources
Share:
Research on Defending and Exploiting LLMs via Jailbreak and Prompt-Manipulation Techniques

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Recent research highlights how LLM jailbreak and prompt-manipulation attacks can bypass safety controls, especially in multi-turn conversations where adversaries gradually escalate requests to elicit harmful or policy-violating output. A proposed defense framework, HoneyTrap, aims to counter these attacks with a multi-agent approach that goes beyond static filtering or supervised fine-tuning by using adaptive, deceptive responses intended to slow attackers and deny actionable information rather than simply refusing requests.

Separately, technical analysis of the LLM input-processing pipeline (tokenization, embeddings, attention, and context-window behavior) explains why common guardrails like keyword filters can fail and how attackers can exploit architectural properties (including Query-Key-Value attention dynamics) to steer model behavior. The research describes common offensive techniques—prompt injection, jailbreaking, and adversarial suffixes—and frames them as practical risks for enterprise deployments, particularly public-facing chatbots and other systems where organizations cannot fully control user input.

Timeline

  1. Jan 13, 2026

    SentinelOne details how modern LLM attacks exploit transformer internals

    SentinelOne published an analysis explaining how attacks on large language models exploit tokenization, embeddings, context windows, and self-attention to bypass safeguards. The post described attack classes including prompt injection, jailbreaking, adversarial suffixes, and gradient-based methods such as GCG, and reviewed mitigations like randomized smoothing, suffix filtering, and adversarial training.

  2. Jan 13, 2026

    Researchers develop HoneyTrap to counter LLM jailbreak attacks

    Researchers from Shanghai Jiao Tong University, the University of Illinois at Urbana-Champaign, and Zhejiang University proposed HoneyTrap, a multi-agent defense framework designed to deceive and mislead jailbreak attackers rather than only block requests. Reported testing across GPT-4, GPT-3.5-turbo, Gemini-1.5-pro, and LLaMa-3.1 showed reduced attack success rates and increased attacker effort.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

LLM Guardrail Bypass and Prompt Injection Weaknesses

LLM Guardrail Bypass and Prompt Injection Weaknesses

Multiple writeups describe how **LLM safety controls can be bypassed through prompt-based attacks**, arguing that jailbreaks and prompt injection are a practical security problem rather than a novelty. The reporting highlights common defense layers—training-time alignment, system prompts, input classifiers, and output filters—and says each can fail because the same model that follows instructions is also asked to interpret and enforce them. One article frames jailbreaks as an attack on the trust architecture of enterprise AI deployments, while the other demonstrates the issue through Lakera’s *Gandalf* challenge, where progressively stronger controls are still defeated by prompt manipulation. The material is **not fluff** because it provides substantive security analysis of an emerging attack class affecting AI systems. Both references focus on the same topic: how prompts can subvert LLM defenses, expose protected information, and reveal architectural weaknesses in current guardrail designs. The practical takeaway for defenders is that natural-language controls alone are brittle, especially when secrets, policy enforcement, and user-controlled input share the same inference path, making prompt injection and jailbreak resistance a core application security concern for enterprise AI deployments.

1 months ago
Prompt Injection and Jailbreak Techniques Targeting LLM-Powered Applications

Prompt Injection and Jailbreak Techniques Targeting LLM-Powered Applications

Security researchers and vendors are warning that **prompt injection and jailbreak techniques** remain a leading risk for enterprise deployments of large language models (LLMs), enabling attackers to override system instructions, bypass safety controls, and potentially drive **data exposure** outcomes. Resecurity reports assisting a Fortune 100 organization where AI-powered banking and HR applications were targeted with prompt-injection attempts, emphasizing that these attacks exploit model behavior rather than traditional software flaws and can be used in scenarios such as extracting sensitive configuration data (for example, attempts to elicit content resembling `/etc/passwd`). Resecurity also cites OWASP’s 2025 Top 10 for LLM Applications, where prompt injection is ranked as the top issue, and frames continuous security testing (e.g., VAPT) as a key control for enterprise AI systems. Separate research highlighted by Kaspersky describes a **“poetry” jailbreak** technique in which prompts framed as rhyming verse increased the likelihood that chatbots would produce disallowed or unsafe responses; the study tested this approach across 25 models from multiple vendors (including Anthropic, OpenAI, Google, Meta, DeepSeek, and xAI). In contrast, OpenAI’s planned upgrade to *ChatGPT Temporary Chat* is primarily a product/privacy change—adding optional personalization while keeping temporary chats out of history and model training (with possible retention for up to 30 days)—and does not describe a specific security incident or vulnerability disclosure tied to prompt injection or jailbreak research.

3 days ago
Prompt Injection and Jailbreak Attacks on Large Language Models

Prompt Injection and Jailbreak Attacks on Large Language Models

Recent research has demonstrated that large language models (LLMs) such as GPT-5 and others are increasingly vulnerable to prompt injection and jailbreak attacks, which can be exploited to bypass built-in safety guardrails and leak sensitive information. Attackers use techniques like prompt injection—embedding malicious instructions within seemingly benign queries—to trick LLMs into revealing confidential data, including user credentials and internal documents. A notable study by Icaro Lab, in collaboration with Sapienza University and DEXAI, found that adversarial prompts written as poetry could successfully bypass safety mechanisms in 62% of tested cases across 25 frontier models, with some models exceeding a 90% success rate. These findings highlight the sophistication and creativity of new attack vectors targeting AI systems, raising significant concerns for organizations embedding LLMs into business operations. The widespread adoption of LLMs in handling sensitive business functions amplifies the risk of data exfiltration through these advanced attack methods. As organizations increasingly rely on AI for customer service, document processing, and other critical tasks, the potential for prompt injection and poetic jailbreaks to facilitate unauthorized data access becomes a pressing security issue. The research underscores the urgent need for improved AI safety measures, robust prompt filtering, and continuous monitoring to mitigate the risks posed by these evolving adversarial techniques.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.

Research on Defending and Exploiting LLMs via Jailbreak and Prompt-Manipulation Techniques | Mallory