Mallory

Anthropic Expands Claude With Enterprise Plugins and Integrated Security Capabilities

ai-platform-security
Updated March 21, 2026 at 02:17 PM · 3 sources


Anthropic rolled out expanded Claude Cowork capabilities, adding enterprise workflow plugins intended to push agentic AI beyond software development into functions such as marketing, HR, legal, and finance, positioning Claude as a broader automation layer inside organizations. Coverage characterized the move as part of a wider shift toward AI-driven workflows in the enterprise, with implications for CIO governance, adoption patterns, and how teams operationalize AI outside engineering.

In related commentary on the same product direction, analysts highlighted Anthropic formalizing security-oriented features inside Claude—including a prominent “suggest fix” capability aimed at moving from vulnerability detection to automated or semi-automated remediation—and speculated about competitive pressure on certain security-tool segments, particularly code-vulnerability discovery and remediation tooling. Other items in the set were not incident- or vulnerability-driven: one was generic SAST remediation guidance, and several were general-interest or business-trend pieces (e.g., an AI model blogging, AI-native software market dynamics, and an AI-agent governance/AX article) without specific, actionable cybersecurity event details.

Timeline

  1. Feb 27, 2026

    Anthropic formalizes Claude security and code-fix capabilities

    Analysis published the next day said Anthropic had formalized security capabilities already being used informally by researchers, including loading repositories into Claude to find vulnerabilities and propose fixes. The report highlighted Claude's context-aware “suggest fix” functionality as a notable step toward automated remediation.

  2. Feb 26, 2026

    Commentary highlights security and governance risks of Claude Cowork adoption

    Reporting on the announcement emphasized that rapidly created AI coworker workflows introduce security and governance risks, including prompt injection, malicious workflow logic, unsafe API access, abuse of MCP connectors, and malicious instructions in shared repositories. Recommended mitigations included treating plugin libraries like production code, applying access controls, vetting reusable workflows, and defining permission tiers.

  3. Feb 26, 2026

    Anthropic announces additional Claude Cowork plugins for enterprise workflows

    Anthropic announced new Claude Cowork plugins to extend agentic AI beyond software development into enterprise functions such as marketing, HR, legal, and finance. The plugins were described as configurable natural-language skills that can connect to systems like Salesforce and execute multi-step business processes.
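The mitigations recommended in the timeline above—treating plugin libraries like production code, vetting reusable workflows, and defining permission tiers—can be sketched as a simple policy check. The tier names, action names, and `PluginWorkflow` structure below are illustrative assumptions, not part of any Anthropic API:

```python
from dataclasses import dataclass, field

# Hypothetical permission tiers for AI-coworker plugin workflows.
# Tier and action names are assumptions for illustration only.
TIERS = {
    "read_only": {"read_files"},
    "standard": {"read_files", "call_approved_apis"},
    "privileged": {"read_files", "call_approved_apis", "write_files", "send_email"},
}

@dataclass
class PluginWorkflow:
    name: str
    tier: str
    requested_actions: set = field(default_factory=set)
    # "Treat plugin libraries like production code": require review before use.
    reviewed: bool = False

def vet(workflow: PluginWorkflow) -> list:
    """Return policy violations; an empty list means the workflow passes vetting."""
    violations = []
    if not workflow.reviewed:
        violations.append("workflow has not passed code review")
    allowed = TIERS.get(workflow.tier, set())
    for action in sorted(workflow.requested_actions):
        if action not in allowed:
            violations.append(f"action '{action}' exceeds tier '{workflow.tier}'")
    return violations
```

A gate like this would run before a shared workflow is published to a plugin library, rejecting workflows that request actions beyond their assigned tier or that skipped review.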


Sources

February 26, 2026 at 12:00 AM

Related Stories

Anthropic Expands Claude’s Agentic Coding Capabilities and Adds Embedded Vulnerability Scanning


Anthropic announced **Claude Code Security**, an embedded capability in *Claude Code* that scans customer codebases for vulnerabilities and suggests patches, initially rolling out to a limited set of enterprise/team customers for testing. The company said the feature was stress-tested via internal red-teaming, Capture-the-Flag exercises, and collaboration with **Pacific Northwest National Laboratory**, and positioned it as a way to reduce reliance on manual security reviews as AI-assisted “vibe coding” increases and attackers also use AI to accelerate weakness discovery. In parallel, Anthropic released **Claude Sonnet 4.6**, emphasizing improved coding performance, stronger “computer use” capabilities, and expanded developer tooling (e.g., adaptive/extended thinking modes, beta context compaction, and API tools for web search/fetch and code execution). Separate commentary highlighted the security risk of **agentic coding assistants** (e.g., *Claude Code*, *Cursor*, *GitHub Copilot*) operating with broad privileges—file access, shell execution, and secret handling—and argued that the emerging **Model Context Protocol (MCP)** ecosystem needs stronger, future-proof identity controls; additional industry guidance promoted **MLSecOps** as a way to integrate security into AI/ML development lifecycles, though it did not report a specific incident or vulnerability.

3 weeks ago
AI Adoption and Agentic AI Features Raise Security and Governance Concerns


U.S. public-sector and industry reporting highlighted that **security confidence and workforce constraints** are emerging as major blockers to scaling artificial intelligence. A survey commissioned by *Google Public Sector* found most federal respondents are already using or planning to use AI, but only a small minority report completed AI adoption plans; respondents cited declining confidence in their agencies’ digital security posture, legacy technology exposure, procurement friction, and skills shortages as key impediments to moving beyond pilots. Separately, *Anthropic* introduced a research-preview “agentic” capability, **Cowork for Claude**, built on *Claude Code*, which can execute multi-step tasks with access to local folders and optional connectors (including browser-based workflows). Anthropic warned that ambiguous instructions or misinterpretation could result in **potentially destructive actions** (e.g., deleting local files) despite confirmation prompts for “significant actions,” underscoring the need for tighter controls when granting AI tools operational access. Other items in the set focused on broader AI discourse and geopolitics—Nvidia CEO Jensen Huang disputing “god AI” narratives and a Lawfare analysis of China’s AI capacity-building diplomacy—rather than specific cybersecurity events or actionable security findings.

1 month ago
Industry Debate and Reporting on Agentic AI in Cybersecurity


Security and technology commentary is increasingly focused on **agentic AI**—autonomous or semi-autonomous AI systems that can execute multi-step workflows—and what that means for both defenders and attackers. One perspective argues the market is moving past broad “autonomous SOC” promises toward **purpose-built AI agents** designed for narrowly scoped, measurable security tasks (e.g., phishing detection, incident simulation, SOC triage), emphasizing operational deployment and clear success metrics rather than demos. Separately, a vendor blog post claims Anthropic disclosed what it describes as the **first autonomous AI-driven cyberattack**, in which attackers allegedly impersonated a cybersecurity firm and used *Claude Code* and the **Model Context Protocol (MCP)** with a custom orchestration framework to decompose and execute multi-stage intrusion activity, with AI completing most tasks and humans intervening only at a few decision points. A ZDNET piece is largely a high-level discussion about generative AI’s impact on thinking and leadership, with only general references to “machine-speed cyber threats,” and does not materially add incident-level or technical detail to the agentic-AI-in-cybersecurity narrative.

1 month ago

