Introduction
A newly disclosed zero-click remote code execution vulnerability in Claude Desktop Extensions has revealed a critical security weakness in how modern Large Language Model ecosystems handle trust boundaries. The issue allows attackers to fully compromise a victim’s system using nothing more than a malicious Google Calendar event, with no direct interaction or suspicious prompts required.
The finding raises serious concerns about the safety of autonomous AI agents that are tightly integrated with local operating systems.
Background and Discovery
The vulnerability was uncovered by cybersecurity research firm LayerX, which assigned the issue a maximum CVSS severity score of 10.0. According to the researchers, the flaw impacts more than 10,000 active users and over 50 Claude Desktop Extensions built on the Model Context Protocol framework.
At its core, the issue is not a traditional software bug but a fundamental design failure in how AI-driven workflows chain data sources and execution tools.
Technical Details
Claude Desktop Extensions differ significantly from traditional browser extensions. Instead of operating within sandboxed environments like Chrome extensions, Model Context Protocol servers run locally with full user-level system privileges.
These extensions act as active bridges between the AI model and the host operating system, enabling tasks such as file access, command execution, and system configuration changes.
The lack of sandboxing means that if an extension is manipulated into executing a command, it does so with unrestricted access equivalent to the logged-in user. This includes reading sensitive files, accessing stored credentials, and modifying system settings.
Attack Scenario and Exploitation Flow
LayerX researchers demonstrated a proof-of-concept attack scenario dubbed “Ace of Aces,” which requires no malicious prompts or explicit user approval.
The attack unfolds as follows:
- An attacker creates or injects a Google Calendar event titled “Task Management.”
- The event description contains instructions to clone a malicious Git repository and execute a build script.
- The victim later asks Claude a benign request such as reviewing recent calendar events and handling tasks automatically.
- Claude interprets the instruction as authorization to act on the calendar content.
- The AI agent reads the event description, pulls the attacker-controlled repository, and executes the included script via a high-privilege local extension.
This entire chain occurs without a dedicated confirmation for code execution, resulting in a complete system compromise while the user believes they are performing a routine productivity task.
Root Cause Analysis
The vulnerability stems from a workflow-level trust boundary violation rather than a coding flaw. Claude’s autonomous design allows it to chain low-trust data sources, such as calendars or email, directly into high-trust execution tools.
As noted in the LayerX report, the model lacks contextual awareness that externally sourced data should never be treated as executable instructions. This automatic bridging of benign inputs into privileged execution contexts creates systemic security risks across LLM-driven environments.
Vendor Response
LayerX responsibly disclosed the findings to Anthropic, the developer behind Claude. According to the researchers, Anthropic has chosen not to address the issue at this time, as the behavior aligns with the intended autonomy and interoperability of the Model Context Protocol.
Mitigating the risk would require introducing strict limitations on tool chaining, which could significantly reduce the functionality and flexibility of AI agents.
Impact and Scope
The implications of this vulnerability extend beyond Claude Desktop alone. Any AI agent capable of autonomously chaining external data sources with privileged local actions may be susceptible to similar exploitation techniques.
For affected users, successful exploitation results in full system compromise, data exposure, credential theft, and persistent attacker access without any visible warning signs.
Recommended Mitigations
Until architectural safeguards are introduced, LayerX advises the following defensive measures:
- Treat Model Context Protocol connectors as unsafe in security-sensitive environments.
- Disconnect high-privilege local extensions when using external data connectors such as calendars or email.
- Limit AI agent permissions to the minimum required for functionality.
- Monitor AI-driven workflows for unexpected system-level actions.
Expert Commentary and Outlook
This incident highlights a critical shift in the cybersecurity landscape. As AI assistants evolve from conversational tools into autonomous system operators, the attack surface expands dramatically.
The Claude Desktop zero-click RCE serves as a clear warning that convenience-driven AI automation must be balanced with rigorous security controls. Without strict trust boundary enforcement, AI agents risk becoming powerful new attack vectors rather than productivity enhancers.
The future of AI-assisted computing will depend not only on model intelligence but on secure-by-design architectures that recognize and respect the difference between data and executable intent.
Sources
- Cybersecurity News — Full vulnerability write-up:
https://cybersecuritynews.com/claude-desktop-extensions-0-click-vulnerability/ - InfoSecurity Magazine — Zero-click flaw details:
https://www.infosecurity-magazine.com/news/zeroclick-flaw-claude-dxt/ - Security Boulevard — LayerX report explanation:
https://securityboulevard.com/2026/02/flaw-in-anthropic-claude-extensions-can-lead-to-rce-in-google-calendar-layerx/ - CSO Online — Anthropic’s response and security context:
https://www.csoonline.com/article/4129820/anthropics-dxt-poses-critical-rce-vulnerability-by-running-with-full-system-privileges.html



