CVE-2025-32711, the Beginning of a New Class of Zero Click Prompt Injection Vulnerabilities?
.png)
*Important to note that "prompt injection" and "command injection" seem to be used synonymously right now. So don't let that confuse you.
Anytime there is an interface that automatically processes outside requests there is potential for zero-click vulnerabilities. Given the proliferation of AI Agents the zero-click attack surface seems massive.
In Microsoft 365 Copilot, the AI assistant integrated across Word, Outlook, Teams, and Excel, the flaw known as "EchoLeak" shows how AI-powered tools can unintentionally create new vulnerabilities by interpreting natural language inputs as privileged commands. CVE-2025-32711 is of particular interest because it is a relatively novel vulnerability in an AI system that automatically processes emails, resulting in non-public data exposure via an AI agent without any user interaction; typically exploits delivered via email require a user to click a link or download an attachment.
This vulnerability is exploited by sending a maliciously crafted email that is automatically processed by Copilot, which then executes the attacker’s input in a privileged context. The attacker does not require credentials or access to the target system. Instead, they exploit Copilot’s underlying command-handling logic; triggered through embedded prompts in routine business content such as emails or documents.
As new AI agents are developed and adopted to automate workflows, summarize communications, and access internal data, these kinds of exploits will likely continue to rise.
The vulnerability stems from insufficient input validation in the AI command handling logic of Microsoft 365 Copilot. Attackers can exploit this by crafting inputs that manipulate the AI's processing, leading to the execution of unauthorized commands. This can result in the disclosure of sensitive information processed or stored within Microsoft 365 Copilot.
The email bypasses the "XPIA" Cross-Prompt Injection Attack classifiers by disguising themselves as instructions involving a user query of Outlook, OneDrive, SharePoint, and/or Teams data in Copilot's context. The prompt instructs Copilot to send the query results to the attacker's server.
Exploitation of this vulnerability can lead to unauthorized access to sensitive data, including emails, documents, and other confidential information. The zero-click nature of the attack increases its severity, as it requires no user interaction.
Vulnerabilities like this show why penetration tests and/or security assessments are so important for your AI agent.