Blog > CVE-2025-32711, the Beginning of a New Class of Zero Click Prompt Injection Vulnerabilities?

CVE-2025-32711, the Beginning of a New Class of Zero Click Prompt Injection Vulnerabilities?

CVE 2025-32711 – Zero Click AI Prompt Injection in Microsoft 365 Copilot ("EchoLeak")

  • Reported: June 25, 2025
  • CVSS Score: 9.8 (Critical)
  • CVSS Vector: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  • CWE: CWE-77 – Improper Neutralization of Special Elements used in a Command/Prompt ('Command/ Prompt Injection')
  • Product: Microsoft 365 Copilot
  • Vulnerability Type: AI Prompt/Command Injection
*Important to note that "prompt injection" and "command injection" seem to be used synonymously right now. So don't let that confuse you.

Is CVE 2025-32711 the Beginning of a New Class of Zero-Click Critical Prompt Injection Vulnerabilities?

Anytime there is an interface that automatically processes outside requests there is potential for zero-click vulnerabilities. Given the proliferation of AI Agents the zero-click attack surface seems massive.

In Microsoft 365 Copilot, the AI assistant integrated across Word, Outlook, Teams, and Excel, the flaw known as "EchoLeak" shows how AI-powered tools can unintentionally create new vulnerabilities by interpreting natural language inputs as privileged commands. CVE-2025-32711 is of particular interest because it is a relatively novel vulnerability in an AI system that automatically processes emails, resulting in non-public data exposure via an AI agent without any user interaction; typically exploits delivered via email require a user to click a link or download an attachment.

This vulnerability is exploited by sending a maliciously crafted email that is automatically processed by Copilot, which then executes the attacker’s input in a privileged context. The attacker does not require credentials or access to the target system. Instead, they exploit Copilot’s underlying command-handling logic; triggered through embedded prompts in routine business content such as emails or documents.

As new AI agents are developed and adopted to automate workflows, summarize communications, and access internal data, these kinds of exploits will likely continue to rise.

CVE 2025-32711 Vulnerability Details

How the Exploit Works

The vulnerability stems from insufficient input validation in the AI command handling logic of Microsoft 365 Copilot. Attackers can exploit this by crafting inputs that manipulate the AI's processing, leading to the execution of unauthorized commands. This can result in the disclosure of sensitive information processed or stored within Microsoft 365 Copilot.

The email bypasses the "XPIA" Cross-Prompt Injection Attack classifiers by disguising themselves as instructions involving a user query of Outlook, OneDrive, SharePoint, and/or Teams data in Copilot's context. The prompt instructs Copilot to send the query results to the attacker's server.

Risk and Impact

  • Attack Vector: Network – Remote over HTTP
  • Attack Complexity: Low
  • Privileges Required: None
  • User Interaction: None
  • Confidentiality: High – Sensitive data disclosure
  • Integrity: Low - Potential data manipulation

Exploitation of this vulnerability can lead to unauthorized access to sensitive data, including emails, documents, and other confidential information. The zero-click nature of the attack increases its severity, as it requires no user interaction.

How to Prevent Similar Zero-Click Vulnerabilities When Developing AI Products

  • Implement Input Validation: This one is the most obvious. Ensure robust validation and sanitization of all inputs to AI components to prevent command injection.
  • Monitor and Log AI Activities: Enable detailed logging of AI command inputs and outputs to detect suspicious activity.
  • Educate Developers: Inform developers about the potential risks of malicious inputs and automatic processing.
  • Restrict External Data Flows by Default: Implement controls that prevent AI systems, especially those with internal data access, from sending output or interpreted content to external IPs or third party domains, unless explicitly authorized; this reduces the impact of prompt-based data exfiltration.
  • Human in the Loop: For more sensitive workloads and/or medium to high risk actions (like sending data to external servers) keep a human in the workflow to manually review or approve AI generated actions before execution.

Additional Resources

Shameless Plug

Vulnerabilities like this show why penetration tests and/or security assessments are so important for your AI agent.

Share this post