Security
Headlines
HeadlinesLatestCVEs

Headline

EchoLeak Zero-Click AI Attack in Microsoft Copilot Exposes Company Data

Aim Labs uncovers EchoLeak, a zero-click AI flaw in Microsoft 365 Copilot that allows data theft via email. Learn how this vulnerability enables sensitive information exfiltration without user interaction and its implications for AI security.

HackRead
#vulnerability#microsoft

Cybersecurity firm Aim Labs has uncovered a serious new security problem, named EchoLeak, affecting Microsoft 365 (M365) Copilot, a popular AI assistant. This flaw is a zero-click vulnerability, meaning attackers can steal sensitive company information without user interaction.

Aim Labs has shared details of this vulnerability and how it can be exploited with Microsoft’s security team, and so far, it is not aware of any customers being affected by this new threat.

****How “EchoLeak” Works: A New Kind of AI Attack****

For your information, M365 Copilot is a RAG-based chatbot, which means it gathers information from a user’s company environment like emails, files on OneDrive, SharePoint sites, and Teams chats to answer questions. While Copilot is designed to only access files the user has permission for, these files can still hold private or secret company data.

The main issue with EchoLeak is a new type of attack Aim Labs calls LLM Scope Violation. This happens when an attacker’s instructions, sent in an untrusted email, make the AI (the Large Language Model, or LLM) wrongly access private company data. It essentially makes the AI break its own rules of what information it should be allowed to touch. Aim Labs describes this as an “underprivileged email” somehow being able to “relate to privileged data.”

The attack simply starts when the victim receives an email, cleverly so written that it looks like instructions for the person receiving it, not for the AI. This trick helps it get past Microsoft’s security filters, called XPIA classifiers, which stop harmful AI instructions. Once the email is read by Copilot, it can then be tricked into sending sensitive information out of the company’s network.

Attack Flow (Source: Aim Labs)

Aim Labs explained that to get the data out, they had to find ways around Copilot’s defences, like its attempts to hide external links and control what data could be sent out. They found clever methods using how links and images are handled, and even how SharePoint and Microsoft Teams manage URLs, to secretly send data to the attacker’s server. For example, they found a way where a specific Microsoft Teams URL could be used to fetch secret information without any user action.

****Why This Matters****

This discovery shows that general design problems exist in many AI chatbots and agents. Unlike earlier research, Aim Labs has shown a practical way this attack could be used to steal very sensitive data. The attack doesn’t even need the user to engage in a conversation with Copilot.

Aim Labs also discussed RAG spraying for attackers to get their malicious emails picked up by Copilot more often, even when users ask about different topics, by sending very long emails broken into many pieces, increasing the chance one piece will be relevant to a user’s query. For now, organizations using M365 Copilot should be aware of this new type of threat.

Ensar Seker, CISO at SOCRadar, warns that Aim Labs’ EchoLeak findings reveal a major AI security gap. The exploit shows how attackers can exfiltrate data from Microsoft 365 Copilot with just an email, requiring no user interaction. By bypassing filters and exploiting LLM scope violations, it highlights deeper risks in AI agent design.

Seker urges organizations to treat AI assistants like critical infrastructure, apply stricter input controls, and disable features like external email ingestion to prevent abuse.

HackRead: Latest News

EchoLeak Zero-Click AI Attack in Microsoft Copilot Exposes Company Data