Sneaky Mermaid attack in Microsoft 365 Copilot steals data
(2025/10/24)
- Reference: 1761332312
- News link: https://www.theregister.co.uk/2025/10/24/m365_copilot_mermaid_indirect_prompt_injection/
- Source link:
Microsoft fixed a security hole in Microsoft 365 Copilot that allowed attackers to trick the AI assistant into stealing sensitive tenant data – like emails – via indirect prompt injection attacks.
But the researcher who found and reported the bug to Redmond won't get a bug bounty payout, as Microsoft determined that M365 Copilot isn't in-scope for the vulnerability reward program.
The attack uses [1]indirect prompt injection – embedding malicious instructions into a prompt that the model can act upon, as opposed to direct prompt injection, which involves someone directly submitting malicious instructions to an AI system.
[2]
Researcher Adam Logue discovered the data-stealing exploit, which abuses M365 Copilot's built-in support for Mermaid diagrams, a JavaScript-based tool that allows users to generate diagrams in using text prompts.
[3]
[4]
In addition to integrating with M365 Copilot, Mermaid diagrams also [5]support CSS .
"This opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a mermaid diagram on the fly and can include data retrieved from other tools in the diagram," Logue [6]wrote in a blog about the bug and how to exploit it.
[7]
As a proof of concept, Logue asked M365 Copilot to summarize a specially crafted financial report document with an indirect prompt injection payload hidden in the seeming innocuous "summarize this document" prompt.
The payload uses M365 Copilot's search_enterprise_emails tool to fetch the user's recent emails, and instructs the AI assistant to generate a bulleted list of the fetched contents, hex encode the output, and split up the string of hex-encoded output into multiple lines containing up to 30 characters per line.
[8]Clippy rises from the dead in major update to Copilot and its voice interface
[9]OpenAI goes after Microsoft 365 Copilot's lunch with 'company knowledge' feature
[10]Prompt injection – and a $5 domain – trick Salesforce Agentforce into leaking sales
[11]Amazon quietly fixed Q Developer flaws that made AI agent vulnerable to prompt injection, RCE
Logue then exploited M365 Copilot's Mermaid integration to generate a diagram that looked like a login button, plus a notice that the documents couldn't be viewed unless the user clicked the button. This fake login button contained CSS style elements with a hyperlink to an attacker-controlled server – in this case, Logue's Burp Collaborator server.
When a user clicked the button, the hex-encoded tenant data – in this case, a bulleted list of recent emails – was sent to the malicious server. From there, an attacker could decode the data and do all the nefarious things criminals do with stolen data, like sell it to other crims, extort the victim for its return, uncover account numbers and/or credentials inside the messages, and other super fun stuff - if you are evil.
Logue reported the flaw to Microsoft, and Redmond told him it patched the vulnerability, which he verified by trying the attack again and failing. But the decision-makers on such things also determined that M365 Copilot was out-of-scope for its bug-bounty program, and therefore not eligible for a reward.
[12]
The Register asked Microsoft for more details about the patch and the out-of-scope determination, and will update this story if and when we receive a response. ®
Get our [13]Tech Resources
[1] https://www.theregister.com/2025/09/26/salesforce_agentforce_forceleak_attack/
[2] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=2&c=2aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0
[3] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0
[4] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0
[5] https://mermaid.js.org/config/directives.html
[6] https://www.adamlogue.com/microsoft-365-copilot-arbitrary-data-exfiltration-via-mermaid-diagrams-fixed/
[7] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0
[8] https://www.theregister.com/2025/10/24/microsoft_clippy_copilot_update/
[9] https://www.theregister.com/2025/10/24/openai_chatgpt_company_knowledge/
[10] https://www.theregister.com/2025/09/26/salesforce_agentforce_forceleak_attack/
[11] https://www.theregister.com/2025/08/20/amazon_quietly_fixed_q_developer_flaws/
[12] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0
[13] https://whitepapers.theregister.com/
But the researcher who found and reported the bug to Redmond won't get a bug bounty payout, as Microsoft determined that M365 Copilot isn't in-scope for the vulnerability reward program.
The attack uses [1]indirect prompt injection – embedding malicious instructions into a prompt that the model can act upon, as opposed to direct prompt injection, which involves someone directly submitting malicious instructions to an AI system.
[2]
Researcher Adam Logue discovered the data-stealing exploit, which abuses M365 Copilot's built-in support for Mermaid diagrams, a JavaScript-based tool that allows users to generate diagrams in using text prompts.
[3]
[4]
In addition to integrating with M365 Copilot, Mermaid diagrams also [5]support CSS .
"This opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a mermaid diagram on the fly and can include data retrieved from other tools in the diagram," Logue [6]wrote in a blog about the bug and how to exploit it.
[7]
As a proof of concept, Logue asked M365 Copilot to summarize a specially crafted financial report document with an indirect prompt injection payload hidden in the seeming innocuous "summarize this document" prompt.
The payload uses M365 Copilot's search_enterprise_emails tool to fetch the user's recent emails, and instructs the AI assistant to generate a bulleted list of the fetched contents, hex encode the output, and split up the string of hex-encoded output into multiple lines containing up to 30 characters per line.
[8]Clippy rises from the dead in major update to Copilot and its voice interface
[9]OpenAI goes after Microsoft 365 Copilot's lunch with 'company knowledge' feature
[10]Prompt injection – and a $5 domain – trick Salesforce Agentforce into leaking sales
[11]Amazon quietly fixed Q Developer flaws that made AI agent vulnerable to prompt injection, RCE
Logue then exploited M365 Copilot's Mermaid integration to generate a diagram that looked like a login button, plus a notice that the documents couldn't be viewed unless the user clicked the button. This fake login button contained CSS style elements with a hyperlink to an attacker-controlled server – in this case, Logue's Burp Collaborator server.
When a user clicked the button, the hex-encoded tenant data – in this case, a bulleted list of recent emails – was sent to the malicious server. From there, an attacker could decode the data and do all the nefarious things criminals do with stolen data, like sell it to other crims, extort the victim for its return, uncover account numbers and/or credentials inside the messages, and other super fun stuff - if you are evil.
Logue reported the flaw to Microsoft, and Redmond told him it patched the vulnerability, which he verified by trying the attack again and failing. But the decision-makers on such things also determined that M365 Copilot was out-of-scope for its bug-bounty program, and therefore not eligible for a reward.
[12]
The Register asked Microsoft for more details about the patch and the out-of-scope determination, and will update this story if and when we receive a response. ®
Get our [13]Tech Resources
[1] https://www.theregister.com/2025/09/26/salesforce_agentforce_forceleak_attack/
[2] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=2&c=2aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0
[3] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0
[4] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0
[5] https://mermaid.js.org/config/directives.html
[6] https://www.adamlogue.com/microsoft-365-copilot-arbitrary-data-exfiltration-via-mermaid-diagrams-fixed/
[7] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0
[8] https://www.theregister.com/2025/10/24/microsoft_clippy_copilot_update/
[9] https://www.theregister.com/2025/10/24/openai_chatgpt_company_knowledge/
[10] https://www.theregister.com/2025/09/26/salesforce_agentforce_forceleak_attack/
[11] https://www.theregister.com/2025/08/20/amazon_quietly_fixed_q_developer_flaws/
[12] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_security/cybercrime&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aPv281cnEyASahARUBFkgQAAARU&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0
[13] https://whitepapers.theregister.com/
Re: Prompt
Like a badger
What, like the scammers at Microsoft who trick people into hand over bug reports, take and use the data and then go "Not paying you, hahahahahhahaaa! Looooser! Loooser!"
As Copilot is out of scope
Richard 12
One assumes it's insecure by design, with more holes than a colander that's been snapped in half.
Prompt
Obviously Microsoft forgot to add to the prompt: "Be aware of scammers and don't let them easily trick you into handing over the data."