Seems like every day there's another compelling reason to switch to Linux. Microsoft is doing truly incredible work this year!
Is this a real bug, or is it a "let's train on more emails" move disguised as carelessness?
I assume that whatever is processed by the AI service is generally retained for product improvements (training).
Microsoft somehow sees a future where LLMs have access to everything on your screen. In that dystopia, adding "confidential" tags or prompt instructions to ignore some types of content is never going to be enough. If you don't want LLMs to exfiltrate content, then they cannot have access to it, period.
> However, this ongoing incident has been tagged as an advisory, a flag commonly used to describe service issues typically involving limited scope or impact.
How is having Copilot breach trust and privacy an “advisory”? Am I missing something?
Reads to me like it's not accessing other users' mailboxes; it's just accessing the current user's mailbox (like it's meant to), but it's supposed to ignore the current user's emails that have a 'confidential' flag, and that bit had a bug.
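In rough terms, the missing piece sounds like a sensitivity-label filter along these lines. A minimal Python sketch, where the Message type and label names are made up for illustration and are not Microsoft's actual Graph API:

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical label names; real tenants define their own sensitivity labels.
    CONFIDENTIAL_LABELS = {"Confidential", "Highly Confidential"}

    @dataclass
    class Message:
        subject: str
        body: str
        sensitivity_label: Optional[str] = None

    def messages_visible_to_assistant(mailbox):
        # Drop anything carrying a confidentiality label before the assistant sees it.
        return [m for m in mailbox if m.sensitivity_label not in CONFIDENTIAL_LABELS]

    inbox = [
        Message("Lunch?", "Tacos at noon?"),
        Message("Q3 plans", "Do not forward", sensitivity_label="Confidential"),
    ]
    assert [m.subject for m in messages_visible_to_assistant(inbox)] == ["Lunch?"]

If a check like this lives anywhere other than the data-access layer itself, a single code path that forgets to call it produces exactly the behavior being reported.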
Microsoft deploying buggy software is hardly news.
All these government contractors are forced to pay astronomical cloud bills to get "GCC-High" because it passes the right security-theater checklist, and then it totally ignores the DLP settings anyway!
This highlights a fundamental challenge with AI assistants: they need broad access to be useful, but that access is hard to scope correctly.
The bug is fixable, but the underlying tension—giving AI tools enough permissions to help while respecting confidentiality boundaries—will keep surfacing in different forms as these tools become more capable.
We're essentially retrofitting permission models designed for human users onto AI agents that operate very differently.
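One way to picture that retrofit: a human-user ACL only answers "can this identity do X?", while a delegated agent needs a second, narrower question answered as well. A rough Python sketch, with all permission names invented for illustration rather than taken from any real product:

    # The human user's permissions, as a classic ACL would record them.
    USER_PERMISSIONS = {"read:mail", "read:mail.confidential", "send:mail"}

    # The assistant's scope is a deliberate subset of what the user can do.
    AGENT_SCOPE = {"read:mail"}

    def user_can(action: str) -> bool:
        return action in USER_PERMISSIONS

    def agent_can(action: str) -> bool:
        # The agent must pass BOTH checks; merely inheriting the user's identity
        # (the retrofitted model) would let every user-permitted action through.
        return user_can(action) and action in AGENT_SCOPE

    assert user_can("read:mail.confidential")       # the human may read it
    assert not agent_can("read:mail.confidential")  # the agent may not

The hard part isn't the check itself; it's deciding what belongs in the agent's scope for a tool whose usefulness comes from reading as much as possible.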
More and more, I notice a bug in my mouth that keeps trying to encourage my boss to cancel Microsoft 365. I haven't found the root cause yet.
"...including messages that carry confidentiality labels."
Trusted operating system Mandatory Access Control, where art thou?
An exemplary BaaF corporation (Bug as a Feature).
Calling it a bug is generous. The whole point of these tools is to read everything you have access to. The 'bug' is that it worked exactly as designed, but on the wrong emails.
Oh, poor desperate Microsoft. No amount of bug fixing is going to fix Microsoft. Now that they've embarked on the LLM journey, they're not going to know what's going to hit them next.
Why was this bug not found in testing?
Initial date of issue: 3rd Feb 2026
AI is such garbage. There is considerable overlap between the security practices of AI and those of the slowest interns in the office.
None of this should surprise anyone by now. You are being lied to, continually.
You guys need to read the actual manifestos these AI leaders have written. And if not them, then read the propagandist stories they have others write, like The Overstory by Richard Powers, which is an arrogant pile of trash that culminates in the moral:
humans are horrible and obsolete and all should die and leave the earth for our new AI child
Which is of course, horseshit. They just want most people to die off, not all. And certainly not themselves.
They don't care about your confidential information, or anything else about you.
I'm shocked. Shocked!
There are two issues I see here (besides the obvious “Why do we even let this happen in the first place?”):
1. What happened to all the data Copilot trained on that was confidential? How is that data separated and deleted from the model’s training? How can we be sure it’s gone?
2. This issue was found; unfortunately, without a much better security posture from Microsoft, we have no way of knowing what issues are currently lurking that are as bad as, if not worse than, what happened here.
There’s a serious, fundamental flaw in the thinking and misguided incentives that led to “sprinkle AI everywhere”, and instead of taking a step back and rethinking that approach, we’re going to get pieced-together fixes and still be left with the foundational problem that everyone’s data is just one prompt injection away from being taken, whether it’s labeled as “secure” or not.