Yesterday ProPublica and ArsTechnica published a takedown of Azure: "Federal cyber experts called Microsoft’s cloud a “pile of shit,” approved it anyway" ...
https://arstechnica.com/information-technology/2026/03/feder...
> Having done a fair bit of logging to databases with various scripts, I believe this was a simple matter of overflowing the SQL column length for a field, causing the entire INSERT to fail. This is a common beginner mistake when you first start to work with databases.
I'm not sure if I understand this part. I'm trying to put it into my own words. Is the following correct? The attacker provided an input that was so long, that it was rejected by the database. And the program that submitted the SQL query to the database did not have any logic for handling a query failure, which is why there is no trace of the login attempt in the log or elsewhere.
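Yes, that's the idea. A minimal sketch of that failure mode (hypothetical code, not the actual Azure implementation; SQLite doesn't enforce VARCHAR lengths, so a CHECK constraint stands in for the fixed column width of a real audit table):

```python
import sqlite3

# In-memory database with an "audit" table whose username column has a
# maximum length, mimicking a fixed-width SQL column.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE login_audit (
        username TEXT CHECK (length(username) <= 32),
        outcome  TEXT
    )
""")

def log_login_attempt(username, outcome):
    # The bug: the INSERT can fail, and the failure is silently swallowed,
    # so an over-long (attacker-supplied) username leaves no trace at all.
    try:
        db.execute("INSERT INTO login_audit VALUES (?, ?)", (username, outcome))
        db.commit()
    except sqlite3.IntegrityError:
        pass  # the failed query is not handled or re-logged anywhere

log_login_attempt("alice", "failure")    # recorded normally
log_login_attempt("A" * 500, "failure")  # rejected by the DB, vanishes

rows = db.execute("SELECT username FROM login_audit").fetchall()
print(rows)  # only the short username is present: [('alice',)]
```

So the attacker's overlong input makes the audit INSERT itself fail, and because nothing catches and re-records that failure, the login attempt never appears in the log.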
IIRC (and I don't remember if I ever reported it), Azure's audit logs don't reflect reality when you delete a client secret from the UI, either.

If I remember the issue right, we lost a client secret (it just vanished!) and I went to the audit logs to see who dun it. According to the logs, I had done it. And yet, I also knew that I had not done it.
I eventually reconstructed the bug to an old page load. I had the page loaded when there were just secrets "A" & "B". When I then clicked the delete icon for "B", Azure deleted secrets "B" and "C" … which had been added since the page load. Essentially, the UI said "delete this row" but the API was "set the set of secrets to {A}". The audit log then logged the API "correctly" in the sense of, yes, my credentials did execute that API call, I suppose, but utterly incorrectly in the sense of any reasonable real-world view as to what I had done.
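The mechanics of that bug can be sketched like this (a hypothetical reconstruction under the assumption described above: the UI holds a snapshot from page-load time, and "delete one secret" is implemented as "replace the whole set with my snapshot minus that one"):

```python
# Server state when the stale browser tab loaded the page.
server_secrets = {"A", "B"}
page_snapshot = set(server_secrets)  # what the stale tab remembers

# Someone else adds secret "C" after the page load.
server_secrets.add("C")

def api_set_secrets(new_set, caller):
    # The API (and thus the audit log) only ever sees "caller replaced the
    # set of secrets" -- not the single-row delete the user thought they did.
    removed = server_secrets - new_set
    server_secrets.intersection_update(new_set)
    return removed

# User clicks "delete B" in the stale tab:
removed = api_set_secrets(page_snapshot - {"B"}, caller="me")
print(sorted(removed))   # ['B', 'C'] -- the never-seen "C" is gone too
print(server_secrets)    # {'A'}
```

The audit log then truthfully records that my credentials set the secrets to {A}, while completely misrepresenting the action I actually took in the UI.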
Thankfully we got it sorted, but it sort of shook my faith in Azure's logs in particular, and a little bit of audit logs in general. You have to make sure you've actually audited what the human did. Or, conversely, if you're trying to reason with audit logs, … you'd best understand how they were generated.
I don't think I would ever accept audit logs in court, if I were on a jury. Audit logs being hot lies is within reasonable doubt.
Bypassing logging feels relatively unimportant compared to some of the recent EntraID vulns we’ve seen
There's a big tradeoff here though: IT admins really love buying Microsoft. And when the dog tries to complain about the dogfood, the dogfood purchaser tends not to understand very well.
> It's not often that you see a demo of an actual Azure vulnerability, as they get patched and are gone forever. However, because Microsoft was having trouble replicating this complicated bypass, and asked for a video, I come bearing receipts.
Absolutely savage lol
[If you didn't read the thing, it's one curl command.]
Maybe I can use one of these to get into my organization's Azure account from my alma mater. The email was deleted right after I graduated, but Microsoft has been trying to bill me (for a reserved IP or something) for close to a decade. Support is useless, of course.
Azure Entra is an example of making a system so complex that nobody can understand it entirely. I'm fairly experienced in access control systems, OIDC, crypto, etc. but I was not able to understand how it all fits together.
Google Cloud is simple by comparison. AWS is full of legacy complexity (IAM policies, sigh), but it's fairly self-contained, and the complexity can be worked around by splitting stuff into accounts.
I have not looked at Oracle cloud yet. Is it any better than MS?
It is shocking how absolutely garbage Azure is.
Reminds me of an Azure Support ticket I submitted a few years ago when some developer clicked the "Fix this now" button in Application Insights, which then proceeded to double the scale of an already too-large App Service Plan. [1]
The Audit log showed the service identity of Application Insights, not the user that pressed the button! The cloud ops team changed the size back, and then the mysterious anonymous developer... changed it back. We had to have an "all hands" meeting to basically yell at the whole room to cut that out. Nobody fessed up, so we still don't know who it was.
The Azure Support tech argued with me vehemently that this was by design, that Azure purposefully obscures the identity of users in audit logs!!! He mumbled something about GDPR, which is nonsense, because we're on the opposite side of the planet from Europe.
At first I was absolutely flabbergasted that anyone even remotely associated with a security audit log design could be this stupid, but then something clicked for me and it all started making sense:
Entra Id logs are an evolution of Office 365 logs.
Microsoft developed Entra ID (originally Azure Active Directory) for Microsoft 365, with the Azure Public Cloud platform a mere afterthought. They have a legitimate need to protect customer PII, hence the logs don't contain their customers' private information when this isn't strictly necessary. I.e.: Microsoft's subcontractors and outsourced support staff don't need and shouldn't see some of this information!
The problem is that they re-used the same code, the same architecture decisions, and the same privacy tradeoffs for what are essentially 100% private systems. We need to see who on our payroll is monkeying around with our servers! There is NO expectation of privacy for staff! GDPR does NOT apply to non-European government departments! Etc...
To this day I still see gaps in their logging where some Microsoft dev just "oops" forgot to log the identity of the account triggering the action. The most frustrating one for me is that Deployments don't log the identity of the user. It's one of only three administrative APIs that they have!
[1] As an aside: The plan had a 3-year Reservation on it, which meant that we were now paying for the original plan and something twice the size and non-Reserved! This was something like 5x the original cost, with no warning and no obvious way to see from the Portal UI that you're changing away from a Reserved size.
Puts me in mind of this scathing report from CISA on how a state-sponsored group broke into Microsoft and then into the State Department and a bunch of other agencies. Reads like a heist movie.
https://www.cisa.gov/sites/default/files/2024-03/CSRB%20Revi...
What I found most incredible about the story is that it wasn't Microsoft who found the intrusion. It was some sysadmin at State who saw that some mail logs did not look right and investigated.