It's notable that they blame "our upstream provider" when it's quite literally the same company. I can't imagine GitHub engineers are very happy about the forced migration to Azure.
In the Bad Old Days before GitHub (before SourceForge, even), building and packaging sucked because of the hundred source tarballs you had to fetch; on any given day, three of them would be down (this is why Debian does the "_orig" tarballs the way they do). Now it sucks because on any given day either all of them are available or none of them are.
As an isolated event, this is not great, but when you see the stagnation (if not downwards trajectory) of GitHub as a whole, it's even worse in my opinion.
edit: Before someone points it out: I do understand that the underlying issue is something on Azure's side.
This is why I come to Hacker News: a sanity check on why my jobs are failing.
Copilot being down probably increased code quality
Getting the monthly GitHub outage out of the way early, good work.
50% of code is written by AI; now let the AI handle this outage.
It is always a config problem, somewhere, someplace in the mess of permissioning issues.
Will paid users be credited for the wasted Actions minutes?
Tay.ai and Zoe AI agents are probably running infra operations at GitHub, still arguing about how to deploy to production without hallucinating a config file, and shipping a broken fix to address the issue.
Since there is no GitHub CEO (Satya isn't bothered anymore) and the human employees aren't looking, Tay and Zoe are at the helm, ruining GitHub with their broken AI-generated fixes.
With LinkedIn down, I wonder if this is an Azure thing? IIRC GitHub is being moved to Azure; maybe the Azure piece was partially enabled?
Jobs get stuck. Minutes are being consumed. The problem isn't just that it's unavailable.
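For what it's worth, one workaround is to gate job dispatch on the status page so a known outage doesn't keep burning minutes on stuck runs. A minimal sketch, assuming the standard Statuspage endpoint that githubstatus.com exposes at /api/v2/status.json (the "indicator" field is none/minor/major/critical):

    # Check githubstatus.com before queueing self-hosted or scheduled jobs.
    import json
    import urllib.request

    STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"

    def github_is_healthy(timeout: float = 5.0) -> bool:
        try:
            with urllib.request.urlopen(STATUS_URL, timeout=timeout) as resp:
                payload = json.load(resp)
        except OSError:
            # If the status page itself is unreachable, assume the worst.
            return False
        return payload.get("status", {}).get("indicator", "none") in ("none", "minor")

    if __name__ == "__main__":
        if github_is_healthy():
            print("GitHub reports healthy; safe to dispatch jobs.")
        else:
            print("Ongoing incident; holding dispatch to avoid wasting Actions minutes.")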
Some earlier discussion: https://news.ycombinator.com/item?id=46860544
Looks like Azure as a platform just killed the ability to perform VM scale operations, due to a change to an ACL on a storage account that hosted VM extensions. Wow... We noticed when GitHub Actions went down, then our self-hosted runners, because we can't scale anymore. (Status details and a sketch of the failing scale call below.)
Active - Virtual Machines and dependent services - Service management issues in multiple regions
Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.
Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.
https://azure.status.microsoft/en-us/status
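For anyone wondering what "service management operations" means in practice for runner autoscaling: a scale-out is typically just a capacity update on a VM scale set, and during the window any such update errored out. A rough sketch using the Python Azure SDK (azure-identity and azure-mgmt-compute); the subscription, resource group, and scale-set names are placeholders, and this is illustrative rather than how any particular autoscaler is actually wired up:

    # The kind of VM scale set update that was failing during the incident.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    RESOURCE_GROUP = "runners-rg"                              # placeholder
    VMSS_NAME = "gh-runner-vmss"                               # placeholder

    def scale_out(new_capacity: int) -> None:
        client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
        vmss = client.virtual_machine_scale_sets.get(RESOURCE_GROUP, VMSS_NAME)
        vmss.sku.capacity = new_capacity
        # Per the status page, updates like this failed because a config change
        # cut public access to the Microsoft-managed storage accounts that host
        # the VM extension packages new instances need.
        poller = client.virtual_machine_scale_sets.begin_create_or_update(
            RESOURCE_GROUP, VMSS_NAME, vmss
        )
        poller.result()

    if __name__ == "__main__":
        scale_out(new_capacity=4)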