Hacker News

GitHub experiences various partial outages/degradations

160 points by bhouston yesterday at 9:28 PM | 41 comments

Comments

llama052 yesterday at 10:02 PM

Looks like Azure as a platform just killed the ability to do VM scale operations, due to an ACL change on a storage account that hosted VM extensions. Wow... We noticed when GitHub Actions went down, then our self-hosted runners, because we can't scale anymore.
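For context on what "can't scale anymore" looks like in practice, a self-hosted runner autoscaler's scale-out usually boils down to a VM Scale Set capacity update like the sketch below (Python with the Azure SDK; the subscription ID, resource group, and scale set names are hypothetical). Calls like this are the "service management operations" the Azure status notice below describes as failing.

    # Minimal sketch of an autoscaler's scale-out step, assuming the
    # azure-identity and azure-mgmt-compute packages; all names are hypothetical.
    from azure.core.exceptions import HttpResponseError
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    RESOURCE_GROUP = "runners-rg"     # hypothetical resource group
    SCALE_SET = "gh-runner-vmss"      # hypothetical scale set of self-hosted runners

    def scale_out(extra_instances: int = 2) -> None:
        compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
        vmss = compute.virtual_machine_scale_sets.get(RESOURCE_GROUP, SCALE_SET)
        vmss.sku.capacity += extra_instances
        try:
            # A create/update like this is a "service management operation" in
            # Azure's terms; during the incident such calls returned errors.
            compute.virtual_machine_scale_sets.begin_create_or_update(
                RESOURCE_GROUP, SCALE_SET, vmss
            ).result()
        except HttpResponseError as err:
            print(f"scale-out failed: {err.message}")

    if __name__ == "__main__":
        scale_out()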

Information

Active - Virtual Machines and dependent services - Service management issues in multiple regions

Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.

Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.

https://azure.status.microsoft/en-us/status
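githubstatus.com appears to be a standard Atlassian Statuspage instance, so recovery can be watched programmatically while the mitigation rolls out; a minimal polling sketch, assuming the stock Statuspage /api/v2/ JSON endpoints:

    # Poll GitHub's status page for degraded components; assumes the standard
    # Statuspage /api/v2/ JSON API and the requests package.
    import time
    import requests

    STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"
    COMPONENTS_URL = "https://www.githubstatus.com/api/v2/components.json"

    def check_github_status() -> None:
        overall = requests.get(STATUS_URL, timeout=10).json()
        print("overall:", overall["status"]["description"])
        components = requests.get(COMPONENTS_URL, timeout=10).json()
        for c in components["components"]:
            if c["status"] != "operational":
                print(f"  degraded: {c['name']} -> {c['status']}")

    if __name__ == "__main__":
        while True:
            check_github_status()
            time.sleep(300)  # re-check every 5 minutes while the incident is open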

guywithabike yesterday at 10:59 PM

It's notable that they blame "our upstream provider" when it's quite literally the same company. I can't imagine GitHub engineers are very happy about the forced migration to Azure.

bandrami today at 2:00 AM

In the Bad Old Days before GitHub (before SourceForge, even), building and packaging sucked because of the hundred source tarballs you had to fetch; on any given day, three would be down (this is why Debian does the "_orig" tarballs the way they do). Now it sucks because on any given day either all of them are available or none of them are.

fbnszb yesterday at 10:11 PM

As an isolated event, this is not great, but when you see the stagnation (if not downwards trajectory) of GitHub as a whole, it's even worse in my opinion.

edit: Before someone says something: I do understand that the underlying issue is with Azure.

maddmann yesterday at 10:34 PM

This is why I come to Hacker News: a sanity check on why my jobs are failing.

booi yesterday at 10:07 PM

Copilot being down probably increased code quality

fishgoesblub yesterday at 11:33 PM

Getting the monthly GitHub outage out of the way early, good work.

falloutx yesterday at 10:25 PM

50% of code is written by AI; now let the AI handle this outage.

suriya-ganesh yesterday at 10:49 PM

It is always a config problem, somewhere, someplace in the mess of permissioning issues.

levkk yesterday at 11:05 PM

This happens routinely every other Monday or so.

focusgroup0 yesterday at 11:54 PM

Will paid users be credited for the wasted Actions minutes?

rvz yesterday at 10:53 PM

Tay.ai and Zoe AI agents are probably running infra operations at GitHub, still arguing about how to deploy to production without hallucinating a config file or deploying a broken fix to address the issue.

Since there is no GitHub CEO (Satya is not bothered anymore) and human employees aren't looking, Tay and Zoe are at the helm, ruining GitHub with their broken AI-generated fixes.

jmclnx yesterday at 9:36 PM

With LinkedIn down, I wonder if this is an Azure thing? IIRC GitHub is being moved to Azure; maybe the Azure piece was partially enabled?

re-thc yesterday at 11:04 PM

Jobs get stuck. Minutes are being consumed. The problem isn't just that it's unavailable.
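One way to stop burning minutes during an incident like this is to cancel runs that have been in progress suspiciously long. A rough sketch against the GitHub REST API (the owner/repo values and the 60-minute threshold are placeholders; a token with Actions access is assumed in GITHUB_TOKEN):

    # Cancel in-progress workflow runs older than a cutoff; assumes the requests
    # package and a GITHUB_TOKEN with access to the (hypothetical) repo below.
    import os
    from datetime import datetime, timedelta, timezone

    import requests

    OWNER, REPO = "my-org", "my-repo"  # placeholders
    API = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs"
    HEADERS = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    }

    def cancel_stale_runs(max_age_minutes: int = 60) -> None:
        runs = requests.get(
            API, headers=HEADERS,
            params={"status": "in_progress", "per_page": 100}, timeout=10,
        ).json()["workflow_runs"]
        cutoff = datetime.now(timezone.utc) - timedelta(minutes=max_age_minutes)
        for run in runs:
            started = datetime.fromisoformat(run["run_started_at"].replace("Z", "+00:00"))
            if started < cutoff:
                requests.post(f"{API}/{run['id']}/cancel", headers=HEADERS, timeout=10)
                print(f"cancelled run {run['id']} ({run['name']}), started {started}")

    if __name__ == "__main__":
        cancel_stale_runs()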