I've just shifted to Hetzner, no regrets.
Well, their comment section is for sure not running on premises, but in the cloud:
"An error occurred: API rate limit already exceeded for installation ID 73591946."
This is a great solution for a very specific type of team, but I think most companies with consistent GPU workloads will still just rent dedicated servers and call it a day.
The observation about incentives is underappreciated here. When your compute is fixed, engineers optimize code. When compute is a budget line, engineers optimize slide decks. That's not really a cloud vs on-prem argument, it's a psychology-of-engineering argument.
Not long ago Railway moved from GCP to their own infrastructure because GCP was getting very expensive for them. [0] Some go for an Oxide rack [1] as a full-stack solution (both hardware and software) for intense GPU workloads, instead of building it themselves.
It's very expensive and only makes sense if you really need infrastructure sovereignty. It makes more sense if you're profitable in the tens of millions after raising hundreds of millions.
It also makes sense for governments (including those in the EU), which should think about this and keep the compute in house and disconnected from the internet if they are serious about infrastructure sovereignty, rather than depending on US-based providers such as AWS.
In case anyone from comma.ai reads this: the "CTO @ comma.ai" link at the end is broken; it's relative instead of absolute.
One thing I don't really understand here is why they're incurring the costs of having this physically in San Diego, rather than further afield with a full-time server tech essentially living on-prem, especially if their power numbers are correct. Is everyone being able to physically show up on site immediately that much better than a 24/7 pair of remote hands + occasional trips for more team members if needed?
I like Hotz’s style: simply and straightforwardly attempting the difficult and complex. I always get the impression: “You don’t need to be too fancy or clever. You don’t need permission or credentials. You just need to go out and do the thing. What are you waiting for?”
Am I the only one who is simply scared of running their own cloud? What happens if your administrator credentials get leaked? At least with Azure I can phone Microsoft and initiate a recovery; thanks to backups and soft-deletion policies, quite a lot is possible. I guess you can build in these failsafe scenarios locally too? But what if a fire happens, like in South Korea? Sure, most companies run more immediate risks, such as going bankrupt, but at least the cloud relieves me of the stuff of nightmares.
Except now I have nightmares that the USA will enforce the Patriot Act and force Microsoft to hand over all the data in its European data centers, and then we have to migrate everything to a local cloud provider. Argh...
ChatGPT:
# don’t own the cloud, rent instead
the “build your own datacenter” story is fun (and comma’s setup is undeniably cool), but for most companies it’s a seductive trap: you’ll spend your rarest resource (engineer attention) on watts, humidity, failed disks, supply chains, and “why is this rack hot,” instead of on the product. comma can justify it because their workload is huge and steady, they’re willing to run non-redundant storage, and they’ve built custom GPU boxes and infra around a very specific ML pipeline. ([comma.ai blog][1])
## 1) capex is a tax on flexibility
a datacenter turns “compute” into a big up-front bet: hardware choices, networking choices, facility choices, and a depreciation schedule that does not care about your roadmap. cloud flips that: you pay for what you use, you can experiment cheaply, and you can stop spending the minute a strategy changes. the best feature of renting is that quitting is easy.
## 2) scaling isn’t a vibe, it’s a deadline
real businesses don’t scale smoothly. they spike. they get surprise customers. they do one insane training run. they run a migration. owning means you either overbuild “just in case” (idle metal), or you underbuild and miss the moment. renting means you can burst, use spot/preemptible for the ugly parts, and keep steady stuff on reserved/committed discounts.
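a toy illustration of that math, with every price and fleet size below invented for the example (none of it is a real quote):

```python
# toy rent-vs-own comparison for a spiky GPU workload.
# every number below (prices, fleet sizes, utilization) is an
# illustrative assumption, not real pricing data.
HOURS_PER_YEAR = 24 * 365

steady_gpus = 8          # always-on baseline fleet
burst_gpus = 32          # extra GPUs needed ~5% of the year
burst_fraction = 0.05

reserved_rate = 2.00     # $/GPU-hour with committed-use discount (assumed)
spot_rate = 1.20         # $/GPU-hour spot/preemptible (assumed)
capex_per_gpu = 30_000   # purchase price, amortized over 5 years (assumed)

rent = (steady_gpus * reserved_rate * HOURS_PER_YEAR
        + burst_gpus * spot_rate * HOURS_PER_YEAR * burst_fraction)

# owning means buying for the peak, whether it idles or not
# (and this still ignores power, space, and people).
own = (steady_gpus + burst_gpus) * capex_per_gpu / 5

print(f"rent: ${rent:,.0f}/yr   own (capex only): ${own:,.0f}/yr")
```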
## 3) reliability is more than “it’s up most days”
comma explicitly says they keep things simple and don’t need redundancy for ~99% uptime at their scale. ([comma.ai blog][1]) that’s a perfectly valid trade—if your business can tolerate it. many can’t. cloud providers sell multi-zone, multi-region, managed backups, managed databases, and boring compliance checklists because “five nines” isn’t achieved by a couple heroic engineers and a PID loop.
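for scale, the downtime arithmetic behind those nines (only the 99% figure comes from the post; the rest is standard availability math):

```python
# downtime-per-year arithmetic behind "99% vs five nines".
minutes_per_year = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = minutes_per_year * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime:8,.1f} min/yr"
          f" (~{downtime / 60:.1f} h)")
```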
## 4) the hidden cost isn’t power, it’s people
comma spent ~$540k on power in 2025 and runs up to ~450kW, plus all the cooling and facility work. ([comma.ai blog][1]) but the larger, sneakier bill is: on-call load, hiring niche operators, hardware failures, spare parts, procurement, security, audits, vendor management, and the opportunity cost of your best engineers becoming part-time building managers. cloud is expensive, yes—because it bundles labor, expertise, and economies of scale you don’t have.
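a quick sanity check on those two numbers; the continuous-draw assumption is mine, so the implied rate is only a lower bound:

```python
# back-of-the-envelope check on the published power numbers.
# assumption (mine, not from the post): the ~450 kW peak is drawn
# continuously all year, which understates the true $/kWh rate.
peak_kw = 450
annual_cost_usd = 540_000
hours_per_year = 24 * 365

kwh_per_year = peak_kw * hours_per_year          # ~3.94M kWh
implied_rate = annual_cost_usd / kwh_per_year    # ~$0.14 per kWh

print(f"{kwh_per_year:,.0f} kWh/yr -> ${implied_rate:.3f}/kWh (lower bound)")
```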
## 5) “vendor lock-in” is real, but self-lock-in is worse
cloud lock-in is usually optional: you choose proprietary managed services because they’re convenient. if you’re disciplined, you can keep escape hatches: containers, kubernetes, terraform, postgres, object storage abstractions, multi-region backups, and a tested migration plan. owning your datacenter is also lock-in—except the vendor is past you, and the contract is “we can never stop maintaining this.”
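one concrete escape hatch as a sketch: code against the generic S3 API with a configurable endpoint, so the same client can target AWS, a MinIO box in your own rack, or another S3-compatible provider. the bucket and env-var names here are made up:

```python
# talk to object storage through the generic S3 API rather than
# anything provider-specific. boto3's endpoint_url lets this same
# code point at AWS (endpoint_url=None), MinIO, or another provider.
# bucket name and env-var names are hypothetical.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT"),  # unset -> real AWS
    aws_access_key_id=os.environ["S3_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET"],
)

s3.put_object(Bucket="backups", Key="db/latest.dump", Body=b"...")
```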
## the practical rule
*if you have massive, predictable, always-on utilization, and you want to become good at running infrastructure as a core competency, owning can win.* that’s basically comma’s case. ([comma.ai blog][1]) *otherwise, rent.* buy speed, buy optionality, and keep your team focused on the thing only your company can do.
if you want, tell me your rough workload shape (steady vs spiky, cpu vs gpu, latency needs, compliance), and i’ll give you a blunt “rent / colo / own” recommendation in 5 lines.
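or, as a self-service version of that offer, a blunt heuristic in a few lines of python (the thresholds are invented starting points, not industry data):

```python
# blunt rent/colo/own heuristic matching the rule above.
# the 0.7 / 0.5 utilization thresholds are my own made-up defaults.
def recommend(utilization: float, spiky: bool, infra_is_core: bool) -> str:
    if utilization > 0.7 and not spiky and infra_is_core:
        return "own"   # comma's case: huge, steady, core competency
    if utilization > 0.5 and not spiky:
        return "colo"  # steady-ish, but the facility stays someone
                       # else's problem
    return "rent"      # spiky or low utilization: buy optionality

print(recommend(utilization=0.9, spiky=False, infra_is_core=True))  # own
```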
[1]: https://blog.comma.ai/datacenter/ "Owning a $5M data center - comma.ai blog"
Having worked only with the cloud, I really wonder if these companies don't use other software with subscriptions. Even though AWS is "expensive", it's just another line item compared to most companies' overall SaaS spend. Most businesses don't need that much compute or data transfer in the grand scheme of things.
Stopped reading at "Our main storage arrays have no redundancy". This isn't a data center, it's a volatile AI memory bank.
Or better: write your software such that you can scale to tens of thousands of concurrent users on a single machine. This can really put the savings into perspective.
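As a sketch of what that looks like in practice, a minimal asyncio echo server; event-loop servers like this can hold tens of thousands of mostly idle connections on one commodity box, assuming you raise the OS file-descriptor limits:

```python
# minimal single-machine concurrency sketch: an asyncio TCP echo
# server. one event loop, no thread per connection, so the per-client
# overhead is a coroutine and a socket rather than an OS thread.
import asyncio

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter) -> None:
    while data := await reader.read(4096):
        writer.write(data)      # echo the bytes straight back
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```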
Capex vs opex, the opera.
> In a future blog post I hope I can tell you about how we produce our own power and you should too.
Rack-mounted fusion reactors, I hope. Would solve my homelab wattage issues too.