Feels like I’ve lived through a full infrastructure fashion cycle already. I started my career when cloud was the obvious answer and on-prem was “legacy.”
Now on-prem is cool again.
Makes me wonder whether we're already setting up the next cycle: 10 years from now, everyone rediscovers why cloud was attractive in the first place and starts saying "on-prem is a bad idea" again.
This is cool. Still, there are levels of insanity here, and they depend on how well you can estimate things.
When I'm launching a project, it's easier for me to rent $250 worth of compute from AWS. When the project consumes $30k a month, it's easier for me to rent colocation space.
My point is that a good engineer should know how to weigh all the ups and downs here and propose a sound plan to management. That's the winning move.
The own-vs-rent calculus for compute is starting to mirror the market-value-vs-replacement-cost divergence we see in physical assets. Cloud is convenient because it turns upfront CapEx into OpEx, but you lose control over long-term cost efficiency. Once you reach a certain scale, paying the premium for AWS flexibility stops making sense compared to the raw horsepower of owned metal.
I would suggest using both on-premises hardware and cloud computing, which is probably what comma is doing.
For critical infrastructure, I would rather pay a competent cloud provider than be responsible for reliability issues myself. Maintaining one server room in the headquarters is one thing, but two server rooms in different locations, with resilient power and network, is a bit too much effort IMHO.
For running many Slurm jobs on good servers, cloud computing is very expensive, and owning can pay for itself in a matter of months. And who cares if the server room is a total loss after a while; worst case, you write some more YAML and Terraform and deploy a temporary replacement in the cloud.
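Not anyone's actual setup, but a minimal boto3 sketch (standing in for the YAML/Terraform mentioned above) of what "deploy a temporary replacement in the cloud" could look like; the AMI, instance type, and tag are placeholder assumptions:

```python
# Hypothetical stop-gap: spin up a few cloud workers while the server room
# is being rebuilt. All IDs and sizes below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image with your compute stack
    InstanceType="c5.9xlarge",        # roughly match the dead on-prem node
    MinCount=1,
    MaxCount=4,                       # a handful of stand-in workers
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "temporary-slurm-worker"}],
    }],
)
print("launched:", [i["InstanceId"] for i in resp["Instances"]])
```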
Another option in between is colocation, where you put hardware you own in a managed data center. It's a bit old-fashioned, but it can make sense in some cases.
I can also mention that research HPCs may be worth considering. In research, we have some of the world's fastest computers at a fraction of the cost of cloud computing. It's great as long as you don't mind not being root and having to use Slurm.
I don't know about the USA, but in Norway you can run your private company's Slurm AI workloads on research HPCs, though you will pay quite a bit more than universities and research institutions do. You can also set up research projects together with universities or research institutions, and everyone will be happy if your business benefits a lot from the collaboration.
At scale (like comma.ai), it's probably cheaper. But until then, it's a long-term cost optimization with really high upfront capital expenditure and risk. That means it doesn't make much sense for the majority of startups until they become late-stage and their hosting cost actually becomes a big burden.
There are in-between solutions. Renting bare metal instead of renting virtual machines can be quite nice; I did that via Hetzner some years ago. You pay about the same, but you get a lot more performance for the money. This is great if you actually need that performance.
People obsess about hardware, but there's also the software side to consider. For smaller companies, operations/devops people are usually more expensive than the resources they manage, so the staffing cost is the cost to optimize; the hosting cost is usually a rounding error next to it. And on top of that, the amount of responsibility increases as soon as you own the hardware. You need to service it, monitor it, replace it when it fails, make sure those fans don't get jammed by dust bunnies, deal with outages when they happen, etc. All the stuff you pay cloud providers to do for you now becomes your problem, and it has a non-zero cost.
The right mindset for hosting cost is to think of it in FTEs (the cost of a full-time employee for a year). If it's below 1 (most startups until they are well into scale-up territory), you are doing great. Most of the optimizations you could make will cost you actual FTEs spent doing that work, and 1 FTE pays for quite a bit of hosting; think 10K per month in AWS cost. A good ops person/developer is more expensive than that. My company runs at about 1K per month (GCP and misc managed services). It would be the wrong thing to optimize for us; it's not worth spending any amount of time on. I literally have more valuable things to do.
This flips when the hosting bill alone starts running to multiple FTEs. At that point you probably have additional cost measured in 5-10 FTEs of staffing anyway to babysit all of it. Now you can talk about trading off some hosting FTEs for a modest amount of extra staffing FTEs and making net gains.
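As a rough illustration of that framing, here's the arithmetic in a few lines of Python; the assumed fully-loaded FTE cost is a made-up round number, not a figure from the parent comment:

```python
# Express a hosting bill as a fraction of one full-time ops engineer per year.
FTE_COST_PER_YEAR = 150_000  # assumed, illustrative fully-loaded cost

def hosting_cost_in_ftes(monthly_bill: float) -> float:
    return (monthly_bill * 12) / FTE_COST_PER_YEAR

for monthly in (1_000, 10_000, 50_000):
    print(f"${monthly:>6,}/mo -> {hosting_cost_in_ftes(monthly):.2f} FTEs")
# $ 1,000/mo -> 0.08 FTEs  (not worth optimizing)
# $10,000/mo -> 0.80 FTEs  (still cheaper than the person you'd hire to cut it)
# $50,000/mo -> 4.00 FTEs  (now the trade-off starts to make sense)
```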
The reason companies don't go with on-premises even when cloud is way more expensive is the risk involved in on-premises.
You can see quite clearly here how many steps there are. A good company concentrates risk on its differentiating factor, the specific part where it has a competitive advantage.
It's never about "is the expected cost of on-premises less than cloud"; it's about the risk-adjusted costs.
Once you've spread risk not only across your main product but also across your infrastructure, things get hard.
I would be wary of a smallish company building its own Jira in-house in a similar way.
This goes for small businesses and individuals as well. Sure, there are times when cloud makes sense, but you can and should do a lot on your own hardware.
> The cloud requires expertise in company-specific APIs and billing systems.
This is one reason I hate dealing with AWS. It feels like a waste of time in some ways. Like learning a fly-by-night JavaScript library: maybe I'm better off spending that time writing the functionality on my own, to increase my knowledge and familiarity?
I think this is how IBM is making tons of money on mainframes. A lot of what people are doing with cloud can be done on premises with the right levels of virtualization.
https://intellectia.ai/news/stock/ibm-mainframe-business-ach...
60% YoY growth is pretty excellent for an "outdated" technology.
Naive comment from a hobbyist with nothing close to $5M: I'm curious about the degree to which you could build a "home lab" equivalent. I mean, if "scaling" turned out to be just adding another Raspberry Pi to the rack (where is Mr. Geerling when you need him?), I could grow my mini-cloud month by month as spending money allowed.
(And it would be fun too.)
Datacenters need cool, dry air? Below 45% humidity?
No, low isn't good per se. I worked in a datacenter that had less than 40% humidity in winter, and RAM was failing all over the place. Low humidity causes static electricity.
Cloud, in the sense of "another company's infrastructure", always implies losing the competence to select, source, and operate hardware. Treating hardware as a commodity will eventually turn your own business into a commodity: someone can just copy your software/IP and ruin your business. Every durable business needs some kind of intellectual property and human skills that are not easily replaceable. This sounds binary, but it isn't. You can build long-lasting partnerships; the German Mittelstand did that over decades.
It would be interesting to hear their contingency plan for any kind of disaster (most commonly a fire) that hits their data center.
This also depends so much on your scaling needs. If you need 3 mid-sized ECS/EC2 instances, a load balancer, and a database with backups, renting those from AWS isn’t going to be significantly more expensive for a decent-sized company than hiring someone to manage a cluster for you and dealing with all the overhead of keeping it maintained and secure.
If you’re at the scale of hundreds of instances, that math changes significantly.
And a lot of it depends on what type of business you have and what percent of your budget hosting accounts for.
> Self-reliance is great, but there are other benefits to running your own compute. It inspires good engineering.
It's easy to inspire people when you have great engineers in the first place. That's a given at a place like comma.ai, but there are many companies out there where administering a datacenter is far beyond their core competencies.
I feel like skilled engineers have a hard time understanding the trade-offs cloud companies offer. In the same way that comma.ai probably doesn't run an in-house canteen, it can make sense to focus on what you are good at and outsource the rest.
I’m impressed that San Diego electrical power manages to be even more expensive than in the UK. That takes some doing.
Same thing. I was previously spending 5-8K on DigitalOcean, supposedly a "budget" cloud. Then the company was sold, and I started a new company on entirely self-hosted hardware. Cloudflare tunnel + CC + microk8s made it trivial! And I spend close to nothing beyond the internet connection I was already paying for. I do have solar power too.
Working at a non-tech regional bigco, where of course cloud is the default, I see every day how AWS costs get out of hand; it's a constant struggle just to keep costs flat. In our case, the reality is that NONE of our services require scalability, and the main upside of high uptime is mostly good for my blood pressure: we only really need uptime during business hours, and nobody cares what happens at night when everybody is sleeping.
On the other hand, there's significant vendor lock-in, complexity, etc. And I'm not really sure we actually end up with fewer people over time; headcount always expands, and there are always cool new projects like monitoring, observability, AI, etc.
My feeling is, if we rented 20-30 chunky machines and ran Linux on them, with k8s, we'd be 80% there. For specific things I'd still use AWS, like infinite S3 storage, or RDS instances for super-important data.
If I were to do a startup, I would almost certainly not base it on AWS (or another cloud). I'd do what I wrote above: run chunky servers on OVH (initially just 1-2) and use specific AWS services like S3 and RDS.
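A minimal sketch of that split, assuming the app lives on the rented bare-metal box and only object storage stays on AWS; the bucket name and paths are made-up placeholders:

```python
# Runs on the self-hosted / OVH box; S3 is the only AWS dependency.
import boto3

s3 = boto3.client("s3")  # credentials via env vars or ~/.aws/credentials

def push_backup(local_path: str, bucket: str = "example-backups") -> None:
    """Ship a nightly dump from the bare-metal server into S3."""
    key = "backups/" + local_path.rsplit("/", 1)[-1]
    s3.upload_file(local_path, bucket, key)

if __name__ == "__main__":
    push_backup("/var/backups/db-dump.sql.gz")  # placeholder path
```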
A bit unrelated to the above, but I'd also try to keep away from expensive SaaS like Jira, Slack, etc. I'd use the best self-hosted open source version, and be done with it. I'd try Gitea for git hosting, Mattermost for team chat, etc.
And actually, given the geo-political situation as an EU citizen, maybe I wouldn't even put my data on AWS at all and self-host that as well...
Is there a client for selling your own unused private-cloud capacity?
Don't even have to go this far. Colocating in a couple regions will give you most of the logistical thrills at a fraction of the cost!
I love this article. Great write-up. It gave me the same feeling as reading about the handful of servers that ran all of Stack Overflow's sites.
This quote is gold:
> The cloud requires expertise in company-specific APIs and billing systems. A data center requires knowledge of Watts, bits, and FLOPs. I know which one I rather think about.
Even at the personal blog level, I'd argue it's worth it to run your own server (even if it's just an old PC in a closet). Gets you on the path to running a home lab.
Mark my words: cloud will fall out of fashion, but it will come back into fashion under another name in some number of years. It's cyclical.
The #1 reason I would advocate for using AWS today is the compliance package they bring to the party. No other cloud provider has anything remotely like Artifact. I can pull Amazon's PCI-DSS compliance documentation using an API call. If you have a heavily regulated business (or work with customers who do), AWS is hard to beat.
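For illustration, pulling that report metadata programmatically might look roughly like the sketch below; the Artifact API is relatively new, so treat the client, operation, and field names as assumptions to verify against the current AWS SDK docs:

```python
# Hedged sketch: list compliance reports (e.g. PCI DSS) via the AWS Artifact API.
# Client name, operation, and response fields are assumptions -- check the SDK docs.
import boto3

artifact = boto3.client("artifact", region_name="us-east-1")

resp = artifact.list_reports()
for report in resp.get("reports", []):
    print(report.get("id"), report.get("name"), report.get("state"))
```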
If you don't have any kind of serious compliance requirement, using Amazon is probably not ideal. I would say that Azure AD is ok too if you have to do Microsoft stuff, but I'd never host an actual VM on that cloud.
Compliance and "Microsoft stuff" covers a lot of real world businesses. Going on prem should only be done if it's actually going to make your life easier. If you have to replicate all of Azure AD or Route53, it might be better to just use the cloud offerings.
TLDR:
> In comma’s case I estimate we’ve spent ~5M on our data center, and we would have spent 25M+ had we done the same things in the cloud.
IMO, that's the biggie. It's enough to justify paying someone to run their datacenter. I wish there was a bit more detail to justify those assumptions, though.
That being said, if their needs grow by orders of magnitude, I'd anticipate that they would want to move their servers somewhere with cheaper electricity.
> Cloud companies generally make onboarding very easy, and offboarding very difficult. If you are not vigilant you will sleepwalk into a situation of high cloud costs and no way out. If you want to control your own destiny, you must run your own compute.
Cost and lock-in are obvious factors, but "sovereignty" has also become a key factor in the sales cycle, at least in Europe.
Handling health data, Juvoly is happy to run AI workloads on-premises.
If it were me, instead of writing all these bespoke services to replicate cloud functionality, I'd just buy oxide.computer systems.
The company I work for used to have a hybrid setup where 95% was on-prem, but it became closer to 90% cloud when on-prem got more expensive because of VMware licensing. There are alternatives to VMware, but they aren't officially supported on our hardware configuration, so switching would require replacing all the hardware, which still costs more than the cloud. Almost everything we have is cloud-agnostic, and anything that requires resilience sits in two different providers.
Now the company is looking at further cost savings: the buildings rented for running on-prem are sitting mostly unused, and building prices have gone up notably in recent years, so we're likely to save money by moving into the cloud. This will probably make the cloud transition permanent.
This was one of the coolest job ads I've ever read :). Congrats on what you've done with your infrastructure, team, and product!
I used to colocate a 2U server that I purchased with a local data center. It was a great learning experience for me. I'm curious why a company wouldn't colocate its own hardware? Proximity isn't an issue when you can have the data center perform physical tasks. Bravo to the comma team regardless. It'll be a great learning experience and make each person on their team better.
PS: BX cable instead of conduit for the electrical looks cringe.
I'm thinking about doing a research project at my university looking into distributed "data centers" hosted by communities instead of centralized cloud providers.
The trick is in how to create mostly self-maintaining deployable/swappable data centers at low cost...
> Cloud companies generally make onboarding very easy, and offboarding very difficult.
I reckon most on-prem deployments have significantly worse offboarding than the cloud providers. As a cloud provider you can win business by offering an offboarding path, but internally you'd never get buy-in to spend on an exit plan in case you later decide to move to the cloud.
> Maintaining a data center is much more about solving real-world challenges. The cloud requires expertise in company-specific APIs and billing systems. A data center requires knowledge of Watts, bits, and FLOPs. I know which one I rather think about.
I find this applies on a smaller scale too! I'd rather set up and debug a beefy Linux VPS over SSH than fiddle with various proprietary cloud APIs/interfaces. It doesn't go as low-level as Watts, bits, and FLOPs, but I still consider knowledge about Linux more valuable than knowing which Azure knobs to turn.
15 years ago or so, a spreadsheet was floating around where you could enter server costs, compute power, etc., and it would tell you when you would break even by buying instead of going with AWS. I think it was leaked from Amazon, because the answer was always three years to break even, even as hardware changed over time.
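A toy version of that spreadsheet is only a few lines; all the inputs below are made-up illustrative numbers, not anything leaked from Amazon:

```python
def months_to_break_even(server_capex: float,
                         monthly_colo_cost: float,
                         monthly_cloud_cost: float) -> float:
    """Months until buying hardware beats renting the equivalent in the cloud."""
    monthly_savings = monthly_cloud_cost - monthly_colo_cost
    if monthly_savings <= 0:
        return float("inf")  # under these inputs the cloud never loses
    return server_capex / monthly_savings

# Example: a $20k server plus $300/mo colo vs. a $900/mo cloud equivalent.
print(f"{months_to_break_even(20_000, 300, 900):.0f} months")  # ~33 months, i.e. roughly 3 years
```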
There's the HN I know and love
I love articles like this and companies with this kind of openness. Mad respect to them for this article and for sharing software solutions!
Realistically, it's the speed with which you can expand and contract. The cloud gives unbounded flexibility, not on the per-request scale or whatever, but on the per-project scale. Trying things out with a bunch of EC2s or GCEs is cheap: you have it for a while and then you let it go. I say this as someone with terabytes of RAM in servers and a cabinet in the Bay Area.
> We use SSDs for reliability and speed.
Hey, how do SSDs fail these days? Do they ... vanish off the bus still? Or do they go into read-only mode?
I cancelled my DigitalOcean server of almost a decade late last year and replaced it with a Raspberry Pi 3 that was doing nothing. We can do it, and we should do it.
> San Diego power cost is over 40c/kWh, ~3x the global average. It’s a ripoff, and overpriced simply due to political dysfunction.
Would anyone mind elaborating? I always thought this was a direct result of the free market. Not sure if by "dysfunction" the OP means a lack of intervention.
Microsoft made the TCO argument and won. Self-hosting is only an option if you can afford expensive SysOps/DevOps/WhateverWeAreCalledTheseDays to manage it.
I just read about Railway doing something similar. Sadly, their prices are still high compared to other bare-metal providers and even VPS hosts such as Hetzner with Dokploy: a very similar feature set, yet for the same 5 dollars you get way more CPU, storage, and RAM.
I've just shifted to Hetzner, no regrets.
Look at the bottom of that page:
An error occurred: API rate limit already exceeded for installation ID 73591946.
Error from https://giscus.app/
The fellow says one thing and uses another.
The observation about incentives is underappreciated here. When your compute is fixed, engineers optimize code. When compute is a budget line, engineers optimize slide decks. That's not really a cloud vs on-prem argument, it's a psychology-of-engineering argument.
Well, their comment section is for sure not running on-premises, but in the cloud:
"An error occurred: API rate limit already exceeded for installation ID 73591946."
This is a great solution for a very specific type of team but I think most companies with consistent GPU workloads will still just rent dedicated servers and call it a day.
This is an industry we're[0] in. Owning is at one end of the spectrum, cloud at the other, and there are broadly a couple of options in between:
1 - Cloud – This minimises cap-ex, hiring, and risk, while largely maximising operational costs (it's expensive) and cost variability (usage-based).
2 - Managed Private Cloud – What we do. Still minimal-to-no cap-ex, hiring, or risk, and a medium-sized operational cost (around 50% cheaper than AWS et al). We rent or colocate bare metal, manage it for you, handle software deployments, deploy only open source, etc. Only really makes sense above €$5k/month spend.
3 - Rented Bare Metal – Let someone else handle the hardware financing for you. Still minimal cap-ex, but with greater hiring/skilling and risk. Around 90% cheaper than AWS et al (plus time).
4 - Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills, scale, cap-ex, and if you plan to run the servers for at least 3-5 years.
A good provider for option 3 is someone like Hetzner. Their internal ROI on server hardware seems to be around the 3 year mark. After which I assume it is either still running with a client, or goes into their server auction system.
Options 3 & 4 generally become more appealing either at scale, or when infrastructure is part of the core business. Option 1 is great for startups who want to spend very little initially, but then grow very quickly. Option 2 is pretty good for SMEs with baseline load, regular-sized business growth, and maybe an overworked DevOps team!
[0] https://lithus.eu, adam@