Hacker News

ggregoire · today at 5:20 AM · 2 replies

> scaled up by increasing the instance size

I've always wondered what kind of instances companies at that scale are using. Anyone here have some ideas? How much CPU/RAM? Do they use the same instance types available to everyone, or do AWS and co. offer custom hardware for these big customers?


Replies

jiggawatts · today at 5:27 AM

The major hyperscalers all offer a plethora of virtual-machine SKUs that are essentially an entire two-socket box with many-core CPUs.

For example, Azure Standard_E192ibds_v6 is 96 cores with 1.8 TB of memory and 10 TB of local SSD storage delivering 3 million IOPS.

Beyond those "general purpose" VMs, you get the enormous machines with 8, 16, or even 32 sockets.[1] These are almost exclusively used for SAP HANA in-memory databases or similar ERP workloads.

Azure Standard_M896ixds_24_v3 provides 896 cores, 32 TB of memory, and 185 Gbps Ethernet networking. This is generally available, but you have to request the quota through a support ticket, and you may have to wait and/or get your finances "approved" by Microsoft. Something like this will set you back around $175K per month. (I suspect OpenAI is getting a huge effective discount.)
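For a back-of-the-envelope feel for that price, here is the per-unit arithmetic (a sketch; the $175K/month figure is the commenter's estimate rather than list pricing, and the 730-hour billing month is an assumed convention):

```python
# Rough per-unit cost of the M-series giant, using the figures quoted above.
# The $175K/month number is the commenter's estimate, not a published price.
monthly_cost_usd = 175_000
cores = 896
memory_tb = 32
hours_per_month = 730  # common cloud-billing approximation (24 * 365 / 12)

hourly = monthly_cost_usd / hours_per_month
per_core_hour = hourly / cores
per_gb_month = monthly_cost_usd / (memory_tb * 1024)

print(f"~${hourly:,.0f}/hour, ~${per_core_hour:.2f}/core-hour, ~${per_gb_month:.2f}/GB-month")
```

That comes out to roughly $240/hour, or about 27 cents per core-hour, which is in the same ballpark as much smaller on-demand VMs; you mostly pay for the privilege of having it all in one coherent memory space.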

Personally, I'm a fan of "off label" use of the High Performance Compute (HPC) sizes[2] for database servers.

The Standard_HX176rs HPC VM size gives you 176 cores and 1.4 TB of memory. That's similar to the E-series VM above, but with a higher compute-to-memory ratio. The memory throughput is also far better because it has HBM chips acting as an L3 (or L4?) cache. In my benchmarks it absolutely smoked the general-purpose VMs at a similar price point.
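To make the compute-to-memory comparison concrete, here is the memory-per-core arithmetic for the two sizes discussed above (figures are as quoted in the comment, not verified against current Azure docs):

```python
# Memory-per-core comparison of the two VM sizes discussed above.
# Specs are taken from the comment, not checked against Azure documentation.
sizes = {
    "Standard_E192ibds_v6": {"cores": 96, "memory_gb": 1.8 * 1024},
    "Standard_HX176rs": {"cores": 176, "memory_gb": 1.4 * 1024},
}

for name, spec in sizes.items():
    gb_per_core = spec["memory_gb"] / spec["cores"]
    print(f"{name}: {gb_per_core:.1f} GB per core")
```

The E-series box gives you roughly 19 GB per core versus about 8 GB per core on the HX size, so for a database whose working set fits, the HX trades "spare" memory capacity for more than twice the cores per gigabyte.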

[1] https://learn.microsoft.com/en-us/azure/virtual-machines/siz...

[2] https://learn.microsoft.com/en-us/azure/virtual-machines/siz...
