Can someone explain why OpenAI is buying DDR5 RAM specifically? I thought LLMs typically ran on GPUs with specialised VRAM, not on main system memory. Have they figured out how to scale using regular RAM?
They didn't buy DDR5; they bought raw wafer capacity, and a ton of it at that.
They're not. They're buying wafers / production capacity that will go toward making HBM instead, which leaves less capacity for DDR5 and tightens its supply.