I think the OpenAI deal to lock up wafers was a wonderful coup. OpenAI is steadily losing ground against the regularity[0] of the improvements coming from Anthropic, Google, and even the open-weights models. By creating a chokepoint at the hardware level, OpenAI can prevent the competition from increasing their reach, simply through lack of hardware.
[0]: For me this is really an important part of working with Claude: the model improves over time but stays consistent. Its "personality", or whatever you want to call it, has been remarkably stable across recent versions, which allows a very smooth transition from version N to N+1.
Could this generate pressure to produce less memory-hungry models?
Is anyone else deeply perturbed by the realization that a single unprofitable corporation can basically buy out the entire world's supply of computing hardware so nobody else can have it?
How did we get here? What went so wrong?
> By creating a chokepoint at the hardware level, OpenAI can prevent the competition from increasing their reach, simply through lack of hardware
I already hate OpenAI, you don't have to convince me
This became very clear from the outrage, rather than excitement, when users were forced to upgrade from 4o to ChatGPT-5.
Please explain to me like I am five: Why does OpenAI need so much RAM?
2024 production was (according to openai/chatgpt) 120 billion gigabytes. With 8 billion humans that's about 15 GB per person.
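The arithmetic checks out; a trivial sketch of the division (the 120 billion GB figure is the comment's own ChatGPT-sourced estimate, not something I've verified independently):

```python
# Sanity check on the per-person DRAM figure quoted above.
# Both inputs are rough: 120 billion GB is the comment's
# ChatGPT-sourced 2024 production estimate, 8 billion is an
# approximate world population.
total_dram_gb = 120e9   # claimed 2024 DRAM production, in GB
population = 8e9        # approximate world population

per_person_gb = total_dram_gb / population
print(per_person_gb)    # 15.0
```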
Sure, but if the price is being driven up by inflated demand, then the suppliers will just build more factories until they hit a new, higher optimal production level, prices will come back down, and eventually process improvements will lead to price-per-GB resuming its overall downtrend.
I don't see this working against Google, though, since they make their own custom hardware in the form of the TPUs. Unless those designs include components (like memory) that are also susceptible?