I think the OpenAI deal to lock up wafers was a wonderful coup. OpenAI is increasingly losing ground to the steady cadence[0] of improvements coming from Anthropic, Google, and even the open-weights models. By creating a choke point at the hardware level, OpenAI can keep the competition from expanding their reach for lack of hardware.
[0]: For me this is really an important part of working with Claude: the model improves over time but stays consistent. Its "personality," or whatever you want to call it, has been really stable across recent versions, which allows a very smooth transition from version N to N+1.
I think the article should also mention how OpenAI is likely responsible for it. Good article I found in another thread here yesterday: https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...
Perhaps we'll have to start optimizing software for performance and RAM usage again.
I look at MS Teams currently using 1.5GB of RAM doing nothing.
Anyone want to start a fab with me? We can buy an ASML machine and figure out the rest as we go. Toronto area btw
I wonder if these RAM shortages are going to cause the Steam Machine to be dead on arrival. Valve is probably not a big enough player to have secured production guarantees the way Sony or Nintendo would have. If they try to launch with a price tag over $750, they're probably not going to sell a lot.
> And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
> So they're shutting down their consumer memory lines, and devoting all production to AI.
Okay this was the missing piece for me. I was wondering why AI demand, which should be mostly HBM, would have such an impact on DDR prices, which I’m quite sure are produced on separate lines. I’d appreciate a citation so I could read more.
Red chip supply problems in your factory are usually caused by insufficient plastic bars, which is usually caused by oil production backing up because you're not consuming your heavy oil and/or petroleum fast enough.
Crack heavy oil to light, and turn excess petroleum into solid fuel. As a further refinement, you can put these latter conversions behind pumps, and use the circuit network to only turn the pumps on when the tank storage of the respective reagent is higher than ~80%.
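The circuit-network condition described above is just a threshold gate. A toy Python sketch of the logic (tank capacity and function names are illustrative, not Factorio's actual values or API):

```python
# Toy model of the circuit condition: enable a cracking pump only
# while the source tank holds more than ~80% of its capacity.
TANK_CAPACITY = 25_000  # fluid units per storage tank (illustrative)
THRESHOLD = 0.80

def pump_enabled(tank_level: float) -> bool:
    """Mirrors the circuit-network condition 'tank > 80% full'."""
    return tank_level > THRESHOLD * TANK_CAPACITY

# Heavy oil tank nearly full -> start cracking to light oil.
print(pump_enabled(24_000))  # True
# Tank mostly empty -> keep the oil for lubricant / solid fuel.
print(pump_enabled(5_000))   # False
```

The point of the threshold is to only burn off the surplus: cracking and solid-fuel conversion stay idle until the tanks back up, so the base never starves itself of heavy oil or petroleum.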
hth, glhf
I updated a $330 new HP laptop (it flexes like cardboard) from 8GB to 32GB in May. Cost back then: $44. Today, the same kit costs a ridiculous $180.
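For scale, the price jump in that comment works out to roughly a 4x increase in about half a year:

```python
# RAM kit price change cited above (May vs. today).
old_price = 44.0   # USD, May
new_price = 180.0  # USD, today
multiplier = new_price / old_price
print(f"{multiplier:.1f}x")  # 4.1x
```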
https://tomverbeure.github.io/2025/03/12/HP-Laptop-17-RAM-Up...
"dig into that pile of old projects you never finished instead of buying something new this year."
You don't need a new PC. Just use the old one.
I wonder if Apple will budge. The margins on their RAM upgrades were so ludicrous before that they're probably still RAM-profitable even without raising their prices, but do they want to give up those fat margins?
Not a bad time for a secondary market to develop. We keep buying everything new when the old stuff works just as well, and there is a ton of e-waste. The enthusiast market can benefit, while the enterprise market can just eat the cost.
Also, a great incentive to start writing efficient software. Does Chrome really need 5GB to run a few tabs?
Can someone explain why OpenAI is buying DDR5 RAM specifically? I thought LLMs typically ran on GPUs with specialised VRAM, not on main system memory. Have they figured out how to scale using regular RAM?
> And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
I wouldn't ascribe that much intent. More simply, datacenter builders have bought up the entire supply (and likely future production for some time), hence the supply shortfall.
This is a very simple supply-and-demand situation, nothing nefarious about it.
My understanding is that this is primarily hitting DDR5 RAM (and newer). With prices so inflated, is there an argument for downgrading systems to DDR4 in many use cases, since DDR4 prices are not so inflated?
I know this is mostly paranoid thinking on my part, but it almost feels like a conscious effort to destroy "personal" computing.
I've been a huge advocate for local, open, generative AI as the best resistance to massive take-over by large corporations controlling all of this content creation. But even as it is (or "was" I should say), running decent models at home is prohibitively expensive for most people.
Micron has already decided to just eliminate the Crucial brand (as mentioned in the post). It feels like if this continues, once our nice home PCs start to break, we won't be able to repair them.
The extreme version of this is that even dumb terminals (which still require some ram) will be as expensive as laptops today. In this world, our entire computing experience is connecting a dumb terminal to a ChatGPT interface where the only way we can interact with anything is through "agents" and prompts.
In this world, OpenAI is not overvalued, and there is no bubble because the large LLM companies become computing.
Again, I think this is mostly dystopian sci-fi... but it sits a bit too close to the realm of the possible for my tastes.
I think I paid like $500 ($1300 today) in 1989-ish to upgrade from 2MB to 5MB of RAM (had to remove 1MB to add 4MB).
I just gathered enough money to build my new PC. I'll even go to another country to pay less in taxes, and this spike hit me hard. I'll buy anyway, because I don't believe it will slow down soon. But yeah, for me it's a lot of money.
> maybe it's a good time to dig into that pile of old projects you never finished instead of buying something new this year.
Always good advice.
The bullwhip effect on this will be funny. At least we're in for some cheap RAM in, like, a dozen months or so.
I wonder how much of that RAM is sitting in GPUs in warehouses waiting for datacenters to be built or powered?
Called it! About a year ago (or more?) I thought Nvidia was overpriced, and that if AI was coming to PCs, RAM would be important, so it might be good to invest in DRAM makers. As usual I didn't do anything with my insight, and here we are. Micron has more than doubled since summer.
I like this chart which shows multiple generations from September and it's only worsened.
The article suggests that because the power and cooling are customized, it would take a ton of effort to run the new AI servers in a home environment, but I'm skeptical of that. Home-level power and cooling are not difficult these days. I think when the next generation of AI hardware comes out (in 3-5 years), there will be a large supply of used AI hardware that we'll probably be able to repurpose. Maybe we'll sell them as parts. It won't be plug-and-play at first, but companies will spring up to figure it out.
If not, what would these AI companies do with the huge supply of hardware they're going to want to get rid of? I think a secondary market is sure to appear.
Anybody care to speculate on how long this is likely to last? Is this a blip that will resolve itself in six months, or is the demand sustainable, meaning we're talking years of building new manufacturing facilities to meet it?
The big three memory manufacturers (SK Hynix, Samsung, Micron) are essentially all moving upmarket. They have limited capacity and want to use it for high-margin HBM for GPUs and DDR5 for servers. At the same time, CXMT, Winbond, and Nanya are stepping in at the lower end of the market.
I don't think there is a conspiracy or price fixing going on here. Demand for high-margin memory is insatiable (at least until 2027, maybe beyond), and by the time extra capacity comes online and the memory crunch eases, the minor memory players will have captured such a large part of the legacy/consumer market that it makes little sense for the big three to get involved anymore.
Add to that the scars from overbuilding capacity during previous memory super cycles, and you end up with this perfect storm.
If the OpenAI Hodling Company buys and warehouses 40% of global memory production or 900,000 memory wafers (i.e. not yet turned into DDR5/DDR6 DIMMs) per month at price X in October 2025, leading to supply shortages and tripling of price, they have the option of later un-holding the warehoused memory wafers for a profit.
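Under the hypothetical numbers in that comment, the payoff from warehousing is simple to sketch. The normalized price X, the holding period, and the "tripling" are all taken from (or assumed on top of) the comment above:

```python
# Hypothetical warehousing play: buy wafers at price X, prices triple,
# later "un-hold" the hoard. All numbers are illustrative.
wafers_per_month = 900_000   # from the comment above
months = 6                   # assumed holding period
x = 1.0                      # normalized purchase price per wafer
resale_price = 3 * x         # "tripling of price"

hoard = wafers_per_month * months
profit = hoard * (resale_price - x)
print(profit / (hoard * x))  # 2.0: profit is twice the outlay
```

In other words, if the price triple holds, every dollar spent hoarding returns two dollars of profit before storage and insurance costs.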
https://news.ycombinator.com/item?id=46142100#46143535
Had Samsung known SK Hynix was about to commit a similar chunk of supply, or vice versa, the pricing and terms would likely have been different. It's entirely conceivable that neither would have agreed to supply such a substantial share of global supply had they known more. But at the end of the day, OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that each company assumed the other wasn't giving up this much wafer volume simultaneously, in order to make a surgical strike on the global RAM supply chain.
What's the economic value per warehoused and insured cubic inch of 900,000 memory wafers? Grok's response:

> As of late 2025, 900,000 finished 300 mm 3D NAND memory wafers (typical high-volume inventory for a major memory maker) are worth roughly $9 billion and occupy about 104–105 million cubic inches when properly warehoused in FOUPs. → Economic value ≈ $85–90 per warehoused cubic inch.
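Grok's last figure is at least internally consistent with the other two it gives, which is easy to check:

```python
# Sanity check of the quoted value density.
value_usd = 9e9       # "roughly $9 billion"
volume_in3 = 104.5e6  # "about 104-105 million cubic inches"
density = value_usd / volume_in3
print(round(density))  # 86, within the quoted $85-90 range
```

Whether the underlying $9 billion and volume estimates are right is another question entirely.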
Hopefully Apple uses their volume as leverage to avoid getting affected by this for as long as possible. I can ride it out if they manage to.
I can't help but take the pessimist angle. RAM production will need to increase to supply AI data centers. When the AI bubble bursts (and I do believe it will), the whole computing supply chain that has been built around it will take a huge hit too: excess production capacity.
Wonder what would happen if it really takes a dive. The impact on the SF tech scene will be brutal. Maybe I'll go escape on a sailboat for 3 years or something.
Anyway, tangential, but something I think about occasionally.
I wonder if we'll start to see instance type shortages or price increases from EC2 and GCP if they can't get enough DRAM for latest gen servers.
> The reason for all this, of course, is AI datacenter buildouts. I have no clue if there's any price fixing going on like there was a few decades ago—that's something conspiracy theorists can debate—but the problem is there's only a few companies producing all the world's memory supplies.
Law enforcement should investigate, as this concerns us all. However, I really don't have faith that the current administration would criminally investigate this. Maybe the next one will, if there's going to be one.
Companies are adamant about RAMming AI down our throats, it seems.
I think we kiss-of-death'ed the article, haha. Here's an archive: https://archive.is/6QD8c
I'm way ahead of all of you, I'm hoarding DDR2.
Is this a shortage of every type of RAM simultaneously?
panem et circenses
But what will happen when people are priced out from the circus?
time to stop using python boys, it's zig from here on out
1. Buy up 40% of global fab capacity
2. Resell wafers at huge markup to competitors
3. Profit
Also I feel that when this bubble bursts, we'll have much bigger problems than some expensive ram. More like the big 2007 crisis.
Only a matter of time before supply catches up and then likely overshoots (maybe combined with AI / datacenter bubble popping), and RAM becomes dirt cheap. Sucks for those who need it now though.
> But I've already put off some projects I was gonna do for 2026, and I'm sure I'm not the only one.
Let's be honest here - the projects I'm going to do in 2026, I bought the parts for those back in 2024. But this is definitely going to make me put off some projects that I might have finally gotten around to in 2028.
I am very excited for a few years from now when the bubble bursts and all this hardware is on the market for cheap, like back in the early-to-mid 2000s after that bubble burst and you had tons of old servers available for homelabs. I can't wait to fill a room with 50kW of bulk GPUs on a pallet and run some cool shit.
Disingenuous title, and no mention of those directly responsible: Sam Altman/OpenAI. Once again Jeff is desperate for clicks.
Ha! Maybe Javascript developers will finally drop memory usage! You need to display the multiplication table? Please allocate 1GB of RAM. Oh, you want alternate row coloring? Here is another 100MB of CSS to do that.
edit: this is a joke
I grabbed a Framework Desktop with 128GB due to this. I can't imagine they can keep the price down for the next batches. If you bought 128GB of RAM with specs close to what it uses, that alone would be 1200 EUR at retail (where sellers are obviously taking advantage).
Every shortage is followed by a glut. Wait and see for RAM prices to go way down. This will happen because RAM makers are racing to produce units to reap profits from the higher price. That overproduction will cause prices to crash.
32GB should be more than enough.
You can go 16GB if you go native and throw some assembly in the mix. Use old school scripting languages. Debloat browsers.
It has been long delayed.
This reminds me of the recent LaurieWired video presenting a hypothetical of, "what if we stopped making CPUs": https://www.youtube.com/watch?v=L2OJFqs8bUk
Spoiler, but the answer is basically that old hardware rules the day, because it lasts longer and is more reliable over timespans of decades.
DDR5 32GB is currently going for ~$330 on Amazon
DDR4 32GB is currently going for ~$130 on Amazon
DDR3 32GB is currently going for ~$50 on Amazon (4x8GB)
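Per gigabyte, those Amazon prices work out to roughly:

```python
# Price per GB for the listings above (all 32GB kits).
for gen, price in [("DDR5", 330), ("DDR4", 130), ("DDR3", 50)]:
    print(f"{gen}: ${price / 32:.2f}/GB")
# DDR5: $10.31/GB
# DDR4: $4.06/GB
# DDR3: $1.56/GB
```

So each step back in generation roughly halves the cost per GB, and then some.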
For anyone for whom cost is a concern, using older hardware seems like a particularly easy choice, especially if you're comfortable with a Linux environment: the massive droves of recently retired, Windows 11-incompatible hardware work great with your Linux distro of choice.