They may be trying to sell through the existing CPU before a launch (soft or not) of the M5-based versions (though I've heard the rumor is there will be no M5 Ultra and we might be looking at an M6 Ultra later in the year).
I think it's unlikely that Apple is paying the spot price for memory. They almost certainly negotiate delivery/price contracts in advance. Maybe the contract for the chips used in the 512GB model will expire soon?
Not related, but why is the word "quiet" or "quietly" suddenly everywhere?
I was just looking to buy a Raspberry Pi 5: the 8GB one is now 58% more expensive than last year; that's more than what I'm willing to pay.
> Pricing for the 256GB configuration has also increased, from $1,600 to $2,000
Inventory is tight too; if you look at delivery/shipping times for the Mac Studio and Mac Mini, I'm seeing April/May.
I'm trying to work out if I should buy a 48GB M4 Pro Mac Mini now, or wait for the M5 Pro ones later this year. For AI/ML purposes, mostly. As far as I can tell, the new M5 MacBooks didn't go up much, if at all, for the same amount of RAM?
I tried finding a 128 - 256GB Mac Studio online and most options would not be shipped for at least "8-10 weeks".
Just got a Strix Halo ROG Z13 this month with soldered UMA memory, 128GB LPDDR5X-8000. It cost ~$3k.
Amazon is selling 128GB memory kits @ 5600MHz for $3k.
I think there might be a market failure, guys.
So, the question I wonder: is that it for this tier? Will we even see a 512GB variant of the next model?
I wonder if the war with Iran could actually fix the RAM shortage. If this continues it really could put a damper on datacenter rollout.
The Mac Studio's highest config was a great value for AI workloads, at least for inference, and no one is reporting this….
More likely that the M5 Max Studio is coming out. The M5 Max MacBook Pros just came out.
Also, the 512 GB SSD version has a slower SSD than anything 1 TB and up. I believe the new SSDs on the M5 machines are much faster, and whatever's coming will likely get them.
There's no doubt there's a RAM shortage and price increases. The biggest companies in the world lock in their pricing well in advance, and the leftovers are where consumers experience the shortages.
The rumor from Gurman is that the M5 Ultra Mac Studio ships in the first half of this year.
This may just be a sign that the M5 Ultra Mac Studio is shipping sooner rather than later, as it's common for Apple to push out ship dates for soon-to-be-replaced products.
We do have leaked benchmarks showing that the M5 Max outperforms the M3 Ultra currently shipping in the Mac Studio, so buying an M3 Ultra Studio right now would be a terrible idea.
Why can't RAM be done in-house? It's probably simpler than a CPU... right?
I’ve been in tech for a long time and have seen RAM shortages and price spikes before. This one’s fairly bad, but they resolve in 1-3 years.
Apple recently introduced RDMA support in macOS. They are probably trying to push the people buying the 512GB configuration towards buying more of the 256GB configurations and clustering them together.
Now how am I supposed to develop Electron apps and use Chrome?
In all seriousness, though, as one of the uninitiated, what would be the value of hosting LLMs on a machine like this that has a lot of memory that you pay for up front versus some sort of VPC-based approach?
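One way to frame it is break-even time against renting. A rough sketch, where the machine price, cloud hourly rate, and usage hours are all placeholder assumptions for illustration, not real quotes:

```python
# Back-of-the-envelope break-even: buying a high-memory machine upfront
# vs. renting comparable cloud capacity. All numbers are assumptions.
machine_cost = 9500.0        # assumed upfront price of a maxed-out machine (USD)
cloud_rate_per_hour = 4.0    # assumed hourly rate for a comparable cloud instance
hours_per_month = 160.0      # assumed usage: ~8 h/day, 20 days/month

monthly_cloud_cost = cloud_rate_per_hour * hours_per_month
break_even_months = machine_cost / monthly_cloud_cost
print(f"Cloud: ${monthly_cloud_cost:.0f}/month; break-even in ~{break_even_months:.1f} months")
```

Under those assumptions the hardware pays for itself in a bit over a year of steady use; the non-cost factors (data privacy, no per-token metering, resale value, but also no access to bigger cloud GPUs) usually matter as much as the arithmetic.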
You can keep your 512GB Mac Studio; I'm running qwen3.5:0.8b on an Orange Pi Zero 2W and learning just as much as they are.
IMO it's more nuanced. They're likely in a production ramp-up of the M5 Ultra Mac Studio, for release in the next ~3 months; they have pre-purchased bins of memory from the supply-constrained major memory suppliers; and they need as much as they can get, because they want to push an M5 Ultra config to 768GB to continue the "you can run local models" story that the M5 Max MacBook Pro started telling last week.
Going beyond 512GB to 768GB crosses a threshold that will let Apple claim local capability for significantly more models. Qwen3-235B, Minimax M2.5, and GLM 4.7 could sort of run unquantized on 512GB, but they'll run comfortably at 768GB. DeepSeek-V3.2 and GLM 5 may also work at some level of quantization.
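The arithmetic behind those thresholds is roughly weights ≈ parameter count × bytes per parameter (this ignores KV cache and runtime overhead, which is why "fits" and "runs comfortably" differ). A quick sketch, using assumed round-number parameter counts for illustration rather than exact figures from any model card:

```python
# Rough model-weight memory estimates. Ignores KV cache, activations,
# and framework overhead, so real headroom requirements are higher.
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * bytes_per_param

# Assumed parameter counts, for illustration only.
models = {"~235B model": 235, "~400B model": 400, "~670B model": 670}

for name, b in models.items():
    bf16 = weight_gb(b, 2.0)  # BF16/FP16: 2 bytes per parameter
    q4 = weight_gb(b, 0.5)    # ~4-bit quantization: ~0.5 bytes per parameter
    print(f"{name}: ~{bf16:.0f} GB at 16-bit, ~{q4:.0f} GB at ~4-bit")
```

So a 235B model at 16-bit needs ~470 GB of weights alone, which is exactly why it only "kind of" fits in 512GB but breathes at 768GB, while 600B+ models need quantization even there.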