I have a fairly maxed out M2 Ultra (24 cores, 192GB RAM), and still cannot get this machine to choke on anything.
I have not once felt the need to upgrade in years, and that’s with doing pretty demanding 3D and LLM work.
I've found current-generation Macs so capable that I've switched to using a MacBook Air. Would strongly recommend it - it's still a powerful machine, and it's significantly lighter and cheaper.
I have a powerful older Mac that doesn’t really “choke” on anything, but I could always use more speed.
The high-memory Macs have been great for running LLMs, but prompt processing has always been on the slow side. The new AI acceleration in these should help with that.
There are also workloads like compiling code where I’ll take all the extra speed I can get. Every little bit of reduced cycle time helps me finish earlier in the day.
And then there’s gaming. I don’t game much, but the M1- and M2-era Apple Silicon feels sluggish relative to what I have on the NVIDIA side.
It definitely chokes with larger models that can fill the 192GB of RAM. Prompt processing is a big bottleneck before the M5. AI video generation can fairly easily choke anything that isn't NVIDIA's flagship card. Even the latest local image-gen models are so large that they can be frustratingly slow on non-optimal hardware, even when they fit in VRAM. IIRC, when I had an M2, it was about 4x slower at running the venerable Stable Diffusion (and SDXL) than my meager RTX 3060.
Sounds pretty beefy. What kind of local LLM is that thing capable of running? Does it open up real alternatives to cloud providers like OpenAI and Claude, or are the local models this hardware is capable of running still pretty far behind?
Yeah I have an M1 Max, and I really want to upgrade, but there’s no reason to.
You might have confused Hacker News with your e-mail inbox again. This is an Apple press release, directed to everybody in the world who might be interested in a new computer or their first computer.
If there’s anything these past three years have taught me, it’s that modern CPUs can performantly do every task except streaming text over the internet.