
Aurornis · yesterday at 3:09 PM

> With Apple devices you get very fast predictions once it gets going but it is inferior to nvidia precisely during prefetch (processing prompt/context) before it really gets going

I have both a Mac and an Nvidia build, and I'm not disagreeing.

But nobody is building a useful Nvidia LLM box for the price of a $500 Mac Mini.

You're also not getting as much RAM as a Mac Studio unless you're stacking multiple $8,000 Nvidia RTX 6000s.
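A rough sketch of that comparison, using illustrative numbers (the Mac Studio's maximum unified memory and the RTX 6000-class card's VRAM and price here are assumptions, not figures from the thread):

```python
import math

# Illustrative assumptions: a top Mac Studio config with ~512 GB
# unified memory vs. 96 GB VRAM per ~$8,000 RTX 6000-class card.
mac_studio_ram_gb = 512
gpu_vram_gb = 96
gpu_price_usd = 8_000

# How many cards to match the unified memory pool, and at what cost?
cards_needed = math.ceil(mac_studio_ram_gb / gpu_vram_gb)
gpu_cost = cards_needed * gpu_price_usd
print(cards_needed, gpu_cost)  # → 6 48000
```

Even before counting the chassis, PSU, and interconnect, matching the memory pool alone runs several multiples of the Mac's price under these assumptions.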

There is always something faster in LLM hardware. Apple is popular because it hits price points average consumers can actually reach.


Replies

kristianp · today at 2:42 AM

Not many are getting useful inference out of a $500 Mac Mini, since the base model has only 16GB of RAM.
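Some back-of-envelope arithmetic on why 16GB is tight (a simplified sketch: the `model_size_gb` helper is hypothetical, and the estimate ignores KV cache, context, and OS overhead):

```python
# Rough weight-memory footprint of an N-billion-parameter model
# at a given quantization level, in GB (weights only).
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model at 4-bit quantization fits comfortably in 16 GB...
print(round(model_size_gb(7, 4), 1))   # → 3.5

# ...but larger models people actually want, like a 70B at 4-bit,
# exceed what's left after the OS takes its share.
print(round(model_size_gb(70, 4), 1))  # → 35.0
```

So a 16GB machine caps you at small quantized models, which is where the "not useful" complaint comes from.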
