Hacker News

tomrod · 01/20/2025 · 2 replies · view on HN

Can you recommend hardware needed to run these?


Replies

simonw · 01/21/2025

I'm using an M2 MacBook Pro with 64GB of RAM. For the Llama 8B one I would expect 16GB to be enough.

I don't have any experience running models on Windows or Linux, where your GPU VRAM becomes the most important factor.
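A rough rule of thumb for the memory question is weight count times bits per weight, plus some headroom. This is a minimal sketch, not from the thread: `model_memory_gb` is a hypothetical helper, and the 20% overhead factor is an assumption standing in for KV cache and runtime buffers.

```python
def model_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough memory footprint of model weights, in GB.

    overhead=1.2 is an assumed ~20% margin for KV cache and runtime
    buffers; real usage varies with context length and backend.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# An 8B model at 4-bit quantization lands around ~4.8 GB,
# comfortably inside 16 GB of RAM.
print(model_memory_gb(8, 4))

# The same model at fp16 needs roughly ~19 GB, which is why
# quantized builds are what most people run locally.
print(model_memory_gb(8, 16))
```

The same arithmetic explains why VRAM is the limiting factor on a discrete GPU: the weights have to fit in whatever memory pool the model runs from.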

yencabulator · 01/21/2025

ollama runs deepseek-r1:7b on an AMD 8945HS, CPU-only, at ~12 tokens/s. You can get started pretty easily in the ~7B model range, for learning purposes.
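To put ~12 tokens/s in perspective, here is a trivial sketch (the `seconds_to_generate` helper and the 500-token response length are illustrative assumptions, not from the comment):

```python
def seconds_to_generate(num_tokens, tokens_per_second=12):
    # 12 tok/s is the CPU-only rate reported above; actual speed
    # depends on model size, quantization, and hardware.
    return num_tokens / tokens_per_second

# A ~500-token answer at 12 tok/s takes about 42 seconds.
print(round(seconds_to_generate(500)))
```

That's slow compared to a GPU, but plenty usable for interactive experimentation, since it's faster than most people read.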