Can you recommend hardware needed to run these?
I'm using an M2 64GB MacBook Pro. For the Llama 8B one I would expect 16GB to be enough.
I don't have any experience running models on Windows or Linux, where your GPU VRAM becomes the most important factor.
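The 16GB estimate for an 8B model can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming 4-bit quantization (roughly 0.5 bytes per parameter, a common default in local runners like ollama) plus a rough, hypothetical overhead figure for the KV cache and runtime:

```python
# Rough memory estimate for running a local LLM.
# Assumptions (not from the thread): ~0.5 bytes/param at 4-bit quantization,
# and a flat ~2 GB of overhead for KV cache and runtime; real numbers vary
# with quantization format and context length.

def model_memory_gb(params_billions: float,
                    bytes_per_param: float = 0.5,
                    overhead_gb: float = 2.0) -> float:
    """Approximate RAM/VRAM needed: quantized weights plus overhead."""
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9
    return weights_gb + overhead_gb

print(model_memory_gb(8))                        # ~6 GB at 4-bit
print(model_memory_gb(8, bytes_per_param=2.0))   # ~18 GB at fp16
```

Under these assumptions an 8B model at 4-bit fits well within 16GB, while running the same model unquantized at fp16 would not.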
ollama runs deepseek-r1:7b on an AMD 8945HS, CPU-only, at ~12 tokens/s. You can get started pretty easily in the ~7B model range for learning purposes.