
jareds · yesterday at 6:07 PM

What's the current situation for coding with local LLMs on decent hardware? I have an M3 Max with 64 GB of RAM and am thinking I should start looking at Ollama and Opencode. Is this a useful stack for smaller personal projects?
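(For context, the core of that stack is just a coding agent talking to Ollama's local HTTP API. A minimal sketch of that interaction in Python, assuming Ollama is serving on its default port 11434 and you've already pulled a coding model; the model name "qwen2.5-coder" is a placeholder, substitute whatever you actually run:)

    # Sketch: send a prompt to a locally running Ollama server and
    # print the completion. Assumes `ollama serve` is up and the named
    # model has been pulled beforehand.
    import json
    import urllib.request

    def generate(prompt: str, model: str = "qwen2.5-coder") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # get the whole completion in one JSON response
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's default endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(generate("Write a Python function that reverses a string."))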


Replies

speedgoose · yesterday at 7:53 PM

It’s getting there. You could give Qwen 3.6 a try. It’s still worth paying for better models in the cloud, but local models are now better than nothing.

pohl · yesterday at 6:33 PM

One nice recent development was Ollama's support for MLX optimization on Mac hardware. It's not yet obvious how to tell whether the model you're running actually uses it, so it's still rough around the edges.

https://ollama.com/blog/mlx

satvikpendem · yesterday at 8:43 PM

Use llama.cpp, or better yet, Unsloth Studio.
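(If you go the llama.cpp route, its bundled llama-server exposes an OpenAI-compatible HTTP API, so most coding agents can point at it directly. A minimal sketch of querying it from Python, assuming llama-server is running with a GGUF model loaded on its default port 8080; port and prompt are illustrative:)

    # Sketch: query llama.cpp's built-in server via its
    # OpenAI-compatible chat completions endpoint. Assumes
    # `llama-server -m <model>.gguf` is already running locally.
    import json
    import urllib.request

    payload = json.dumps({
        "messages": [
            {"role": "user", "content": "Explain Rust lifetimes in two sentences."}
        ],
        "max_tokens": 256,  # cap the reply length
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # llama-server default port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])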