Hacker News

ein0p · 01/20/2025 · 3 replies

Downloaded the 14B, 32B, and 70B variants to my Ollama instance. All three are very impressive, subjectively much more capable than QwQ; the 70B especially, unsurprisingly. I gave them some coding problems, and even the 14B did a pretty good job. I wish I could collapse the "thinking" section in Open-WebUI, and the chat title is currently generated incorrectly: by default the same model that generates the response also generates the title, so the title begins with "<thinking>". Be that as it may, I think these will be the first "locally usable" reasoning models for me. URL for the checkpoints: https://ollama.com/library/deepseek-r1
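
For anyone who wants to try the same thing, here is a minimal sketch of querying one of these checkpoints through Ollama's local HTTP API (/api/generate on the default port 11434) and stripping the reasoning block before reusing the output, e.g. as a chat title. The model tag deepseek-r1:14b matches the linked library page; the tag-stripping regex is an assumption about how the reasoning section is delimited, and this is a workaround illustration, not Open-WebUI's actual title-generation code.

    import re

    import requests  # third-party; assumes `pip install requests`

    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


    def ask(model: str, prompt: str) -> str:
        """Send a prompt to a locally running Ollama model and return the full reply."""
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=600,
        )
        resp.raise_for_status()
        return resp.json()["response"]


    def strip_thinking(text: str) -> str:
        """Drop the reasoning block (assumed to be wrapped in <think> or
        <thinking> tags) so only the final answer remains -- e.g. before
        reusing the output as a chat title."""
        return re.sub(
            r"<think(?:ing)?>.*?</think(?:ing)?>", "", text, flags=re.DOTALL
        ).strip()


    if __name__ == "__main__":
        raw = ask("deepseek-r1:14b", "Summarize merge sort in one sentence.")
        print(strip_thinking(raw))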


Replies

atlacatl_sv · 01/29/2025

Thanks for sharing your experience with the 14B, 32B, and 70B variants! I'm curious, what hardware setup are you using to run these models on your Ollama instance?

buyucu · 01/20/2025

I don't think giving coding problems to a model on its own is a fair test. Almost all commercial models combine the model with RAG and web search, and I find that most correct answers come from that retrieval, not from the model itself.
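
To make the distinction concrete, here is a minimal sketch under stated assumptions: the same local model is asked once with only its weights and once with retrieved text pasted into the prompt. The retrieve function is a hypothetical stand-in for a commercial product's web-search/RAG step, and the endpoint is Ollama's default /api/generate as in the sketch above.

    import requests  # third-party; assumes `pip install requests`


    def retrieve(query: str) -> str:
        # Hypothetical stand-in: a commercial assistant would run web search
        # or a vector-store lookup here and return the top documents.
        return "(retrieved documents for the query would be inserted here)"


    def ask(model: str, prompt: str) -> str:
        # Same default local Ollama endpoint as in the sketch above.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=600,
        )
        resp.raise_for_status()
        return resp.json()["response"]


    question = "How do I silence this pandas FutureWarning?"

    # 1) Model alone: the answer can only come from its training data.
    bare = ask("deepseek-r1:14b", question)

    # 2) RAG-style: retrieved context is prepended, so much of the "knowledge"
    #    in the answer comes from the prompt rather than the weights.
    augmented = (
        f"Answer using the context below.\n\n"
        f"Context:\n{retrieve(question)}\n\nQuestion: {question}"
    )
    with_rag = ask("deepseek-r1:14b", augmented)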

Havoc · 01/21/2025

LibreChat handles artifact-like sections better than Open WebUI, so I suspect it'll get support for collapsing the thinking section first.

It feels much heavier/slower as an app, though.
