Hacker News

thih9 · today at 1:49 PM

How much does it cost to run these?

I see mentions of Claude and I assume all of these tools connect to a third party LLM api. I wish these could be run locally too.


Replies

kube-system · today at 6:51 PM

You can run openclaw locally against Ollama if you want. But models that are distilled/quantized enough to run on consumer hardware can have considerably poorer quality than the full models.
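As a rough sketch of what "against ollama" means in practice (the exact openclaw settings may differ; the env var names below are the common OpenAI-client convention, not confirmed openclaw flags): Ollama serves an OpenAI-compatible API on localhost port 11434, so tools that accept a custom base URL and model name can be pointed at it.

```shell
# Pull a small local model (assumes Ollama is installed and running)
ollama pull llama3.1:8b

# Point an OpenAI-compatible client/agent at the local Ollama endpoint.
# Ollama ignores the API key, but many clients require one to be set.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"

# Quick smoke test of the endpoint:
curl "$OPENAI_BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "hi"}]}'
```

An 8B model at 4-bit quantization fits in roughly 5-6 GB of RAM, which is why quality drops so sharply compared to the frontier-scale models these tools are built around.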

zozbot234 · today at 1:57 PM

You need very high-end hardware to run the largest SOTA open models at reasonable latency for real-time use. The minimum requirements are actually quite low, but then responses are much slower, and at that speed your agent won't usefully be able to browse the web or use many external services.

hu3 · today at 2:36 PM

A $3k Ryzen AI Max PC with 128GB of unified RAM is said to run this reasonably well. But don't quote me on it.