How much does it cost to run these?
I see mentions of Claude and I assume all of these tools connect to a third party LLM api. I wish these could be run locally too.
You need very high-end hardware to run the largest SOTA open models at reasonable latency for real-time use. The minimum requirements are quite low, but then responses will be much slower, and a sluggish agent won't be able to usefully browse the web or drive many external services.
$3k Ryzen AI Max PCs with 128GB of unified RAM are said to run this reasonably well. But don't quote me on it.
You can run openclaw locally against ollama if you want. But the models distilled/quantized enough to fit on consumer hardware can be considerably lower quality than the full models.
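For the curious: Ollama serves an OpenAI-compatible API on `http://localhost:11434/v1` by default, so any tool that lets you override the API base URL can be pointed at it. A minimal sketch of what such a request looks like (the model name and the `build_chat_request` helper are illustrative, not part of openclaw):

```python
# Sketch: building a chat-completion request against a local Ollama server.
# Ollama's OpenAI-compatible endpoint lives at http://localhost:11434/v1.
# The model tag here is just an example of something pullable via `ollama pull`.

def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:11434/v1"):
    """Assemble the URL and JSON body for a chat completion request."""
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return url, body

url, body = build_chat_request("llama3.1:8b", "Summarize this repo.")
# POST `body` as JSON to `url` with your HTTP client of choice.
```

Whether openclaw itself exposes a base-URL override is tool-specific; check its config docs before assuming this works.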