"LLM backends: Anthropic, OpenAI, OpenRouter."
And here I was hoping that this was local inference :)
Haha, well, I've got something ridiculous coming soon for zclaw that will kinda work on-board... it'll require the S3 variant though, since it needs a little more memory. Training it later today.
Right, 888 kB would be impossible for local inference.
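For a rough sense of why, here's a back-of-envelope sketch only: int8 weights at one byte per parameter, ignoring activations and any KV cache entirely. The 888 kB figure is the client binary from the thread; the SRAM/PSRAM numbers are for a stock ESP32 versus a common 8 MB PSRAM S3 module (e.g. the N16R8):

```python
# Back-of-envelope: how many model parameters fit in each memory budget,
# assuming int8 quantization (1 byte/param) and nothing but weights.

BUDGETS_KB = {
    "ESP32 SRAM": 520,                  # stock ESP32, no PSRAM
    "client binary": 888,               # the 888 kB figure from the thread
    "ESP32-S3 + 8MB PSRAM": 8 * 1024,   # common S3 modules (e.g. N16R8)
}

def params_per_budget(kb: int, bytes_per_param: float = 1.0) -> float:
    """Parameters that fit in `kb` kilobytes at the given quantization."""
    return kb * 1024 / bytes_per_param

for name, kb in BUDGETS_KB.items():
    print(f"{name:>22}: ~{params_per_budget(kb) / 1e6:.2f}M params at int8")

# Output (roughly):
#             ESP32 SRAM: ~0.53M params at int8
#          client binary: ~0.91M params at int8
#   ESP32-S3 + 8MB PSRAM: ~8.39M params at int8
```

So even on the S3 with PSRAM you're capped in the single-digit millions of parameters before activations eat into it, which squares with "kinda work".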
However, 888 kB really isn't that impressive for what is just a client.
Sure. Why purchase an H200 if you can go with an ESP32 ^^