I'm curious, what is the LLM cost of the website?
I'm curious, too. But it could probably run locally with a small model, right? The performance is stellar, which suggests some hardware acceleration is being used, but that could all be happening on a local system.