Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.
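A minimal sketch of what that compatibility means in practice: every OpenAI-compatible provider accepts the same `/v1/chat/completions` request shape, so only the base URL (and usually a dummy API key locally) changes. The base URLs below are the documented defaults for each server; the helper function is illustrative, not part of Atomic.

```python
import json

# Default base URLs -- Ollama and LM Studio both expose an
# OpenAI-compatible HTTP server on these ports out of the box.
PROVIDERS = {
    "openai":    "https://api.openai.com/v1",
    "ollama":    "http://localhost:11434/v1",
    "lm_studio": "http://localhost:1234/v1",
}

def chat_request(provider: str, model: str, prompt: str) -> tuple[str, str]:
    """Build (url, json_body) for a chat completion; only the URL
    differs between providers -- the payload is identical."""
    url = f"{PROVIDERS[provider]}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = chat_request("ollama", "llama3.2", "hello")
print(url)  # http://localhost:11434/v1/chat/completions
```

Because the payload is the same everywhere, swapping cloud inference for local inference is a one-line config change, which is what makes the next question worth asking.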
But local-first !== defaults to local inference, right?