> LLMs: Ollama for local models (also private models for now)
Incidentally, I decided to try the Ollama macOS app yesterday, and the first thing it does upon launch is try to connect to some Google domain. Not very private.
But it can be audited, which I'd take any day. It's probably not too hard to find network calls in a codebase if that check needs to be automated on each update.
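As a rough sketch of that kind of audit: a recursive grep for hard-coded hostnames is a reasonable first pass over a checkout (the temp directory and the sample `.go` file here are just stand-ins so the command has something to match against; real findings would need manual review, since URLs can also be built at runtime):

```shell
# Stand-in "codebase" so the grep has something to find (assumption: Go sources).
mkdir -p /tmp/audit-demo
printf 'resp, _ := http.Get("https://example.com/update")\n' > /tmp/audit-demo/main.go

# First-pass audit: list every file/line containing a hard-coded URL.
grep -rEn 'https?://[a-zA-Z0-9.-]+' --include='*.go' /tmp/audit-demo
```

This only catches literal URLs; anything assembled from config or environment variables at runtime would slip past it, which is why network-level blocking (see below in the thread) is the stronger check.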
Yep, and I’ve noticed the same thing in VS Code with both the Cline plugin and the Copilot plugin.
I configure them both to use local Ollama, block their outbound connections via Little Snitch, and they just flat out don’t work without the ability to phone home to PostHog.
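For context, "use local Ollama" just means pointing the tool at the HTTP API Ollama serves on `localhost:11434` — nothing needs to leave the machine for inference itself. A minimal sketch of what such a request looks like (the model name `llama3` is an assumption; use whatever `ollama list` shows on your machine):

```python
import json
import urllib.request

# Ollama's local generate endpoint; no remote host involved.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3", "Say hello")

# Actually sending it requires a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Any traffic beyond calls like this one is the plugin's own telemetry or update machinery, which is exactly what a Little Snitch deny rule exposes.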
Super disappointing that Cline does so much outbound communication, even after turning off telemetry in the settings.
The automatic update checks are documented in the Ollama FAQ: https://github.com/ollama/ollama/blob/main/docs/faq.md