So I can use this in Claude Code with `ollama run claude`?
More like `ollama launch claude --model qwen3.6:latest`
Also you need to check your context size: Ollama defaults to 4K if you have <24 GB of VRAM, and you need 64K minimum if you want claude to be able to at least lift a finger.
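For reference, a few ways to raise the context window past that default (a sketch; the model name `qwen3-64k` and the 65536 value are just illustrative):

```shell
# Option 1: set a server-wide default via environment variable
OLLAMA_CONTEXT_LENGTH=65536 ollama serve

# Option 2: bake it into a model via a Modelfile
#   FROM qwen3:latest
#   PARAMETER num_ctx 65536
# then build it:
#   ollama create qwen3-64k -f Modelfile

# Option 3: set it interactively inside an `ollama run` session
#   /set parameter num_ctx 65536
```

Whichever route you take, check your VRAM headroom first; a 64K context inflates the KV cache substantially, so a model that fit at 4K may spill to CPU at 64K.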
https://sleepingrobots.com/dreams/stop-using-ollama/
Have you found a model that does this at usable speeds on an M2/M3?