Believe me when I say that I want to run local models, and I do. But in my testing, 24 GB doesn't get you much brainpower.
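For context on why 24 GB is limiting, here's a rough back-of-the-envelope sketch (my own assumption: weights dominate, with ~20% extra for KV cache and activations) estimating how much VRAM a model needs at a given quantization:

```python
def approx_model_vram_gb(params_billions: float, bits_per_weight: int,
                         overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes plus ~20% for KV cache/activations.

    This is a back-of-the-envelope heuristic, not a precise figure.
    """
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weight_gb * overhead

# An 8B model at fp16 (~19 GB) barely fits in 24 GB; at 4-bit (~5 GB) it's easy.
print(approx_model_vram_gb(8, 16))  # → 19.2
print(approx_model_vram_gb(8, 4))   # → 4.8
# A 70B model even at 4-bit (~42 GB) blows past a 24 GB card.
print(approx_model_vram_gb(70, 4))  # → 42.0
```

So 24 GB comfortably fits quantized 8-9B models but rules out the much larger models, which is where the "not much brainpower" trade-off comes from.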
Have you tried the latest qwen3.6 models?
For most of my questions, an 8-9B model works great. The upshot is not having ChatGPT/Meta sell my data or target me with my random thoughts later.