As of today, what is the best local model that can be run on a system with 32 GB of RAM and 24 GB of VRAM?
DeepSeek Coder 33B with GGUF quantization (Q4_K_M) fits comfortably in 24 GB of VRAM. Llama 3 70B at Q4_K_M needs roughly 40 GB for weights alone, so it only runs with partial CPU offload into system RAM and will be noticeably slower. Mistral Large 2 (123B parameters) is too large for these specs even when quantized.
Start with a Qwen model of a size that fits in your VRAM.
Qwen3-Coder-30B-A3B-Instruct-FP8 is a good choice (`qwen3-coder:30b` if you use Ollama). I have also had good experiences with Devstral (https://mistral.ai/news/devstral), built in a collaboration between Mistral AI and All Hands AI.
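A quick way to sanity-check whether a model fits is to estimate its weight footprint from parameter count and bits per weight. This is a rough sketch, not a precise tool: the 1.2 overhead factor for KV cache and runtime buffers is an assumption, and real usage varies with context length and backend.

```python
# Rough memory estimate for a quantized model: weight bytes plus a
# fudge factor for KV cache and runtime overhead.
# The overhead factor of 1.2 is an assumption, not a measured value.
def fits_in_vram(params_billions, bits_per_weight, vram_gb, overhead=1.2):
    # 1e9 params * (bits / 8) bytes per param ~= GB of weights
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb * overhead <= vram_gb

# A 30B model at ~4.5 bits/weight (roughly Q4_K_M) vs. 24 GB of VRAM:
print(fits_in_vram(30, 4.5, 24))  # weights ~16.9 GB -> True
# A 70B model at the same quantization does not fit:
print(fits_in_vram(70, 4.5, 24))  # weights ~39.4 GB -> False
```

By this estimate, 30B-class models at 4-bit quantization are the sweet spot for a 24 GB card, which matches the recommendations above.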