I agree about Kimi 2.5. MiniMax M2.7, which just dropped, is also impressive: it is only a ~200 GB MoE model, so inference is very fast. I tried it twice today as the backend for Claude Code, and it did very well on both an existing Python project and a Common Lisp project. Next I will try MiniMax M2.7 as the backend for OpenCode.
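For anyone curious how the swap works: Claude Code honors a few environment variables for redirecting requests to an Anthropic-compatible endpoint. A minimal sketch follows; the endpoint URL and model name here are placeholders, not MiniMax's actual values, so check the provider's docs before using this.

```shell
# Hypothetical sketch: point Claude Code at an Anthropic-compatible
# third-party endpoint via environment variables.
# The URL and model name below are placeholders -- substitute the
# values from your provider's documentation.
export ANTHROPIC_BASE_URL="https://api.example-provider.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-provider-api-key"
export ANTHROPIC_MODEL="MiniMax-M2.7"

# Then launch Claude Code as usual; it will route requests
# to the endpoint configured above.
# claude
```

The same environment-variable approach is how most Anthropic-compatible backends get wired into Claude Code, so switching models is mostly a matter of changing these three values.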