That's the nature of LLMs. They can't really think ahead to "know" whether reasoning is required. So if the model is tuned to spit out reasoning first, then that's what it'll do.
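
For what it's worth, here's a toy sketch of why that happens (purely illustrative, not any real model or API): generation is one token at a time with no lookahead, so whatever the tuning pushed to the top of the next-token distribution comes out first, whether or not the prompt actually needed it.

```python
import random

# Hypothetical "tuned" next-token predictor, not a real model.
# It has no idea whether the prompt actually needs reasoning; it just puts
# high probability on reasoning-style tokens early because that's what its
# (pretend) fine-tuning rewarded.
def next_token_probs(generated):
    if generated and generated[-1] == "</think>":
        return {"answer:": 1.0}           # reasoning closed, now answer
    if "<think>" not in generated:
        return {"<think>": 0.9, "answer:": 0.1}  # tuned to open with reasoning
    return {"step...": 0.7, "</think>": 0.3}     # keep reasoning or close it

def generate(prompt_tokens, max_new=10):
    out = list(prompt_tokens)
    for _ in range(max_new):
        probs = next_token_probs(out[len(prompt_tokens):])
        # Sample the next token; the loop never "plans" whether reasoning is needed.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        out.append(token)
        if token == "answer:":
            break
    return out

print(generate(["What", "is", "2+2?"]))
```

The point of the sketch: there's no step where the model decides "this question needs reasoning"; the reasoning tokens simply win the next-token lottery because that's where the tuning put the probability mass.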