Hacker News

Havoc · 01/21/2025 · 0 replies · view on HN

That’s the nature of LLMs. They can’t really think ahead to “know” whether reasoning is required, so if a model is tuned to spit out reasoning first, that’s what it’ll do.