Hacker News

wasabi991011 today at 3:43 PM

There was an interesting Substack post that went through the logic of this type of failure [1].

The tl;dr is that phrasing the question as a Yes/No forces the answer into, well, a yes or a no. Without a pre-answer reasoning trace, the LLM has to commit based on its training data, most of which predates 2025, so it picks no. Once that first yes/no token is emitted, nothing generated afterward can change it.

[1] https://ramblingafter.substack.com/p/why-does-chatgpt-think-...
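
A minimal sketch of the two phrasings, for concreteness. This is not from the linked post; it assumes the OpenAI Python SDK and a placeholder model name, and the prompts are just illustrative:

```python
# Sketch: answer-first vs. reasoning-first phrasing of the same question.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Answer-first: the very first tokens must be Yes/No, committed before any
# statement of the current date can appear in the output.
print(ask("Is 2026 next year? Answer Yes or No only."))

# Reasoning-first: the model states the current year before the verdict, so
# the final Yes/No can condition on that earlier output.
print(ask("What is the current year, and is 2026 next year?"))
```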


Replies

bradly today at 3:51 PM

That does make sense, given that the prompt "What is the current year and is 2026 next year?" produces the correct answer.