This is why LLMs seem to work best in a loop with tests. If you were applying the same approach in the real world with a goal like "I want my car to be clean," slavishly following the model's advice and feeding back what actually happened at each step, it'd pretty quickly figure out that the car not being there made the end goal unreachable.
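Concretely, the loop looks something like the sketch below: run the tests, feed the failures back, stop when reality (the test suite) says the goal is met. This is a minimal illustration, not anyone's actual tooling; `call_llm`, `fix_until_tests_pass`, and the pytest invocation are all assumptions made up for the example.

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API you're using."""
    raise NotImplementedError("plug in your model call here")

def fix_until_tests_pass(source_path: str, max_attempts: int = 5) -> bool:
    """Feed test failures back to the model until the suite passes or we give up."""
    for _ in range(max_attempts):
        # Reality check: run the tests and capture what actually happened.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # goal reached, and we know it from evidence, not vibes

        with open(source_path) as f:
            code = f.read()

        # Close the loop: the failures go back into the prompt.
        prompt = (
            "These tests failed:\n" + result.stdout +
            "\n\nHere is the current code:\n" + code +
            "\n\nReturn a corrected version of the entire file."
        )
        with open(source_path, "w") as f:
            f.write(call_llm(prompt))
    return False
```

The interesting part isn't the code, it's the `returncode` check: the model's output is never trusted on its own, it's checked against something external on every pass.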
They're not AGI, but they're also not stochastic parrots. Smugly retreat into either corner at your own peril.