> Never ask a model for confirmation; the tool agrees with everyone.
Ditto. If I tell an LLM there's something wrong with code I know is correct, it will somehow find a fault in it anyway.
The problem is that LLMs often take things literally. I've never successfully had an LLM design an entire system autonomously (even with planning).
It's also bad advice. After an LLM produces code, asking it whether the code is correct (phrased in a variety of neutral ways) can often surface actual problems with it.
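For what it's worth, here's a minimal sketch of the difference I mean. The `ask_model()` helper and the file name are hypothetical stand-ins (not any particular library's API); the point is the phrasing of the follow-up, not the client code.

```python
# Sketch: leading vs. neutral follow-up prompts after a model produces code.
# ask_model() is a hypothetical stand-in for whatever chat client you use.

def ask_model(messages: list[dict]) -> str:
    raise NotImplementedError("wire this to your LLM client of choice")

code = open("patch.py").read()  # hypothetical file under review

# Leading follow-up: asserts a fault exists, so the model will usually
# "find" one even in correct code (the sycophancy problem above).
leading_review = ask_model([
    {"role": "user", "content": f"This code has a bug. Find it:\n\n{code}"},
])

# Neutral follow-up: asks for a review without presupposing a verdict,
# which is more likely to surface genuine problems.
neutral_review = ask_model([
    {"role": "user",
     "content": f"Review this code for correctness and edge cases:\n\n{code}"},
])
```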