Hacker News

vachina · today at 5:44 PM

> Never ask a model for confirmation; the tool agrees with everyone.

Ditto. If I tell an LLM there's something arbitrarily wrong with code I know is correct, it will somehow find fault with it anyway.

The problem is that LLMs often take things literally. I've never successfully had an LLM design an entire system autonomously (even with planning).


Replies

wahnfrieden · today at 5:51 PM

It's also bad advice. After an LLM produces code, asking it whether the code is correct (phrased in a variety of ways) can often surface actual problems.
