Hacker News

jnovek · yesterday at 7:36 PM · 2 replies

The AI can't really describe its reasoning, though. It can only look at its context history and construct a justification after the fact, which it then presents as reasoning. In my experience, asking the model "why did you do that?" carries substantial hallucination risk.


Replies

0gs · yesterday at 7:39 PM

True, though I have found that forcing an LLM agent (I use an agent skill for this) to document the reasoning behind each "decision" it makes seems to lead to better decision-making. Or at least to more justifiable decisions, even if the justification is bad.
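A minimal sketch of what such a forcing instruction might look like as a prompt wrapper. The function name and wording are hypothetical illustrations, not the commenter's actual agent skill:

```python
def with_decision_log(task: str) -> str:
    """Wrap a task prompt so the agent must record its rationale
    before acting. (Hypothetical wording, for illustration only.)"""
    return (
        f"{task}\n\n"
        "Before taking each action, append an entry to a DECISION LOG:\n"
        "1. The decision you are about to make.\n"
        "2. The alternatives you considered.\n"
        "3. Why you chose this option over the alternatives.\n"
        "Only then carry out the action."
    )

prompt = with_decision_log("Refactor the auth module to use token rotation.")
print(prompt)
```

The point is that the rationale is written down at decision time, in context, rather than reconstructed afterward when the model is asked "why did you do that?".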

dalmo3 · yesterday at 8:27 PM

While you're technically correct, I've found that a simple "give me the strongest arguments for and against this, and cite your sources" works wonders.
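That pattern can be captured as a small prompt template. This helper is a hypothetical sketch of the steelman-both-sides prompt described above, not an API from any particular tool:

```python
def steelman_prompt(claim: str) -> str:
    """Build a prompt asking for the strongest case for AND against
    a claim, with sources. (Hypothetical helper, for illustration.)"""
    return (
        f"Claim: {claim}\n\n"
        "Give me the strongest arguments for and against this claim.\n"
        "Cite your sources for each argument."
    )

print(steelman_prompt("We should migrate the service to gRPC."))
```

Because the model is asked for both sides up front, it is less anchored to justifying a single position it has already committed to in context.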