Hacker News

mpalmer · today at 5:55 PM

> At that point, asking the model to e.g. note any ambiguities about the task at hand is exactly equivalent to asking it to evaluate any input
This point is load-bearing for your position, and it is completely wrong.

A prompt P applied at state S yields a new state S_P'. The "common jumping-off point" you describe is effectively useless, because we instantly diverge from it the moment we use different prompts.
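The divergence point can be sketched in a few lines of Python (a toy model, not any real LLM API; the names here are made up for illustration): if the conversation "state" is just the accumulated context, then two different prompts from the same starting state S produce different states immediately.

```python
# Toy model of conversation state (hypothetical, not a real LLM API):
# state = the accumulated context the model conditions on.

def extend_state(state, prompt, response):
    """New state = old context + the prompt + the model's response."""
    return state + [("user", prompt), ("assistant", response)]

# Shared jumping-off point S:
S = [("system", "You are a helpful assistant.")]

# Two different prompts applied at the same state S:
S_p1 = extend_state(S, "Summarize the task.", "<response 1>")
S_p2 = extend_state(S, "Note any ambiguities.", "<response 2>")

# The shared prefix is all they have in common; the states differ.
assert S_p1[:1] == S_p2[:1]
assert S_p1 != S_p2
```

The point of the sketch: "state" here is nothing more than the growing context window, so there is no persistent shared state to query after the first divergent prompt.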

And even if it weren't useless for that reason, LLMs don't "query" their "state" in the way that humans reflect on their state of mind.

The idea that hallucinations are somehow less likely because you're asking meta-questions about LLM output is completely without basis.


Replies

alexwebb2 · today at 6:06 PM

> The idea that hallucinations are somehow less likely because you're asking meta-questions about LLM output is completely without basis

Not sure who you're replying to here – this is not a claim I made.
