Hacker News

csomar · yesterday at 9:01 AM

Honestly, I think many hallucinations are the LLM's way of "moving forward". For example, the LLM will try something, not ask me to test it (and it can't test it itself), and then carry on to say, "Oh, this shouldn't work, blah blah, I should try this instead."

Now that LLMs can run commands themselves, they are able to test and react to feedback. But lacking that, they'll hallucinate things (e.g., made-up tokens/API keys).
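
For illustration, here is a minimal sketch of the kind of command-running loop the comment describes: the harness executes whatever command the model asks for and feeds the real output back, so the model can react to evidence instead of guessing. The names (`fake_model`, `agent_loop`) are hypothetical stand-ins, not any specific product's API.

    import json
    import subprocess

    def fake_model(messages):
        # Stand-in for a real LLM call: returns either a command to run
        # or a final answer. A real agent would call an LLM API here.
        if not any(m["role"] == "tool" for m in messages):
            return {"action": "run", "command": "python -c 'print(1 + 1)'"}
        return {"action": "done", "answer": "The command printed 2, so the snippet works."}

    def agent_loop(task, max_steps=5):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = fake_model(messages)
            if reply["action"] == "done":
                return reply["answer"]
            # Run the requested command and feed stdout/stderr back as a
            # tool message, so the next step is grounded in real output.
            result = subprocess.run(
                reply["command"], shell=True, capture_output=True, text=True, timeout=30
            )
            messages.append({"role": "tool", "content": result.stdout + result.stderr})
        return "Gave up after max_steps."

    if __name__ == "__main__":
        print(agent_loop("Check that basic arithmetic works in Python."))

Without the tool step, the only way for the model to keep the conversation moving is to invent plausible-looking output, which is exactly the hallucination pattern described above.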


Replies

braebo · yesterday at 9:50 AM

Refusing to give up is a benchmark optimization technique with unfortunate consequences.
