
holri yesterday at 5:28 PM

> The argument that computational complexity has something to do with this could have merit but the article certainly doesn’t give indication as to why.

OP says it is because a predicted next token can be correct or not, but it always looks plausible, since plausibility is what the model actually calculates. Therefore it is dangerous and cannot be fixed: that is how the approach works in essence.
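
A minimal toy sketch of that point in Python, with made-up logits and an invented command name (nothing here comes from a real model): the next-token step only scores how plausible each continuation is, so a fabricated-but-plausible token can easily outrank a truthful one.

    import math
    import random

    def softmax(logits):
        # Turn raw scores into a probability distribution over tokens.
        m = max(logits.values())
        exps = {tok: math.exp(x - m) for tok, x in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical logits for the context "To enable the feature, run ...".
    # The scores measure plausibility as a continuation, not factual truth.
    logits = {
        "mod_enable_feature": 3.1,   # invented command, but plausible-looking
        "help": 1.9,                 # real-style command, a worse fit here
        "(no such command)": 0.4,    # truthful, but an unlikely continuation
    }

    probs = softmax(logits)
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)
    print("sampled:", choice)

Nothing in that computation ever asks whether the sampled command exists, which is the "cannot be fixed from inside" part of the argument.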


Replies

dangus yesterday at 5:31 PM

I just want to point out a random anecdote.

Literally yesterday, ChatGPT hallucinated an entire feature of a mod for a video game I am playing, including making up a fake console command.

The feature straight up doesn't exist; it simply seemed like a relatively plausible thing to exist.

This is still happening. It never stopped happening. I don’t even see a real slowdown in how often it happens.

It sometimes feels like the only thing saving LLMs is when they're forced to tap into a better system, like running a search engine query.
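
One hedged sketch of what that "tap into a better system" pattern can look like, with hypothetical search and llm_answer stand-ins rather than any real API: check the model's claim against retrieved evidence before trusting it.

    def search(query: str) -> list[str]:
        # Stand-in for a real search backend (web search, mod docs, ...).
        return ["Documented console commands: help, reload, spawn"]

    def llm_answer(prompt: str) -> str:
        # Stand-in for a model call; here it returns a plausible fabrication.
        return "Use the console command `mod_enable_feature`."

    claim = llm_answer("How do I enable the feature in this mod?")
    evidence = search("mod console commands")

    command = claim.split("`")[1]          # extract the quoted command
    grounded = any(command in doc for doc in evidence)
    print(claim, "-> grounded:", grounded)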
