Hacker News

adampunk · today at 1:42 PM

LLMs will make mistakes on every turn. The mistakes will have little to no apparent connection to "difficulty" or what may or may not be prevalent in the training data. They will be mistakes at all levels of operation, from planning to code writing to reporting. Whether those mistakes matter and whether you catch them is mostly up to you.
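"Whether you catch them is mostly up to you" implies some kind of external check on the model's output. A minimal sketch of one such check, assuming the LLM's output arrives as a Python source string and defines an entry point named `solve` (both names are hypothetical, for illustration only):

```python
# Hypothetical sketch: since catching LLM mistakes is up to the caller,
# verify generated code against known cases before trusting it.

def passes_spec(generated_src: str, cases: list[tuple]) -> bool:
    """Exec the model's code and check it against expected (args, result) pairs."""
    namespace: dict = {}
    try:
        exec(generated_src, namespace)  # run the generated definition
        fn = namespace["solve"]         # assumed entry-point name
        return all(fn(*args) == want for args, want in cases)
    except Exception:
        return False  # any crash or missing definition counts as a caught mistake

# A correct and an incorrect "model output" for the same spec:
good = "def solve(a, b):\n    return a + b\n"
bad = "def solve(a, b):\n    return a - b\n"

print(passes_spec(good, [((1, 2), 3), ((0, 0), 0)]))  # True
print(passes_spec(bad, [((1, 2), 3)]))                # False
```

This only catches mistakes the test cases happen to exercise, which is consistent with the comment's point: the model errs somewhere every turn, and whether the error matters depends on the checks you put around it.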

I have yet to find a model that does not make mistakes each turn. I suspect that this class of error is fundamental and cannot be trained away.

The most interesting thing about LLMs is that despite the above (and their non-determinism) they're still useful.


Replies

simonw · today at 2:37 PM

> I have yet to find a model that does not make mistakes each turn

What kind of mistakes are you talking about here?

pyrolistical · today at 1:49 PM

As a human, I make typos all the time.
