
rob74 today at 9:25 AM | 2 replies

> no living thing can get away with making so many mistakes before it's learned anything

If you consider that LLMs have already "learned" more than any one human could ever learn, and they still make those mistakes, that suggests there may be something wrong with this approach...


Replies

ben_w today at 11:07 AM

Not so: "Per example" is not "per wall clock".

To a limited degree, they can compensate for being slow learners (per example) because the transistors doing the learning are faster (per wall clock) than biological synapses, to roughly the same degree that you walk faster than continental drift. (Not a metaphor: it really is that scale of difference.)
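A rough back-of-envelope check of that scale claim (the specific figures below are illustrative assumptions, not taken from the comment):

```python
# Back-of-envelope check: how many times faster is a transistor than a synapse,
# and how many times faster is walking than continental drift?
# All figures are rough, illustrative assumptions.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Hardware vs biology: ~1 GHz switching vs ~100 Hz peak synaptic firing.
transistor_hz = 1e9
synapse_hz = 100
hardware_ratio = transistor_hz / synapse_hz          # ~1e7

# Walking vs continental drift: ~1.4 m/s vs ~5 cm per year.
walking_m_per_s = 1.4
drift_m_per_s = 0.05 / SECONDS_PER_YEAR              # ~1.6e-9 m/s
geology_ratio = walking_m_per_s / drift_m_per_s      # ~9e8

print(f"transistor / synapse:        ~{hardware_ratio:.0e}")
print(f"walking / continental drift: ~{geology_ratio:.0e}")
# Both ratios land within a couple of orders of magnitude of each other,
# i.e. the same broad scale of difference.
```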

However, this doesn't work in all domains. When there isn't enough training data, when self-play isn't enough… well, this is why we don't have Level 5 self-driving cars, just a whole bunch of anecdotes about various self-driving cars that work for some people and not for others: the learning didn't generalise, the edge cases are too many, and it's too slow to learn from them.

So, are LLMs bad at… I dunno, making sure that all the references they use genuinely support their conclusions before declaring a task complete (I think that's still a current failure mode)… specifically because they're fundamentally different to us*, or because they're just really slow learners?

* They *definitely are* fundamentally different to us, but is this causally why they make this kind of error?

quantummagic today at 9:48 AM

But humans do the same thing. For how many eons did we make the mistake of attributing everything to God's will, without a scientific thought in our heads? It's really easy to be wrong when the consequences don't lead to your death, or are actually beneficial. The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.
