Hacker News

cpt_sobel yesterday at 3:21 PM

> Neither can humans. We also just brute force "autocompletion"

I have to disagree here. When you are tasked with dividing two big numbers, you most certainly don't "autocomplete" (in the sense of finding the most probable next token, which is what an LLM does); rather, you go through a set of steps you have learned. Same with the strawberry example: you're not throwing out guesses until something statistically likely to be correct sticks.
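To make the contrast concrete, here's a toy sketch (the names and numbers are made up, not anyone's actual implementation): long division follows fixed, learned steps, while greedy decoding just returns whichever token the model scores highest.

    # Toy contrast, illustrative only.
    def long_division(dividend: int, divisor: int) -> tuple[int, int]:
        """Digit-by-digit long division: deterministic, rule-following steps."""
        quotient, remainder = 0, 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)
            quotient = quotient * 10 + remainder // divisor
            remainder = remainder % divisor
        return quotient, remainder

    def greedy_next_token(probs: dict[str, float]) -> str:
        """Greedy decoding: return whichever token the model scores as most likely."""
        return max(probs, key=probs.get)

    print(long_division(98765, 432))                  # (228, 269)
    print(greedy_next_token({"r": 0.6, "er": 0.4}))   # 'r'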


Replies

slightwinder yesterday at 4:08 PM

Humans first recognize the problem, then search through their list of abilities to find the best skill for solving it, thus "autocompleting" their inner shell's command line before they start execution, to stay with that picture. Common AIs today are not much different from this, especially with reasoning modes.
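Roughly, as a hedged sketch (the skill names and matching rules are invented, just to illustrate the "pick the best skill, then execute" loop):

    # Illustrative only: recognize the task, search the list of abilities,
    # run the best match -- the "inner shell autocomplete" picture.
    SKILLS = {
        "long_division": lambda task: "divide" in task,
        "count_letters": lambda task: "how many" in task,
    }

    def solve(task: str) -> str:
        for name, matches in SKILLS.items():
            if matches(task):
                return f"running skill: {name}"
        return "no known skill: guess and see what sticks"

    print(solve("divide 98765 by 432"))          # running skill: long_division
    print(solve("how many r's in strawberry"))   # running skill: count_letters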

> you're not throwing guesses until something statistically likely to be correct sticks.

What do you mean? That's exactly how many humans operate in unknown situations or on unknown topics: if you don't know, just throw punches and see what works. Of course, not everyone is ignorant enough to be vocal about it in every situation.

empath75 yesterday at 4:09 PM

> I have to disagree here. When you are tasked with dividing two big numbers, you most certainly don't "autocomplete" (in the sense of finding the most probable next token, which is what an LLM does); rather, you go through a set of steps you have learned.

Why do you think this is the part that requires intelligence, rather than a more intuitive process? Because we have had machines that can do this mechanically for well over a hundred years.

There is a whole category of critiques of AI of this type: "Humans don't think this way, they mechanically follow an algorithm/logic." But computers have been able to mechanically follow algorithms and perform logic from the beginning, and that isn't thinking!
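To be clear about what "mechanically" means here, a toy example (not modelled on any particular machine): division by repeated subtraction, the kind of procedure mechanical calculators ground through long before anyone called it thinking.

    # Illustrative only: purely mechanical, rule-following division.
    def divide_by_subtraction(dividend: int, divisor: int) -> tuple[int, int]:
        quotient = 0
        while dividend >= divisor:
            dividend -= divisor
            quotient += 1
        return quotient, dividend

    print(divide_by_subtraction(98765, 432))  # (228, 269)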
