Hacker News

Kichererbsen | yesterday at 7:22 AM | 1 reply

Sure. But neither do you. So are you really thinking or are you just autocompleting?

When was the last time you sat down and solved an original problem for which you had no prior context to "complete" an approximated solution from? When has that ever happened in human history? All the great invention-moment stories that come to mind seem to have exactly that going on in the background: prior context being auto-completed in a Eureka! moment.


Replies

notepad0x90 | yesterday at 3:22 PM

I think (hah) you're underestimating what goes on when living things (even small animals) think. We use auto-completion for some tasks, but it is only one component of what we do.

Let's say your visual system auto-completes some pattern and detects a snake while you're walking; that part is auto-completion. You will probably react by freezing or panicking; that part is not auto-completion, it is a deterministic algorithm. Then you process the detected object, auto-completing again to identify it as just a long cucumber; the classification part is again auto-completion. What will you do next? "Hmm, free cucumber, I can cook a meal with it," and you pick it up. Auto-completion is all over that simple decision, but you're using its results to derive an association (food), check your hunger level (not auto-completion), determine that the food is desirable and safe to eat (some auto-completion), evaluate what other options you have for food (evaluating auto-complete outputs), and then instruct your nervous system to pick it up.

We use auto-completion all the time as an input; in other words, we don't reason using auto-completion. You can argue that if all your input comes from auto-completion (it doesn't), then it makes no difference. But we have deterministic logical reasoning systems that evaluate auto-completion outputs. If your cucumber detection identified it as a rotten cucumber, the decision that it is not safe to eat is not made by auto-completion but by reasoning logic that uses the auto-completion output. You can approximate the level of rot, but once you recognize it as rotten, you make a decision based on that information. You're not approximating a decision, you're evaluating simple logic: if(safe()){eat();}.
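The split being described can be sketched as a tiny program: a fuzzy classification stage stands in for "auto-completion," and a deterministic rule then acts on its output. All the names here are hypothetical, and the lookup-table classifier is just a stand-in for an approximate learned model:

```python
def classify(observation):
    """Stand-in for pattern recognition ("auto-completion"):
    maps raw input to a label. A real system would use an
    approximate, learned model here."""
    labels = {"long green object": "cucumber", "coiled object": "snake"}
    return labels.get(observation, "unknown")

def is_safe(label):
    """Deterministic rule: evaluates the classifier's output exactly."""
    return label == "cucumber"

def decide(observation):
    label = classify(observation)      # approximate stage
    # exact logic, i.e. if(safe()){eat();}
    return "eat" if is_safe(label) else "avoid"
```

Here `decide("long green object")` yields "eat" while `decide("coiled object")` yields "avoid": the fuzzy stage only supplies the label, and the final decision is made by exact logic over it.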

Now amp that up to solving very complex problems: what experiments will you run, what theories will you develop, what R&D is required for a solution, etc. These too are not auto-completions. An LLM would auto-complete these and might arrive at the same conclusion most of the time, but our brains are following algorithms we developed and learned over time, where an LLM is just expanding on auto-completion with a lot more data. In contrast, our brains are not trained on all the knowledge available on the public internet; we retain a minuscule fraction of that. We can arrive at similar conclusions as the LLM because we are reasoning, following algorithms matured and perfected over time.

The big takeaway should be that, as powerful as LLMs are now, if they could reason like we do, they'd dominate us and become unstoppable. Their auto-completion is already many magnitudes better than ours; if they could also write new and original code based on an understanding of problem-solving algorithms, that would be general AI.

We can not just add 1 + 1 but prove that the addition operation is correct mathematically, and understand that when you add one more object to a set, the addition operation always increments. We don't approximate that; we always, every single time, increment, because we are following an algorithm instead of choosing the most likely correct answer.
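The "adding one more object always increments" claim is, formally, the successor law of Peano-style arithmetic. A minimal sketch in Lean (names chosen here for illustration), where the law holds by definition rather than by approximation:

```lean
-- Peano-style naturals: a number is zero or the successor of a number.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | a, N.zero   => a
  | a, N.succ b => N.succ (add a b)

-- "Adding one more object always increments the sum":
-- true by definitional unfolding of add, hence rfl.
theorem add_succ (a b : N) : add a (N.succ b) = N.succ (add a b) := rfl
```

This is the sense in which the result is followed as an algorithm: the equation is not a likely output, it is forced by the definition every single time.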