Hacker News

notepad0x90 · yesterday at 7:13 AM · 11 replies

I don't get why you would say that. It's just auto-completing. It cannot reason. It won't solve an original problem for which it has no prior context to "complete" an approximated solution with. You can give it more context and more data, but you're just helping it complete better. It does not derive an original state machine or algorithm to solve problems for which there are no obvious solutions; it instead approximates a guess (hallucination).

Consciousness and self-awareness are a distraction.

Consider that for essentially the same prompt and instructions, small variations in wording or spelling change its output significantly. If it thought and reasoned, it would know to ignore those and focus on the variables and input at hand to produce deterministic and consistent output. However, it only computes in terms of tokens, so when a token changes, the probability of what a correct response would look like changes, and it adapts.

It does not actually add 1+2 when you ask it to do so. It does not distinguish 1 from 2 as discrete units in an addition operation; it uses descriptions of the operation to approximate a result. And even for something so simple, some phrasings and wordings might not yield 3.
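
You can see this token-level view directly. A rough sketch (tiktoken's cl100k_base is just one example vocabulary; exact splits differ per model):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # one common BPE vocabulary
    for prompt in ["What is 1+2?", "what is 1 + 2 ?", "Whats 1+2"]:
        ids = enc.encode(prompt)
        print(prompt, "->", ids, [enc.decode([i]) for i in ids])
    # Each phrasing becomes a different integer sequence; the model conditions on
    # those sequences, not on an abstract "addition of 1 and 2".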


Replies

slightwinder · yesterday at 10:55 AM

> It won't solve an original problem for which it has no prior context to "complete" an approximated solution with.

Neither can humans. We also just brute-force "autocompletion" with our learned knowledge and combine it into new parts, which we then add to our learned knowledge to deepen the process. We are just much, much better at this than AI, after some decades of training.

And I'm not saying that AI is fully there yet and has solved "thinking". IMHO it's more "pre-thinking" or proto-intelligence. The dots are there, but they are not yet merging to form the real picture.

> It does not actually add 1+2 when you ask it to do so. it does not distinguish 1 from 2 as discrete units in an addition operation.

Neither can a toddler nor an animal. The level of ability is irrelevant for evaluating its foundation.

ako · yesterday at 7:32 AM

An LLM by itself is not thinking, just remembering and autocompleting. But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get something that is (close to) thinking. I've seen Claude Code debug things by adding print statements in the source and reasoning on the output, then determining next steps. This feedback loop is what sets AI tools apart: they can all use the same LLM, but the quality of the feedback loop makes the difference.
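
Stripped down, the loop looks something like this (a minimal sketch, not any particular tool's API; llm_complete and run_tool are hypothetical stand-ins for the model call and the tool runner):

    # Minimal agent-style feedback loop: the LLM proposes an action, the harness
    # runs it, and the observation goes back into the context for the next step.
    # llm_complete() and run_tool() are hypothetical stand-ins, not a real API.

    def llm_complete(messages):
        """Hypothetical: send the conversation to an LLM and get back either
        {"tool": name, "args": {...}} or {"answer": text}."""
        raise NotImplementedError

    def run_tool(name, args):
        """Hypothetical: run a command, read a file, execute a test, etc."""
        raise NotImplementedError

    def agent(task, max_steps=10):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            step = llm_complete(messages)              # the model "autocompletes" the next action
            if "answer" in step:
                return step["answer"]                  # the model decided it is done
            observation = run_tool(step["tool"], step["args"])
            messages.append({"role": "assistant", "content": str(step)})
            messages.append({"role": "user", "content": f"Tool output:\n{observation}"})
        return None                                    # step budget exhausted

The "thinking" people see in these tools lives largely in that loop: each tool result narrows what a sensible next completion looks like.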

lossyalgo · yesterday at 2:11 PM

Furthermore, regarding reasoning: just ask any LLM how many r's are in "strawberry" - repeat it maybe 3 times just to get a feeling for how much variance in answers you can get. And this "quirk" of not getting the right answer is, after 2 years of people making fun of LLMs on various forums, still an issue. The models aren't getting smarter, and definitely aren't thinking; they are still token generators with a few tricks on top to make them seem more intelligent than their predecessors.
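
The usual explanation is tokenization: the model never sees individual letters, only subword pieces. A quick check (tiktoken's cl100k_base here; exact splits vary by model):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    print(ids)                              # a handful of subword IDs, not 10 letters
    print([enc.decode([i]) for i in ids])   # typically pieces like ['str', 'aw', 'berry']
    # Counting the r's requires reasoning about characters inside tokens the model
    # never observes directly.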

IanCal · yesterday at 7:27 AM

> it's just auto-completing. It cannot reason

Auto completion just means predicting the next thing in a sequence. This does not preclude reasoning.

> I don't get why you would say that.

Because I see them solve real debugging problems all the time, talking through the impact of code changes or individual lines to find non-obvious errors with ordering and timing conditions in code they've never seen before.

xanderlewis · yesterday at 7:19 AM

> I don't get why you would say that.

Because it's hard to imagine the sheer volume of data it's been trained on.

Kichererbsen · yesterday at 7:22 AM

Sure. But neither do you. So are you really thinking or are you just autocompleting?

When was the last time you sat down and solved an original problem for which you had no prior context to "complete" an approximated solution with? When has that ever happened in human history? All the great invention-moment stories that come to mind seem to have exactly that going on in the background: prior context being auto-completed in a Eureka! moment.

logicchains · yesterday at 2:52 PM

> I don't get why you would say that. It's just auto-completing. It cannot reason. It won't solve an original problem for which it has no prior context to "complete" an approximated solution with. You can give it more context and more data, but you're just helping it complete better. It does not derive an original state machine or algorithm to solve problems for which there are no obvious solutions; it instead approximates a guess (hallucination).

I bet you can't give an example of such a written problem that a human can easily solve but no LLM can.

madaxe_again · yesterday at 7:21 AM

The vast majority of human “thinking” is autocompletion.

Any thinking that happens with words is fundamentally no different to what LLMs do, and everything you say applies to human lexical reasoning.

One plus one equals two. Do you have a concept of one-ness, or two-ness, beyond symbolic assignment? Does a cashier possess number theory? Or are these just syntactical stochastic rules?

I think the problem here is the definition of “thinking”.

You can point to non-verbal models, like vision models - but again, these aren’t hugely different from how we parse non-lexical information.

naasking · yesterday at 4:24 PM

> I don't get why you would say that. It's just auto-completing.

https://en.wikipedia.org/wiki/Predictive_coding

> If it thought and reasoned, it would know to ignore those and focus on the variables and input at hand to produce deterministic and consistent output

You only do this because you were trained to do this, e.g. to see symmetries and translations.

jiggawatts · yesterday at 11:13 AM

You wrote your comment one word at a time, with the next word depending on the previous words written.

You did not plan the entire thing, every word, ahead of time.

LLMs do the same thing, so... how is your intelligence any different?
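
For comparison, next-token generation stripped to its skeleton (a toy bigram table standing in for a real model's learned distribution):

    import random

    # Toy stand-in for an LLM's next-token distribution: a hand-written bigram table.
    # A real model computes these probabilities with a neural network, but the
    # generation loop is the same: each token is chosen given everything before it.
    P_NEXT = {
        "<s>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.5, "dog": 0.5},
        "a":   {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 1.0},
        "dog": {"ran": 1.0},
        "sat": {"</s>": 1.0},
        "ran": {"</s>": 1.0},
    }

    def generate(max_tokens=10):
        context = ["<s>"]
        while len(context) < max_tokens:
            dist = P_NEXT[context[-1]]  # distribution over the next token given the context so far
            token = random.choices(list(dist), weights=list(dist.values()))[0]
            if token == "</s>":
                break
            context.append(token)
        return " ".join(context[1:])

    print(generate())  # e.g. "the cat sat"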
