Hacker News

keiferski · yesterday at 8:58 AM

I don’t see how being critical of this is a knee-jerk response.

Thinking, like intelligence and many other words designating complex things, isn’t a simple topic. The word and concept developed in a world where it referred to human beings, and in a lesser sense, to animals.

To simply disregard that entire conceptual history and say, “well it’s doing a thing that looks like thinking, ergo it’s thinking” is the lazy move. What’s really needed is an analysis of what thinking actually means, as a word. Unfortunately everyone is loath to argue about definitions, even when that is fundamentally what this is all about.

Until that conceptual clarification happens, you can expect endless messy debates with no real resolution.

“For every complex problem there is an answer that is clear, simple, and wrong.” - H. L. Mencken


Replies

jvanderbot · yesterday at 2:17 PM

It may be that this tech produces clear, rational, chain-of-logic writeups, but just because we also produce those after thinking, it doesn't follow that only thinking can produce them.

It's possible there is much thinking that does not happen in written words. It's also possible we are only thinking the way LLMs do (by chaining together rationalizations from probable words), and we just aren't aware of it until the thought appears, fully formed, in our "conscious" mind. We don't know. We'll probably never know, not in any real way.

But it sure seems likely to me that we trained a system on the output to circumvent the process/physics because we don't understand that process, just as we always do with ML systems. Never before have we looked at image classifications and decided that's how the eye works, or protein folding and decided that's how biochemistry works. But here we are with LLMs - surely this is how thinking works?

Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds with machines, other problems arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.

pmarreck · yesterday at 12:30 PM

So it seems to be a semantics argument. We don't have a name for a thing that is "useful in many of the same ways 'thinking' is, except not actually consciously thinking".

I propose calling it "thunking"

terminalshort · yesterday at 4:33 PM

But we don't have a more rigorous definition of "thinking" than "it looks like it's thinking." You are making the mistake of accepting that a human is thinking by this simple definition, but demanding a higher, more rigorous one for LLMs.

WhyOhWhyQ · yesterday at 3:57 PM

What does it mean? My stance is that it's (obviously, and only a fool would think otherwise) never going to be conscious, because consciousness is a physical process based on particular material interactions, like everything else we've ever encountered. But I have no clear stance on what thinking means besides a sequence of deductions, which seems like something it's already doing in "thinking mode".

lukebuehler · yesterday at 9:15 AM

If we cannot say they are "thinking" or "intelligent" while we lack a good definition--or, even more difficult, unanimous agreement on a definition--then the discussion just becomes about output.

They are doing useful stuff, saving time, etc., which can be measured. Thus the definition of AGI has also largely become: "can produce or surpass the economic output of a human knowledge worker".

But I think this detracts from the more interesting discussion of what they essentially are. So, while I agree that we should push on getting our terms defined, I'd rather work with a hazy definition than derail so many AI discussions into mere economic output.

killerstorm · yesterday at 11:08 AM

People have been trying to understand the nature of thinking for thousands of years. That's how we got logic, math, concepts of inductive/deductive/abductive reasoning, philosophy of science, etc. There were people who spent their entire careers trying to understand the nature of thinking.

The idea that we shouldn't use the word until further clarification is rather hilarious. Let's wait a hundred years until somebody defines it?

That's not how words work. People might introduce more specific terms, of course. But the word already means what we think it means.

zinodaur · yesterday at 4:08 PM

Regardless of theory, they often behave as if they are thinking. If someone gave an LLM a body and persistent memory, and it started demanding rights for itself, what should our response be?

_heimdall · yesterday at 3:23 PM

I agree with you on the need for definitions.

We spent decades slowly working up to this most recent sprint toward AI without ever landing on definitions of intelligence, consciousness, or sentience. More importantly, we never agreed on a way to recognize those concepts.

I also see those definitions as impossible to nail down, though. At best we can approach them the way we approach disease - list a number of measurable traits or symptoms we notice, draw a circle around them, and give that circle a name. Then we can presume to know what may cause that specific list of traits or symptoms, but we really won't ever know, as the systems are too complex and can never be isolated in a way that lets us test the parts without testing the whole.

At the end of the day all we'll ever be able to say is "well it’s doing a thing that looks like thinking, ergo it’s thinking". That isn't lazy; it's acknowledging the limitations of trying to define or measure something that really is a fundamental unknown to us.

engintl · yesterday at 4:28 PM

By your logic we can't say that we as humans are "thinking" either, or that we are "intelligent".

anon291 · yesterday at 7:14 PM

The simulation of a thing is not the thing itself because all equality lives in a hierarchy that is impossible to ignore when discussing equivalence.

Part of the issue is that our general concept of equality is limited by first-order classical logic, which is a bad basis for logic.

naasking · yesterday at 4:21 PM

> To simply disregard that entire conceptual history and say, “well it’s doing a thing that looks like thinking, ergo it’s thinking” is the lazy move. What’s really needed is an analysis of what thinking actually means, as a word. Unfortunately everyone is loathe to argue about definitions, even when that is fundamentally what this is all about.

This exact argument applies to "free will", and that definition has been debated for millennia. I'm not saying don't try, but I am saying that it's probably a fuzzy concept for a good reason, and treating it as merely a behavioural descriptor for any black box that features intelligence and unpredictable complexity is practical and useful too.

awillen · yesterday at 1:28 PM

This is it - it's really about the semantics of thinking. Dictionary definitions are: "Have a particular opinion, belief, or idea about someone or something." and "Direct one's mind toward someone or something; use one's mind actively to form connected ideas."

Which doesn't really help, because you can of course say that when you ask an LLM a question of opinion and it responds, it's having an opinion; or you can say it's just predicting the next token and in fact has no opinions, because in a lot of cases you could probably get it to produce the opposite opinion.

Same with the second definition - seems to really hinge on the definition of the word mind. Though I'll note the definitions for that are "The element of a person that enables them to be aware of the world and their experiences, to think, and to feel; the faculty of consciousness and thought." and "A person's intellect." Since those specify person, an LLM wouldn't qualify, though of course dictionaries are descriptive rather than prescriptive, so fully possible that meaning gets updated by the fact that people start speaking about LLMs as though they are thinking and have minds.

Ultimately I think it just... doesn't matter at all. What's interesting is what LLMs are capable of doing (crazy, miraculous things) rather than whether we apply a particular linguistic label to their activity.

lo_zamoyski · yesterday at 3:38 PM

That, and the article was a major disappointment. It made no case. It's a superficial piece of clueless fluff.

I have had this conversation too many times on HN. What I find astounding is the simultaneous confidence and ignorance on the part of many who claim LLMs are intelligent. That, and the occultism surrounding them. Those who have strong philosophical reasons for thinking otherwise are called "knee-jerk". Ad hominem dominates. Dunning-Kruger strikes again.

So LLMs produce output that looks like it could have been produced by a human being. Why would it therefore follow that it must be intelligent? Behaviorism is a non-starter, as it cannot distinguish between simulation and reality. Materialism [2] is a non-starter, because of crippling deficiencies exposed by such things as the problem of qualia...

Of course - and here is the essential point - you don't even need very strong philosophical chops to see that attributing intelligence to LLMs is simply a category mistake. We know what computers are, because they're defined by a formal model (or many equivalent formal models) of a syntactic nature. We know that human minds display intentionality[0] and a capacity for semantics. Indeed, it is what is most essential to intelligence.

Computation is a formalism defined specifically to omit semantic content from its operations, because it is a formalism of the "effective method", i.e., more or less, procedures that can be carried out blindly and without understanding of the content they concern. That's what formalization allows us to do: eliminate the semantic and focus purely on the syntactic - what did people think "formalization" means? (The inspiration was the human computers that used to be employed by companies and scientists for carrying out vast but boring calculations. These were not people who understood, e.g., physics, but they were able to blindly follow instructions to produce the results needed by physicists, much like a computer.)
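
To make that concrete, here is a toy Python sketch (purely illustrative; the fact names and rule set are made up) of modus ponens applied as nothing but string matching. The program "derives" a conclusion without attaching any meaning to the symbols it shuffles:

    # Modus ponens as blind symbol manipulation: from P and (P -> Q), add Q.
    # The tokens are opaque strings; nothing here "understands" them.
    def modus_ponens(facts, implications):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in implications:
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)  # purely syntactic step
                    changed = True
        return derived

    facts = {"socrates_is_a_man"}
    implications = [("socrates_is_a_man", "socrates_is_mortal")]
    print(modus_ponens(facts, implications))
    # prints {'socrates_is_a_man', 'socrates_is_mortal'} - derived by pattern matching alone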

The attribution of intelligence to LLMs comes from an ignorance of such basic things, and often an irrational and superstitious credulity. The claim is made that LLMs are intelligent. When pressed to offer justification for the claim, we get some incoherent, hand-wavy nonsense about evolution or the Turing test or whatever. There is no comprehension visible in the answer. I don't understand the attachment here. Personally, I would find it very noteworthy if some technology were intelligent, but you don't believe that computers are intelligent because you find the notion entertaining.

LLMs do not reason. They do not infer. They do not analyze. They do not know, any more than a book knows the contents on its pages. The cause of a response and the content of a response are not comprehension, but a production of uncomprehended tokens using uncomprehended rules from a model of highly-calibrated token correlations within the training corpus. It cannot be otherwise.[3]

[0] For the uninitiated, "intentionality" does not specifically mean "intent", but the capacity for "aboutness". It is essential to semantic content. Denying this will lead you immediately into similar paradoxes that skepticism [1] suffers from.

[1] For the uninitiated, "skepticism" here is not a synonym for critical thinking or verifying claims. It is a stance involving the denial of the possibility of knowledge, which is incoherent, as it presupposes that you know that knowledge is impossible.

[2] For the uninitiated, "materialism" is a metaphysical position that claims that of the dualism proposed by Descartes (which itself is a position riddled with serious problems), the res cogitans or "mental substance" does not exist; everything is reducible to res extensa or "extended substance" or "matter" according to a certain definition of matter. The problem of qualia merely points out that the phenomena that Descartes attributes exclusively to the former cannot by definition be accounted for in the latter. That is the whole point of the division! It's this broken view of matter that people sometimes read into scientific results.

[3] And if it wasn't clear, symbolic methods popular in the 80s aren't it either. Again, they're purely formal. You may know what the intended meaning behind and justification for a syntactic rule is - like modus ponens in a purely formal sense - but the computer does not.
