
imiric · yesterday at 11:53 PM

The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose. Our inability to explain and predict their behavior is due to the mind-boggling amount of data and processing complexity that no human can comprehend.
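
To make that concrete: at the mechanical level, what's understood is a simple loop of tokens in, a probability distribution out, sample, repeat. A minimal sketch, assuming a hypothetical `model` function that maps a token sequence to next-token probabilities:

    # Sketch of autoregressive generation. `model` is a stand-in for
    # the trained network: it maps a token sequence to a probability
    # distribution over the vocabulary for the next token.
    import random

    def generate(model, prompt_tokens, n_steps):
        tokens = list(prompt_tokens)
        for _ in range(n_steps):
            probs = model(tokens)   # forward pass: logits -> softmax
            next_token = random.choices(range(len(probs)), weights=probs)[0]
            tokens.append(next_token)   # feed the sample back in
        return tokens

The loop itself is trivial; all the opacity lives inside `model`, in the billions of learned weights.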

In contrast, we know very little about human brains. We understand how they work at a fundamental level, and we have a vague understanding of brain regions and their functions, but we have little knowledge of how the complex behavior we observe actually arises. The complexity is also orders of magnitude greater than what we can model with current technology, and it's very much an open question whether our current deep learning architectures are even the right approach to model this complexity.

So, sure, emergent behavior is neat and interesting, but just because we can't intuitively understand a system doesn't mean we're on the right track to model human intelligence. After all, we find the patterns of the Game of Life interesting, yet the rules of that system are very simple. LLMs are similar, only far more complex. We find the patterns they generate interesting, and potentially very useful, but anthropomorphizing this technology, or thinking that we have invented "intelligence", is wishful thinking and hubris. Especially since we struggle to define that word to begin with.
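
For reference, the Game of Life's entire rule set fits in a few lines. A minimal sketch, representing the board as a set of live-cell coordinates:

    # Conway's Game of Life: the complete rules in one step function.
    # `cells` is a set of (x, y) coordinates of live cells.
    from itertools import product

    def neighbors(x, y):
        return {(x + dx, y + dy)
                for dx, dy in product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0)}

    def step(cells):
        candidates = cells | {n for c in cells for n in neighbors(*c)}
        return {c for c in candidates
                if len(neighbors(*c) & cells) == 3
                or (c in cells and len(neighbors(*c) & cells) == 2)}

That's the whole system, yet it produces gliders, oscillators, and even universal computation.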


Replies

intull · today at 3:13 AM

I think what the comment-OP above means to point at is this: given what we know (or don't) about awareness, consciousness, intelligence, and the like, let alone the human experience of it all, we currently have no way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own; and that's even before we start arguing about their "intelligence", however that may be understood.

What we do know so far, across disciplines, and from the fact that neural nets are modeled after what we've learned about the human brain, is that it isn't impossible to propose that LLMs _could_ be more than just "token prediction machines". There may be 10000 ways of arguing that they are indeed simply that, but there are also a few ways of arguing that they could be more than what they seem. We can talk about probabilities, but scientifically speaking we can't yet make a definitive case one way or the other. Those few arguments are worth not ignoring or dismissing.

adleyjulian · today at 12:14 AM

At no point did I say LLMs have human intelligence nor that they model human intelligence. I also didn't say that they are the correct path towards it, though the truth is we don't know.

The point is that one could be similarly dismissive of human brains, saying they're prediction machines built on basic blocks of neurochemistry, and such a view would be asinine.

stevenhuang · today at 1:28 AM

> The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose

All of this is false.