
almosthere, last Monday at 10:52 PM

But LLMs are effectively a very complex if/else if tree:

if the user types "hi", respond with "hi" or "bye" or "..." (you get the point). It's basically storing the most probable following words (tokens) given the current point and its history.
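To make that "stored most probable next token" picture concrete, here's a minimal Python sketch of the lookup-table view being described (the toy corpus and names are mine, purely illustrative). It's a bigram table that records which tokens follow which, then samples the next one by frequency. A real LLM learns a parameterized distribution with a neural network rather than storing explicit counts, but the sampling loop has the same shape:

    import random
    from collections import Counter, defaultdict

    # Toy corpus, made up for illustration.
    corpus = "hi there hi friend bye there bye friend".split()

    # Store how often each token follows each preceding token.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def next_token(context):
        # Sample the following token in proportion to observed frequency.
        tokens, weights = zip(*transitions[context].items())
        return random.choices(tokens, weights=weights)[0]

    print(next_token("hi"))  # "there" or "friend", weighted by counts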

That's not a brain and it's not thinking. It's similar to JIRA in that both are stored information plus if statements (admins can do this, users can do that).

Yes, it is more complex, but it's nowhere near the complexity of the human or bird brain, which does not use clocks, does not have "Turing machines inside", and does not involve any of the other complete junk other people have posted in this thread.

The information in Jira is in the same vein as the data in an LLM, just 10^100 times less complex. Just because something is complex does not mean it thinks.


Replies

iainmerrick, last Tuesday at 3:46 PM

This is a pretty tired argument that I don't think really goes anywhere useful or illuminates anything (if I'm following you correctly, it sounds like the good old Chinese Room, where "a few slips of paper" can't possibly be conscious).

> Yes, it is more complex, but it's nowhere near the complexity of the human or bird brain, which does not use clocks, does not have "Turing machines inside", and does not involve any of the other complete junk other people have posted in this thread.

> The information in Jira is in the same vein as the data in an LLM, just 10^100 times less complex. Just because something is complex does not mean it thinks.

So, what is the missing element that would satisfy you? It's "nowhere near the complexity of the human or bird brain", so I guess it needs to be more complex, but at the same time "just because something is complex does not mean it thinks".

Does it need to be struck by lightning or something so it gets infused with the living essence?
