Hacker News

marcus_holmes · yesterday at 4:59 AM

Yes, I've seen the same things.

But: they don't learn. You can add stuff to their context, but they never get better at doing things and don't really understand feedback. An LLM given a task a thousand times will produce similar results a thousand times; it won't get better at it, or even quicker at it.
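
Concretely: every call starts from the same frozen weights, and the only thing that varies is the prompt. A rough sketch with the OpenAI Python client (the model name is just an example; any chat API behaves the same way):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def attempt(task: str) -> str:
        # Every call hits the same frozen weights; nothing persists
        # between calls unless we put it in the messages ourselves.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model works
            messages=[{"role": "user", "content": task}],
        )
        return resp.choices[0].message.content

    # Call 1 and call 1000 are independent samples from the same
    # distribution: no improvement, no speedup, no memory.
    outputs = [attempt("Summarise this log: ...") for _ in range(1000)]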

And you can't ask them to explain their thinking. If they are thinking, and I agree they might be, they have no awareness of that process (the way we do).

I think if we cracked both of those, we'd be a lot closer to something I could recognise as actually thinking.


Replies

theptip · yesterday at 5:20 AM

> But: they don't learn

If we took your brain and perfectly digitized it on read-only hardware, would you expect to still “think”?

Do amnesiacs who are incapable of laying down long-term memories not think?

I personally believe that memory formation and learning are one of the biggest cruces for general intelligence, but I can easily imagine thinking occurring without memory. (Yes, this is potentially ethically very worrying.)

trenchpilgrim · yesterday at 7:50 AM

> You can add stuff to their context, but they never get better at doing things, don't really understand feedback.

I was using Claude Code today and it was absolutely capable of taking feedback and changing its behavior.

jatora · yesterday at 5:26 AM

This is just wrong, though. They absolutely do learn in-context within a single conversation, up to the context limit. And they absolutely can explain their thinking; companies just block them from doing it.
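
For example, in-context learning within one conversation is easy to demonstrate. A rough sketch with the OpenAI Python client (nonsense words, so the answer can't be memorized and has to be inferred from the context):

    from openai import OpenAI

    client = OpenAI()

    # Two worked examples of an unstated rule (reverse the word),
    # then a query. The rule is picked up from the prompt alone:
    # no weights change, and it vanishes with the context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user",
                   "content": "glorp -> prolg\nblick -> kcilb\nsnarf -> ?"}],
    )
    print(resp.choices[0].message.content)  # typically "frans"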