
criley2 · yesterday at 9:05 PM

Advanced reasoning LLMs simulate many parts of AGI and feel really smart, but they fall short in many critical ways.

- An AGI wouldn't hallucinate; it would be consistent, reliable, and aware of its own limitations.

- An AGI wouldn't need extensive re-training, human-reinforced training, or model updates. It would be capable of true self-learning/self-training in real time.

- An AGI would demonstrate genuine understanding and mental modeling, not pattern matching over correlations.

- It would demonstrate agency and motivation, not be purely reactive to prompting

- It would have persistent, integrated memory. LLMs are stateless, driven entirely by the current context (see the sketch after this list).

- It should even demonstrate consciousness.
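
To make the statelessness point concrete: a chat interface only appears to remember earlier turns because the client resends the entire transcript on every call. A minimal sketch in Python, where `generate` is a hypothetical stand-in for any LLM inference call (not a real API):

    # Hypothetical stand-in for a stateless LLM inference call:
    # it sees only the text it is handed and retains nothing afterward.
    def generate(context: str) -> str:
        return f"<reply conditioned on {len(context)} chars of context>"

    history: list[str] = []  # all apparent "memory" lives on the client side

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        # The full transcript is replayed on every call; the model itself
        # keeps no state between invocations.
        reply = generate("\n".join(history))
        history.append(f"Assistant: {reply}")
        return reply

    print(chat("Hello"))         # the model sees one turn
    print(chat("Remember me?"))  # the model sees both turns, resent verbatim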

And more. I agree that what we've designed is truly impressive and simulates intelligence at a really high level. But true AGI is far more advanced.


Replies

waffletower · yesterday at 10:08 PM

Humans can fail at some of these qualifications, often without guile:

- being consistent and knowing their limitations

- universally demonstrating effective understanding and mental modeling

I don't believe the "consciousness" qualification is at all appropriate: I would argue it is a projection of the human machine's experience onto an entirely different machine with a substantially different existential topology, that is, a different relationship to time and sensorium. I don't think artificial general intelligence is a binary label applied only when a machine rigidly simulates human agency, memory, and sensing.

lysace · yesterday at 9:27 PM

Thanks for humoring my stupid question with a great answer. I was kind of hoping for something like this :).