
blackhaz yesterday at 8:27 AM (2 replies)

Is it widely accepted that LLMs won't lead to AGI? I asked Gemini, and it came up with four primary arguments for this claim. Commenting on each briefly:

1) LLMs are simple "next token predictors," so they merely mimic thinking: But can't it be argued that current models operate across many layers of depth and actually understand, by building concepts and making connections at abstract levels? Also, don't we all mimic?

2) Grounding problem: Yes, models build their world models from text data, but we already have models operating on non-textual data, so this appears to be a technical obstacle rather than a fundamental one.

3) Lack of a world model: But can anyone really claim to have a coherent model of reality? There are flat-earthers, yet I still wouldn't deny that they possess general intelligence. People hallucinate and make mistakes all the time. I'd argue hallucinations are in fact a sign of emerging intelligence.

4) Fixed training data sets: This looks like it's now being actively addressed with self-improving models?

I just couldn't find a strong argument supporting this claim. What am I missing?


Replies

globnomulous yesterday at 8:50 AM

Why on earth would you copy and paste an LLM's output into a comment? What does that accomplish or provide that a simply stated argument wouldn't accomplish more succinctly? If you don't know something, simply don't comment on it -- or ask a question.

welferkj yesterday at 9:36 AM

For future reference, pasting LLM slop feels exactly as patronizing as back when people pasted links to Google searches in response to questions they considered beneath their dignity to answer. Except in this case, no one asked to begin with.