Hacker News

tstrimple · today at 10:02 AM · 2 replies

Literally the only thing I've encountered regarding LLMs and AGI is morons stating that LLMs will never become AGI. I literally have no idea where the AGI arguments are coming from. No one I've ever worked with who uses LLMs is talking about AGI. It's just a fucking distraction from actually usable tools right now. Is there anything except a strawman for LLM AGI?


Replies

cherryteastain · today at 10:54 AM

Sam Altman [1] certainly seems to talk about AGI quite a bit.

[1] https://blog.samaltman.com/reflections

ACCount37 · today at 10:30 AM

Honestly, I wouldn't be surprised if a system that's an LLM at its core can attain AGI with nothing but incremental advances in architecture, scaffolding, training, and raw scale.

Mostly the training. I put less and less weight on "LLMs are fundamentally flawed" and more and more on "you're training them wrong". Too many "fundamental limitations" of LLMs are ones you can move the needle on with better training alone.

The LLM foundation is flexible and capable, and the list of "capabilities exclusive to the human mind" is ever shrinking.