It’s pretty staggering that a core algorithm simple enough to be expressed in 200 lines of Python can apparently be scaled up to achieve AGI.
Yes, with some extra tricks and tweaks. But the core ideas are all here.
LLMs won’t lead to AGI. Almost by definition, they can’t. The thought experiment I use constantly to explain this:
Train an LLM on all human knowledge up to 1905 and see if it comes up with General Relativity. It won’t.
We’ll need additional breakthroughs in AI.
1000 lines??
What is going on in this thread