Hacker News

gslepak · yesterday at 5:59 PM

That's like saying we should wait for positive proof of AGI from combustion engines. That'll never happen, no matter how much you tweak the engine. It's just not possible.

The negative proof is there in the definition itself. Transformers are not AGI, they're frozen human intelligence of the autocomplete variety. That can never be AGI and anyone who says otherwise doesn't understand transformers or AGI.


Replies

xscott · yesterday at 8:30 PM

This kind of proof isn't really as watertight as you claim. It's a lot like saying state machines are limited to processing regular expressions, while completely ignoring how easy it is to add a stack or linear memory to a state machine to make it a PDA or Turing machine.

So yes, LLMs can be trivialized as just randomized autocomplete, but if you add a database or memory to the side, even very basic MLPs can become a Turing machine. It's going to take a lot more proof to show that a Turing machine could never be intelligent. And you can do more than just give the LLM side memory: you can invoke them recursively, use message passing as coroutines, and so on...
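The state-machine point above can be sketched concretely. Balanced parentheses are the classic non-regular language: no finite automaton recognizes them, but the same machine plus a stack (a pushdown automaton) does. This is an illustrative sketch of that gap, not anything specific to the LLM architectures being discussed; the function name is mine.

```python
# A finite automaton alone cannot recognize balanced parentheses
# (a non-regular language). Adding a stack -- making it a PDA --
# is enough. The "state machine" here is trivial (one state);
# the stack is what lifts it past regular languages.

def balanced(s: str) -> bool:
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)      # push on every open paren
        elif ch == ")":
            if not stack:
                return False      # close with no matching open
            stack.pop()           # match the most recent open
        else:
            return False          # reject any other symbol
    return not stack              # accept only if nothing is left open

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```

Swapping the stack for a random-access tape gives you the further step to a Turing machine, which is the shape of xscott's "add memory to the side" argument.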

You might be technically correct if you ignore anything other than the very restrictive definitions you're using, but even there I'm not certain. If you had an LLM with a trillion-token window, would that be good enough to act as a memory? Human brains aren't infinite either.

scoopdewoop · yesterday at 8:43 PM

You are super positive that transformers can't become AGI, wow. Care to explain how atoms _can_ become AGI?

refulgentis · yesterday at 8:30 PM

Oh! Would you mind explaining that a bit? :)
