
MyOutfitIsVague · today at 4:45 AM

I'm certain it wouldn't, and you're certain it would, and we have the same amount of evidence (and probably roughly the same means for running such an expensive experiment). I think they're more likely to go slowly mad, their reasoning degrading into nothing useful rather than building toward something real, though that could be different if they weren't detached from sensory input. Human minds looping for generations without senses, a world, or bodies might well go the same way.

> Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia).

I don't see why that would be the case at all. I regularly use the latest and most expensive LLMs, and I understand how they work well enough to implement them at the simplest level myself, so it's not just me being uninformed or ignorant.


Replies

coppsilgold · today at 4:50 AM

The attention mechanism is capable of general computation. In my thought experiment, where you can magically pluck a set of weights from a trillion-dimensional space, the tokens the machine predicts would have only a tiny subset dedicated to language. We have no capability of training such a system at this time, much as we have no way of training a non-differentiable architecture.
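
For concreteness, here's a minimal sketch of scaled dot-product attention in NumPy (shapes and names are illustrative, not taken from any particular model), just to show that the mechanism itself is a generic computation over vectors with nothing language-specific baked in:

    # Minimal scaled dot-product attention in NumPy. Purely illustrative:
    # the mechanism routes and combines information between positions,
    # and nothing about it is specific to language.
    import numpy as np

    def attention(Q, K, V):
        # Q, K, V: (seq_len, d) arrays of query, key, and value vectors.
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                               # weighted mix of values

    # The input vectors could encode tokens, pixels, board states, or anything else.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (5, 8)

Whether some point in weight space uses that machinery for something you'd call intelligence is a separate question; the sketch only shows the substrate is general.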