
icedchai · today at 12:58 AM

Most humans aren't making new scientific discoveries either, are they? Does that mean they don't have AGI?

Intelligence is mostly about pattern recognition. All those model weights represent patterns, compressed and encoded. If you can find a similar pattern in a new place, perhaps you can make a new discovery.

One problem is that the patterns are static. Sooner or later, someone is going to figure out a way to give LLMs "real" memory. I'm not talking about keeping a long-term context, extending it with markdown files, RAG, etc., as we do today for an individual user, but about updating the underlying model weights incrementally, basically resulting in a learning, collective memory.
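For concreteness, here's a rough sketch (my own illustration, not something that exists today) of what "updating the underlying weights incrementally" could look like: each interaction becomes a small gradient step applied to the live model, rather than text stashed in a context window or a RAG index. The model name, learning rate, and helper function below are all placeholders.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder model; any small causal LM works for the sketch.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    # Tiny learning rate, chosen arbitrarily here, to limit how much
    # each update disturbs what the model already knows.
    opt = torch.optim.SGD(model.parameters(), lr=1e-5)

    def learn_from_interaction(text: str) -> float:
        """One incremental update: the interaction itself is the training data."""
        batch = tok(text, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])  # standard causal LM loss
        out.loss.backward()
        opt.step()
        opt.zero_grad()
        return out.loss.item()

    # Every interaction nudges the shared weights a little, so the next
    # user benefits from it, instead of it living in one user's context.
    learn_from_interaction("User asked about X; the verified answer was Y.")

Done this naively, it runs straight into catastrophic forgetting, where new updates overwrite old capabilities, which is presumably a big part of why "sooner or later" hasn't arrived yet.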


Replies

A_D_E_P_T · today at 2:24 AM

Virtually all humans of average intelligence are capable of making scientific discoveries -- admittedly minor ones -- if they devote themselves to a field, work at its frontiers, and apply themselves. They are also capable of originality in other domains, in other ways.

I am not at all sure that the same thing is even theoretically possible for LLMs.

Not to be facetious, but you need to spend more time playing with Suno. It really drives home how limited these models are. With text, there's a vast conceptual space that's hard to probe; it's much easier to probe when the same structure is ported to music. The number of things it can't do vastly outweighs the number of things it can do. Within days, even mere hours, you'll become aware of its peculiar rigidity.