Hacker News

A_D_E_P_T · yesterday at 5:54 PM · 2 replies

> One can argue that they have already achieved this.

No, because they're hugely reliant on their training data and can't really move beyond it. This is why you haven't seen an explosion of new LLM-aided scientific discoveries, and why Suno can't write a song in a new genre (even if you explain the genre to Suno in detail and give it actual examples), etc.

This should tell you something enormous about (1) their future potential and (2) how their "intelligence" is essentially rooted in baseline human communication.

Admittedly, LLMs are superhuman at tasks which are, for want of a better term, "conventional" -- and which are well represented in their training data.


Replies

nradov · yesterday at 7:27 PM

Sam Altman keeps claiming that ChatGPT is going to cure cancer. So far its contribution to novel medical research has been approximately zero.

matricks · yesterday at 6:09 PM

> can’t really move beyond their training data

I don’t even think humans can “move beyond” their sensory data. They generalize using it, which is amazing, but they are still limited by it.* So why is this a reasonable standard for non-biological intelligence?

We have compelling evidence that both can learn in unsupervised settings. (I grant that one has to wrap a transformer model in a training harness, but how can anyone sincerely consider this a disqualifier while admitting that an infant cannot raise itself from birth?)

I'm happy to discuss nuances like different architectures (carbon versus silicon, neurons versus ANNs, etc.), but the human tendency to move the goalposts is not something to be proud of. We really need to stop doing this.

* Jeff Hawkins describes the brain as relentlessly searching for invariants from its sensory data. It finds patterns in them and generalizes.
