Hacker News

uejfiweun yesterday at 7:49 PM

There is a certain logic to it, though. If the scaling approaches DO get us to AGI, that's basically going to change everything, forever. And if you assume this is the case, then "our side" has to get there before our geopolitical adversaries do, because in the long run the expected "hit" from a hostile nation developing AGI and using it to bully "our side" almost certainly dwarfs the "hit" we take from not building the infrastructure you mentioned.


Replies

A_D_E_P_T yesterday at 8:14 PM

Any serious LLM user will tell you that there's no way to get from LLMs to AGI.

These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.

Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want and provide it with clear examples, it won't be able to do it.

This is also why there have been zero to very few new scientific discoveries made by LLMs.

mylifeandtimes yesterday at 8:02 PM

Here's hoping you are Chinese, then.
