A lot of this is pretty intuitive, but I’m glad to hear it from a prestigious researcher. It’s a little annoying to hear people quote Hinton’s opinions, as the “godfather” of AI, as if there’s nothing more we need to know.
On a related note, I think there is a bit of nuance to superintelligence. The following are all notable landmarks on the climb to superintelligence:
1. At least as good as any human at a single cognitive task.
2. At least as good as any human at all cognitive tasks.
3. Better than any human at a single cognitive task.
4. Better than any individual human at all cognitive tasks.
5. Better than any group of humans at all cognitive tasks.
We are not yet at point 4. But even beyond that point, a group of humans may still outperform the AI.
This matters because if part of the “group” is performing empirical experiments to conduct scientific research, an AI on its own won’t outperform that group unless it can also perform those experiments or find some way to avoid needing them. This is another way of restating the original Twitter post.
Are we even at point #3 for anything besides structured games like Go or Chess? Not that those tasks aren't valuable, but there is a difference between a rigidly structured and scored task like Chess and something free-form like "fold this towel" or "write this program".