LLMs are the philosophical "box of all conversations" trick made real. That's not intelligence; the idea just went from a neat thought experiment, meant to explain why the Turing test doesn't do what you'd intuitively expect, to a real-world thing that's a mix of fun toy, useful technology, and dangerous new problem.
Ah, yes, the program that's "not intelligent" yet somehow turns in gold-medal results at international math and programming competitions designed to identify and challenge the smartest human students. Is that sentiment supposed to make us feel smart?
If anything, the closest thing we have to Byte in 1975 is /r/localllama in 2026. Believe me, there was no shortage of old men in 1975 who didn't get it, either.
I think that's a valid opinion, but I don't think there's any conclusive evidence to make it a valid fact (while of course not disagreeing with the "fun toy, useful technology and dangerous new problem" part). I'd be happy to learn otherwise.