Can you give concrete examples of "something impossible happens based on known physics"? I have followed the AI debate for a long time but I can't think of what those might be.
Not the person you are responding to, but many of the conclusions drawn by Bostrom (and most of EY’s ideas are credited to Bostrom) depend on infinities. The orthogonality thesis being derived from AIXI, for example.
EY’s assertions regarding a fast “FOOM” have been empirically discredited by the very fact that ChatGPT was created in 2022, it is now 2025, and we still exist. But the goalposts keep being moved. Even ignoring that error, the logic rests on, essentially, “AI is a magic box that can solve any problem by thought alone”: if you can define a problem, the AI can solve it. That assumption runs through the analysis done by AI x-risk people of the MIRI tradition, and it ignores entirely that there are very many problems (including recursive AI self-improvement itself) which are computationally infeasible to solve in this way, no matter how “smart” you are.
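To put a rough number on “computationally infeasible” (a toy illustration of my own, not anything from MIRI or its critics): grant a hypothetical evaluator checking 10^18 candidate solutions per second, far beyond any machine we can build, and brute-force search over an n-bit solution space still falls off a cliff at modest n.

```python
# Toy illustration (my own numbers): brute-force search over an n-bit
# solution space needs 2**n evaluations. Even an assumed evaluator doing
# 1e18 evaluations/second (well beyond exascale) gets nowhere for modest n.
EVALS_PER_SECOND = 1e18   # assumed, absurdly generous
SECONDS_PER_YEAR = 3.15e7

for n in (64, 128, 256):
    evaluations = 2 ** n
    years = evaluations / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:3d}: {evaluations:.2e} evaluations, ~{years:.2e} years")
```

At n = 64 this finishes in seconds; at n = 128 it is already ~10^13 years; n = 256 is beyond any physical quantity worth naming.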
Optimal learning is an interesting problem in computer science because it is fundamentally bound by geometric space complexity rather than computational complexity. You can bend the curve, but the approximations degrade rapidly and still carry prohibitively expensive exponential space complexity. We have literature for this; a lot of the algorithmic information theory work in AI was about characterizing these limits.
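A back-of-the-envelope sketch of why the space blows up (my framing; the ~1e90-bit figure is just a commonly quoted order-of-magnitude estimate for the information capacity of the observable universe): a Solomonoff/AIXI-style learner has to weigh every program up to some description length L, so the hypothesis space alone grows as 2^L.

```python
# Sketch, not a result from the literature: count the hypotheses a
# Solomonoff/AIXI-style learner must track if it considers all programs
# up to L bits, and compare the bookkeeping to a rough ~1e90-bit estimate
# of the observable universe's information capacity (assumed figure).
BOOKKEEPING_BITS = 8      # assume a single byte of state per hypothesis
UNIVERSE_BITS = 1e90      # rough order-of-magnitude figure, assumed

for L in (100, 200, 300, 400):
    hypotheses = 2 ** L
    bits_needed = hypotheses * BOOKKEEPING_BITS
    verdict = "exceeds" if bits_needed > UNIVERSE_BITS else "fits in"
    print(f"L = {L}: ~{hypotheses:.2e} hypotheses, "
          f"{bits_needed:.2e} bits of state ({verdict} the ~1e90-bit budget)")
```

Somewhere around L = 300 the bookkeeping alone exceeds that budget. Bounded approximations (AIXI-tl style) cut this down, but as far as I know they remain exponential in the length bound, which is the “bend the curve” problem above.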
The annoying property of prohibitively exponential (to say nothing of geometric) space complexity is that it places a severe bound on how much computation you can do per unit time. The exponentially increasing space implies increasing latency for each sequentially dependent operation, bounded at the limit by the speed of light. Even if you can afford the insane space requirements, your computation can’t afford the aggregate latency for anything useful, even for the most trivial problems. With highly parallel architectures this can be turned into a latency-hiding problem to some extent, but that also has limits.
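Here is the latency arithmetic in sketch form (my numbers; the bit density is a deliberately absurd assumption in the machine’s favor): if the working state needs 2^n bits stored in physical space, the memory occupies a sphere whose light-crossing time lower-bounds every sequentially dependent operation that has to touch all of it.

```python
import math

# Hedged sketch of the latency bound, with assumed numbers: store 2**n bits
# at a wildly optimistic volumetric density and compute the round-trip
# light time across the resulting sphere of memory.
C = 3.0e8            # speed of light, m/s
BIT_DENSITY = 1e30   # bits per cubic meter -- an assumption, absurdly generous

def light_crossing_latency(n_bits: float) -> float:
    """Round-trip light time across a sphere holding n_bits at BIT_DENSITY."""
    volume = n_bits / BIT_DENSITY                         # m^3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1 / 3)  # m
    return 2.0 * radius / C                               # seconds

for n in (100, 150, 200):
    latency = light_crossing_latency(2.0 ** n)
    print(f"2^{n} bits of state: >= {latency:.2e} s per dependent operation")
```

At 2^100 bits the crossing time is nanoseconds; at 2^150 it is already around half a millisecond per step; by 2^200 each dependent step costs tens of seconds just from light crossing, before any actual computation happens.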
This was thoroughly studied by the US defense community decades ago.
The tl;dr is that efficient learning scales extremely poorly, more poorly than I think people intuit. All of the super-intelligence hard-takeoff scenarios? Not going to happen; you can’t make the physics work without positing magic that circumvents the reality of latency when your state space is unfathomably large, even with unimaginably efficient computers.
I harbor a suspicion that the cost of this scaling problem, and the limitations of wetware, have bounded intelligence in biological systems. We can probably do better in silicon than in wetware in some important ways, but there is not enough intrinsic parallelism in the computation to adequately hide the latency.
Personally, I find these “fundamental limits of computation” things to be extremely fascinating.