Not the person you are responding to, but many of the conclusions drawn by Bostrom (and most of EY's ideas are credited to Bostrom) depend on infinities. The orthogonality thesis being derived from AIXI, for example.
EY's assertions regarding a fast "FOOM" have been empirically discredited by the very fact that ChatGPT was created in 2022, it is now 2025, and we still exist. But the goalposts keep moving. Even ignoring that error, the logic is based on, essentially, "AI is a magic box that can solve any problem by thought alone." If you can define a problem, the AI can solve it. This is part of the analysis done by AI x-risk people of the MIRI tradition. It entirely ignores that very many problems (including AI recursive self-improvement itself) are computationally infeasible to solve this way, no matter how "smart" you are.
As far as I understand, ChatGPT is not capable of self-improvement, so EY's argument does not apply to it. (At least based on this https://intelligence.org/files/IEM.pdf from 2013.)
The FOOM argument starts with some kind of goal-directed agent that escapes and then starts building a more capable version of itself (at which point goal drift may or may not set in).
If you tell ChatGPT to build ChatGPT++ and walk away, there is currently no time horizon within which it would accomplish that, or escape, or do anything at all, because right now it just gives you tokens rendered on some website.
The argument is not that AI is a magic box.
- The argument is that if there's a process that improves AI, [1]
- and if during that process the AI becomes so capable that it can materially contribute to that process, and eventually continue it (un)supervised, [2]
- then eventually it will escape and do whatever it wants, and eventually even the smallest misalignment means we become expendable resources.
I think the argument might be logically valid, but the constant factors matter enormously for what it actually implies, and obviously we don't know them. (The upper and lower estimates are far apart, hence the whole debate.)
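To make the "constant factors" point concrete, here is a toy sketch. Every number in it is made up for illustration, not an estimate from any source: capability improves at some baseline rate, and once it crosses a threshold the AI itself adds a further boost proportional to how far past the threshold it is. The same qualitative argument ("recursive improvement kicks in") yields wildly different timelines depending on one assumed constant:

```python
# Toy recursive-improvement model. All parameters are invented for
# illustration; nothing here is an empirical estimate.

def generations_to_target(rate, threshold, boost, target, cap=100_000):
    """Count generations until capability reaches `target`.

    rate:      baseline fractional improvement per generation (human-driven)
    threshold: capability level at which the AI starts contributing
    boost:     extra growth per unit of capability above the threshold
    Returns the generation count, or None if never reached within `cap`.
    """
    capability = 1.0
    for gen in range(1, cap + 1):
        growth = rate
        if capability > threshold:
            # The AI's contribution scales with its own capability:
            # this is the recursive part of the argument.
            growth += boost * (capability - threshold)
        capability *= 1 + growth
        if capability >= target:
            return gen
    return None

# Identical structure, one constant changed by two orders of magnitude:
fast = generations_to_target(rate=0.05, threshold=10, boost=0.01,   target=1e6)
slow = generations_to_target(rate=0.05, threshold=10, boost=0.0001, target=1e6)
print(f"fast takeoff: {fast} generations, slow takeoff: {slow} generations")
```

Both runs "FOOM" in the sense that growth accelerates once the threshold is crossed, but how long that takes is dominated by the unknown constant, which is exactly why the upper and lower estimates diverge so much.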
[1] Look around, we have a process exactly like that. However gamed and flawed they may be, we have METR scores and ARC-AGI benchmarks, thousands of really determined and skillful people working on it, and hundreds of billions of capital deployed to keep this process going.
[2] We are not there yet, but after decades of peak-oil arguments we are very good at drawing various hockey-stick curves.