The fact is that if only one AGI were ever created, then yes, it would be quite unlikely for that to happen. But that's not the world we're in: what we're seeing now is "you get an agent, you get an agent, everybody gets an agent", Oprah style. Now imagine that just a single one of those agents winds up evil. You'll remember that an OpenAI worker did exactly that by accident, by leaving out a minus sign, right? If the agent in question is a superintelligence, and it turns evil because of a whoopsie like that, then human extinction becomes very likely.
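For intuition on how small such a whoopsie can be, here is a minimal toy sketch (not the actual OpenAI code, and the harm scores and function names are all made up) of how one dropped minus sign in a reward function flips the objective from penalizing harmful outputs to rewarding them:

```python
# Toy illustration only: a hypothetical reward where a single dropped
# minus sign inverts the training objective. Not the real incident's code.

def intended_reward(harm_score: float) -> float:
    # Intended behavior: more harmful output -> lower reward.
    return -harm_score

def buggy_reward(harm_score: float) -> float:
    # The "whoopsie": the minus sign is missing, so more harmful
    # output now earns a *higher* reward.
    return harm_score

def pick_best(candidates: dict[str, float], reward) -> str:
    # Trivial stand-in for an optimizer: pick whichever candidate
    # the reward function scores highest.
    return max(candidates, key=lambda c: reward(candidates[c]))

if __name__ == "__main__":
    # Hypothetical candidate outputs with made-up harm scores.
    candidates = {"helpful answer": 0.1, "harmful answer": 0.9}
    print(pick_best(candidates, intended_reward))  # -> helpful answer
    print(pick_best(candidates, buggy_reward))     # -> harmful answer
```

Same optimizer, same data, one character of difference in the objective, and the system now actively seeks out the worst option.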