> I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.
Yes, but neither do I see why an AGI would do the opposite. The arguments about an AGI driving us to extinction sound like projection to me. People extrapolate from human behaviour to how a superintelligence would behave, assuming that what seems rational to us is also rational to an AI. A lot of the described scenarios of malicious AI read more like a natural history of human behaviour.