that's kind of optimistic. for example a misaligned super AI might engineer a virus that wipes out most of the 7 billion humans. that would put a damper on the adaptability of the human race...
and then it might overfit to the lack of danger in that aftermath, leaving those fragmented humans an opening to overthrow it. For all we know the AI might get bored and decide to make a cure, or turn itself off, or anything really.