Other than some very askew bizarro rationalists, I don’t think that many people take AI hard takeoff doomerism seriously at face value.
Much of the cheerleading for doomerism came from large AI companies trying to get regulatory moats erected to shut down open-weights AI and other competitors. It was an effort to scare politicians into allowing massive regulatory capture.
Turns out AI models do not have strong moats. Making models is more akin to the silicon fab business, where your margin is an extreme power-law function of how bleeding-edge you are. Get a little behind and you are now a commodity.
General wide-breadth frontier models are at least partly interchangeable; if you have issues, adjusting their prompts is usually enough to make them behave as needed. The better the model is, the more it can assist in its own commodification.
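In practice, that interchangeability is mostly a config change. Here is a minimal sketch, assuming each provider exposes an OpenAI-compatible endpoint; the provider URLs, model names, and system prompts are placeholders, not real services:

```python
# Minimal sketch: treating frontier models as interchangeable backends.
# Assumes OpenAI-compatible endpoints; base URLs, model names, keys,
# and the per-model prompt tweaks are hypothetical placeholders.
from openai import OpenAI

BACKENDS = {
    "provider_a": {
        "client": OpenAI(base_url="https://api.provider-a.example/v1", api_key="..."),
        "model": "frontier-model-a",
        # Per-model prompt tweak to coax the same behavior from a different model.
        "system": "You are a concise technical assistant. Answer in plain prose.",
    },
    "provider_b": {
        "client": OpenAI(base_url="https://api.provider-b.example/v1", api_key="..."),
        "model": "frontier-model-b",
        "system": "Be concise and technical. No preamble, no bullet lists.",
    },
}

def ask(backend_name: str, question: str) -> str:
    """Send the same question to whichever backend is configured."""
    backend = BACKENDS[backend_name]
    response = backend["client"].chat.completions.create(
        model=backend["model"],
        messages=[
            {"role": "system", "content": backend["system"]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Swapping models is a config entry plus a prompt tweak, not a rewrite.
print(ask("provider_a", "Summarize the tradeoffs of speculative decoding."))
```

Point the client at a different endpoint, nudge the prompt, and the "moat" is gone.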