The author fundamentally doesn't understand complexity theory. So many processes in our universe are chaotic in the formal sense, requiring exponentially more compute to simulate a linear amount of extra time into the future. No amount of poorly defined "intelligence" can get around the fact that simulating such a system even a few seconds further ahead would take more compute than is available in the entire universe. An AI would hence need to run scientific experiments to obtain information, just as humans do, and many of those have an unavoidable time component that cannot be sped up, so there's no way an AI could just suddenly cure all diseases no matter how "intelligent" it was. These singularity types are basically medieval woo merchants trying to convince you that it's possible to magically sort an arbitrary array in O(1) time.
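To make the "exponentially more compute" point concrete, here's a minimal sketch (using the textbook logistic map at r = 4 as a stand-in for a real chaotic system, purely for illustration): a perturbation of 10^-12 in the starting state gets amplified until the two trajectories are completely decorrelated after a few dozen steps, so every extra step of predictability costs extra digits of initial precision.

```python
# Toy demonstration of sensitive dependence on initial conditions.
# The logistic map x -> r*x*(1-x) at r = 4 is chaotic: a tiny
# perturbation in the initial state grows roughly exponentially
# (Lyapunov exponent ~ ln 2 per step), so each extra step of
# accurate prediction costs extra digits of initial precision.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # two starting points differing by 1e-12

for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")

# By around step 40 the separation is of order 1, i.e. the
# trajectories have fully decorrelated: 12 digits of initial
# precision bought only ~40 steps of predictability.
```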
Consider weather prediction. The underlying fluid dynamics are chaotic, so it's a good example of something where no amount of compute is sufficient in the general case. An ASI, not being dumb, will of course immediately recognize this and realize it has to solve for the degenerate case. It therefore implements the much easier sub-goal of removing the atmosphere. Humans will naturally object to this if they find out, so it logically proceeds to the sub-sub-goal of killing all humans. What's the weather next month? Just a moment, releasing autonomous murder drone swarm...