I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction," and everyone recognized that it was so dangerous to use them that they've only ever been used in anger twice, in a single war.
I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
By any quantifiable measure, yes, and not by small numbers either.
Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative, speculative risk narratology, and at worst a discursive distraction. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.
People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.
Everyone recognized that it was so dangerous to use them only after the first two mass casualty events. At the time, and even into the 50s, it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regard to AI: bombing cities into rubble is not a new concept, and traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say there are scary new capabilities?
Nuclear weapons have rarely been used kinetically. Their real force multiplier is the fear.
A.I. is being used by so many people for so many diabolical, hidden, unknown things that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.
The expression "Drinking the Koolaid" is used to explain the Jonestown mass suicide. It is an information hazard, aka, a cult that created the end result: 900 people drinking poisoned flavoraid. That's just one example of a human caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?
I remind you of why nuclear weapons exist.
They exist because human minds conceived them, and human hands made them.
One of the major dangers of advanced AI is that it enables something not unlike the Manhattan Project, carried out with synthetic intelligence, in a single datacenter.