I don't disagree that we need regulation, but I also think citing literal fiction isn't a good argument. We're also very, very far away from anything approaching AGI, so the idea of it becoming evil seems a bit far-fetched.
Autonomous sentry turrets have been a thing since the 2000s. If we assume military technology runs at least 5-10 years ahead of civilian, it's likely that some if not all of the "defense" contractors already have far more terrifying autonomous weapons.
Did you catch the news about Grok wanting to kill the Jews last week? All you need for AI or AGI to be evil is a prompt telling it to be evil.
We don't need AGI in order for AI to destroy humanity.
I agree fiction is a bad argument.
On the other hand: firstly, nobody agrees on what "AGI" even means, with definitions ranging from "we've had it for years already" to "the ability to do provably impossible things like solve the halting problem". Secondly, we have a very bad track record for predicting how long anything in AI will take, with failures in both directions: we've constantly thought self-driving cars were just around the corner, yet people were saying an AI that could play Go well was "decades" away a mere few months before one beat the world champion.