>An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human.
"Obedient" is anthropomorphizing too much (as there is no volition), but even then, it only matters according to how much agency the bot is extended. So there is also risk from neglectful humans who opt to present BS as fact due to an expectation of receiving fact and a failure to critique the BS.