AI driven cars have better risk profiles than humans.
Why do you think the same won't also be true for AI steerers/managers/CEOs?
In a year or two, having a human in the loop, with all of their biases and inconsistencies, will be considered risky and irresponsible.
Getting to that point is likely going to involve a lot of (the business and personal equivalent of) Teslas electing to drive through white semitrailers.
Or autonomous weapons?
> AI driven cars have better risk profiles than humans.
From which company? I hope you say "Waymo", because Tesla is lying through its teeth and hiding crash statistics from regulators.
"Did the vehicle just crash?" has a short feedback loop, very amenable to RL. "Did this product strategy tank our earnings/reputation/compliance/etc.?" has a much longer feedback loop that is far harder to do RL on.
But maybe not that much longer; METR's task-length measurements are still straight lines on log plots.