But we do give humans responsibility to govern and manage critical things. We do give intrinsic trust to people. There are people at your company who have high-level access and could do bad things, but they don't do it because they know better.
This article acts like we can never possibly give that sort of trust to AI because it's never really on our side or aligned with our goals. IMO that's a fool's errand, because you can never really completely secure something and ensure there are no possible exploits.
Honestly, it doesn't really seem like AI to me if it can't learn this type of judgement. And if this is how we have to treat the new tool, it doesn't seem like we should be barking up this tree at all, IMO. Seems too risky.
> they don't do it because they know better.
That's completely false. People get deceived all the time. We even have a term for it: social engineering.
> we can never possibly give that sort of trust to AI because it's never really on our side or aligned with our goals
Right now we can't! AI is currently the equivalent of a very smart child. Would you give production access to a child?
> you can never really completely secure something and ensure there are no possible exploits.
This applies to any system, not just AI.