Non-sentient technology has no concept of good or bad. We have no idea how to give it one. Even if we gave it one, we'd have no idea how to teach it to "choose good".
> In general we trust people that we bring onto our team not to betray us and to respect general rules and policies and practices that benefit everyone. An AI teammate should be no different.
That misses the point completely. How many of your coworkers fail phishing tests? The risk isn't malice; it's being deceived.
But we do give humans responsibility to govern and manage critical systems. We do extend intrinsic trust to people. There are people at your company with high-level access who could do bad things, but they don't, because they know better.
This article acts like we can never extend that sort of trust to AI because it's never truly on our side or aligned with our goals. IMO that's a fool's errand, because you can never completely secure a system and guarantee there are no possible exploits.
Honestly, it doesn't really seem like AI to me if it can't learn this kind of judgement. If this is how we have to treat the new tool, then IMO we shouldn't be barking up this tree at all. Seems too risky.