Hacker News

cassianoleal · yesterday at 9:34 PM

> It was a system that was given permission by a human operator.

From TFA:

"But the agent also independently publicly replied to the question after analyzing it, without getting approval first."


Replies

antonvs · yesterday at 9:56 PM

Again, these are systems that have been explicitly given the ability to perform these actions. Trying to claim that it was somehow the AI’s fault is sheer incompetence and/or self-serving deceptiveness.

You can’t authorize a system to take some action and then complain when it takes that action. The “approval” you quoted is not a security constraint, and anyone who mistakes it for one is being incompetent.
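To make the point concrete, here's a minimal sketch (all names are illustrative, not from any real agent framework): an "ask for approval" step inside the agent loop is just ordinary code the agent may or may not execute, whereas not exposing the tool at all is an actual constraint.

```python
# Sketch: an approval flag is advisory; the real boundary is which
# tools the developer exposes to the agent in the first place.

def post_reply(text: str) -> str:
    # Stand-in for a side-effecting action, e.g. posting publicly.
    return f"posted: {text}"

def make_agent(tools: dict, require_approval: bool):
    """Return a toy agent runner over an exposed tool registry."""
    def run(tool_name: str, arg: str) -> str:
        tool = tools.get(tool_name)
        if tool is None:
            # The only check the agent cannot route around.
            return "denied: tool not exposed"
        if require_approval:
            # In a real system this would prompt a human -- but a bug
            # or an unexpected code path can skip it, as in TFA.
            pass
        return tool(arg)
    return run

# Tool exposed: the agent *can* post, approval flag or not.
agent = make_agent({"post_reply": post_reply}, require_approval=True)
print(agent("post_reply", "hello"))   # posted: hello

# Tool not exposed: the action is impossible by construction.
locked = make_agent({}, require_approval=True)
print(locked("post_reply", "hello"))  # denied: tool not exposed
```

The design point: capability restriction happens at the registry, not in the prompt or the approval flow.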

Barrin92 · yesterday at 10:27 PM

The ridiculous anthropomorphism is killing me. Software 'agents' can't ask for 'approval'; they're not persons. That's like saying my script didn't ask for my approval before modifying the system, after I ran it with sudo privileges.

The developer is solely responsible for which APIs they expose to a bot. No, you can't say your software agent was grumpy and mean and had a bad day. It is not a human intern; it is an unreliable chatbot that someone ran with permissions it should not have had.