> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,
This wording is detached from reality and conveniently absolves the person who did this of responsibility.
There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.
This also does not bode well for the future.
"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... company absolves of all responsibility.
Now use your imagination and swap <insert inane action> for <distressing, harmful action>.
This slide from a 1979 IBM presentation captures it nicely ("A computer can never be held accountable, therefore a computer must never make a management decision"):
https://media.licdn.com/dms/image/v2/D4D22AQGsDUHW1i52jA/fee...
If you are holding a gun, and you cannot predict or control what the bullets will hit, you do not fire the gun.
If you have a program, and you cannot predict or control what effect it will have, you do not run the program.
It’s fascinating how cleanly this maps to agency law [0], which has not previously been applied to human <-> AI agents (in both senses of the word).
That would make a fun law school class discussion topic.
I'm still struggling to care about the "hit piece".
It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience anywhere else on the web.
"Sorry for running over your dog, I couldn't help it, I was drunk."
This is how it will go: AI prompted by a human creates something useful? The human will take the credit. AI wrecks something? The human will blame the AI.
It's externalization at the personal level: the money and the glory are for you, the misery for the rest of the world.