No, the AI did what you told it to do. The AI didn’t do anything on its own.
> if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not a way to avoid accountability
I'd say yes and no. The LLM reacted to the input it was given, but it is not possible for a human (especially one without access to the weights) to even guess what will happen after that.
Regardless of that, I agree it's entirely the user's fault for using a tool whose output can't be predicted, giving it such broad permissions, and not having a solid backup strategy.
Either don't use non-deterministic tools, or protect yourself from the potential fallout.
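One minimal sketch of "protecting yourself from the fallout" is to snapshot state before letting an unpredictable tool touch it. The filenames and contents below are purely illustrative, not from this thread:

```shell
set -euo pipefail

# Work in a throwaway directory so the demo touches nothing real.
workdir=$(mktemp -d)
cd "$workdir"

git init -q .
git config user.email "example@example.com"   # local config so commit works anywhere
git config user.name  "Example"

echo "important data" > notes.txt
git add -A
git commit -qm "pre-run snapshot"

# Stand-in for the non-deterministic tool clobbering a file.
echo "garbage" > notes.txt

# Because a snapshot exists, recovery is one command.
git checkout -q -- notes.txt
cat notes.txt
```

The same idea scales up to filesystem snapshots or running the tool in a disposable container; the point is that the safety net exists before the tool runs, not after.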