When I was a master's student in STS[1], one of my concepts for a thesis was to argue that one of the primary uses of software is to shift or eschew agency and risk. Basically the reverse of the famous IBM "a computer can never be held accountable" slide. Now companies prefer that computers be responsible, because when the software does illegal things the company tends to be in a better legal position. If you want to build a tool that will break the law, contract it out and get insurance. Hire a human to "supervise" the tool in a way no one could ever manage, then fire them when they "fail." Slice up responsibility using novel command-and-control software such that the people who work for you bear all the risk of the work and capture basically none of the upside.
It's not just AI. It's so much of modern software - often working hand in hand with modern financialization trends.
[1] Basically technology-focused sociology for my purposes; the field is quite broad.
That's really interesting. Is there anything you advocate for with respect to curtailing those practices? I hesitate to throw all liability on the individual, but I don't see how we can even legislate this category of behavior, much less enforce regulations against it.