I am not sure what the other side of this argument looks like: unlimited liability (i.e. liability no matter how poorly the tech is implemented and used)?
That would be quite a novel burden, one that no other tech (afaik) has had to carry so far. We have always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasibly guardrail itself internally, and, maybe more so with increasing capability, no human can be expected to do so in its stead – but surely some limits must apply, and the more interesting question is what they are, as with any other tool?
Every other field in history considers it de rigueur that you're liable for failures of quality in the products you produce. You make drugs that hurt people? You're liable. You build a building that falls down? You're liable. You serve coffee that literally burns the people drinking it? You're liable. It's also not new--the Code of Hammurabi (nearly 4,000 years ago) prescribes the death penalty for builders whose houses collapse and kill the inhabitants.
It's only computer scientists who think it's some unreasonable burden to be held liable for the consequences of their work.
People who cause death, whether by action or inaction, are criminally liable for it; that's the other side of it.
If I tell someone to kill someone else and they do, then I should be held responsible.
If I write instructions in a book that I give to someone telling them to kill someone else and they do, then I should be held responsible.
If I give someone a tool I made that I bill as more-than-PhD-level intelligence and it tells someone to kill someone else and they do, then I should be held responsible.
All of the above situations seem equivalent to me. I'm not the only person responsible in each case, but I gave the instructions and they were followed.