> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.
Are AI code assist tools built in such a way as to deceive the user into a false sense of trust or certainty? Very much so (even if that isn't a primary objective).
Does any part of the blame lie on the UX if a dev submits a bad change? No, none.
You are ultimately, solely responsible for your work output, regardless of which tool you choose to use. If using your tool wrong means you make someone homeless and car-less, and kill their dog besides, then you should be a lot more cautious and perform a lot more verification than the average senior engineer.
I agree with all that. Maybe the word isn't "blame," then. Surely there must be some code, perhaps moral or ethical, but ideally more rigorously enforceable, which ought to prevent the development of intentionally deceptive tools. Sure, you could say this about all software, but software that can cause actual physical harm ought to be held to a higher standard.