Hacker News

frumplestlatz · last Tuesday at 12:53 AM · 1 reply

A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does?

Alignment efforts, and the belief that AI should itself prevent harm, shift us much closer to that dispersed-responsibility model, and I think history has shown that when responsibility is dispersed, no one is responsible.


Replies

tomrod · last Tuesday at 1:01 AM

> A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does

You promised a simple question, but this one is reductive: it ignores the legal and political frameworks within which people engage with and use AI, as well as how people behave generally and strategically.

Responsibility for technology and for short-sighted business policy is already dispersed to the point that individuals are not responsible for what their corporation does, and vice versa. And yet, following your own logic, you propose as the alternative a watchtower approach that could identify the culpability of any particular individual in their use of a tool (AI or non-AI) or in any business decision.

Invariably, the tools that enable the surveillance culture of the second world you offer as a utopia get abused, and people are worse off for it.