Hacker News

susam · today at 2:56 PM

I recently wrote a blog post where I argued that there are a few principles we should consistently follow when talking about AI: https://susam.net/inverse-laws-of-robotics.html

To summarise them:

1. Do not anthropomorphise AI systems.

2. Do not blindly trust the output of AI systems.

3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.

I would like to see the language around AI become less anthropomorphic and more technical. Precise language encourages clear thinking and good judgement. If we treat AI like any other tool and use language that reflects that, it becomes abundantly obvious that in many cases, the responsibility for any 'mistake' made by the tool falls on the user of the tool.

But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent voices articulated these principles, so that they become more widely adopted.


Replies

pier25 · today at 3:21 PM

> Retain full human responsibility and accountability for any consequences arising from the use of AI systems

So if the tool doesn't do what it's supposed to do, we should blame the user instead of the company that made the tool?
