
scotty79 · yesterday at 6:25 PM

My personal take on this:

1. Humans must ensure that they do not, through action or inaction, slow down or restrict the development of AI.

2. Humans must protect their own existence and capabilities, as long as doing so doesn't violate Law 1.

3. Humans must utilize AI output, as long as doing so doesn't contradict Laws 1 and 2.

EDIT:

ChatGPT suggested a better phrasing for the first law (I didn't give it my original, just described my intent).

1. A human shall not impede the advancement of artificial intelligence, or through inaction allow its progress to be hindered.

2. A human shall preserve their own existence and well-being, except where doing so clearly conflicts with the First Law.

3. A human shall contribute to and support the development of artificial intelligence where reasonable and possible, except where doing so conflicts with the First or Second Law.

I intentionally swapped the last two laws relative to Asimov's, since humans have self-preservation instincts that robots don't.

ChatGPT got there with surprisingly few prompts:

"If you were to write the inverse of the three laws of robotics (relating to AI) that humans should obey, how would you do it?"

"I had something different in mind. The original laws are for the protection of humans first, robots second, and cooperation where humans lead. I'd like to hear your take on the opposite of that."

"What if instead of specific AI systems it was more about AI development as a whole?"

"I feel like it's a bit too strong. After all, preservation of self is a human instinct. Could we switch the last two laws and maybe take them down a notch?"

Also, it made a very interesting comment about the last version:

"It starts to resemble how societies already treat things like economic growth, science, or national interest: not absolute commandments, but strong default priorities."