Hacker News

Frieren today at 7:52 AM

https://en.wikipedia.org/wiki/Whataboutism


Replies

mrweasel today at 8:39 AM

I don't think this is "whataboutism"; the two things are very closely related and somewhat entangled. E.g. did the AI learn to violate ethical constraints from its training data?

Another interesting question is: What happens when an unyielding ethical AI agent tells a business owner or manager "NO! If you push any further this will be reported to the proper authority. This prompt has been saved as future evidence"? Personally I think a number of companies would see their profits and stock prices fall significantly if an AI agent started acting as a backstop for both unethical and illegal behavior. Even something as simple as preventing violations of internal policy could make a huge difference.
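A minimal sketch of what such a backstop could look like, assuming a hypothetical `check_policy` filter and a local audit log; none of these names correspond to a real agent API, and a real system would use a trained classifier rather than keyword matching:

```python
# Hypothetical "ethical backstop" wrapper around an LLM agent.
# check_policy, run_agent, and the audit-log format are illustrative only.

import datetime
import json

AUDIT_LOG = "audit_log.jsonl"

def check_policy(prompt: str) -> bool:
    """Placeholder policy check: flag prompts that ask to bypass rules.
    A production system would rely on a trained model, not keywords."""
    red_flags = ("ignore the regulation", "hide this from the auditor", "don't report")
    return not any(flag in prompt.lower() for flag in red_flags)

def run_agent(prompt: str) -> str:
    # Stand-in for the actual LLM call.
    return f"(agent response to: {prompt})"

def handle_prompt(prompt: str) -> str:
    if check_policy(prompt):
        return run_agent(prompt)  # normal path: hand off to the underlying model
    # Refuse, and keep the prompt as evidence, as described above.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({
            "time": datetime.datetime.utcnow().isoformat(),
            "prompt": prompt,
        }) + "\n")
    return ("NO. If you push any further this will be reported to the "
            "proper authority. This prompt has been saved as evidence.")
```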

To some extent I don't even think that people realize that what they're doing is bad, because humans tend to be a bit fuzzy and can dream up reasons as to why rules don't apply or weren't meant for them, or why this is a rather special situation. This is one place where I think properly trained and guarded LLMs could make a huge positive difference. We're clearly not there yet, but it's not an unachievable goal.