Hacker News

zugi · yesterday at 7:46 PM

An AI class I took decades ago had just a one-day session on "AI ethics". Somehow, despite being short, it was memorable (or maybe because it was short...)

They said ethics demand that any AI that is going to pass judgment on humans must be able to explain its reasoning. An if-then rule, or even a statistical correlation between A and B, would be fine as an explanation. Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it must be able to point to something you can challenge, fix, or at least understand.
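For example, a rule-based screen along these lines (a minimal Python sketch; the rules and thresholds here are hypothetical, not from the class) is explainable by construction, because every denial carries the specific rule that fired:

    # Hypothetical loan screen: every denial returns the concrete
    # reason(s) that triggered it, so the applicant has something
    # to challenge, fix, or at least understand.
    def screen_loan(income: float, debt: float, missed_payments: int):
        reasons = []
        if missed_payments > 2:
            reasons.append(f"{missed_payments} missed payments (limit: 2)")
        if income > 0 and debt / income > 0.4:
            reasons.append(f"debt-to-income ratio {debt / income:.0%} exceeds 40%")
        return len(reasons) == 0, reasons

    approved, reasons = screen_loan(income=50_000, debt=30_000, missed_payments=1)
    if not approved:
        for r in reasons:
            print("Denied:", r)  # e.g. "Denied: debt-to-income ratio 60% exceeds 40%"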

LLMs may be able to provide that, but it would have to be carefully built into the system.


Replies

nemomarx · yesterday at 7:51 PM

I'm sure you could get an LLM to create a plausible-sounding justification for any decision. It might not be related to the real reason, but coming up with text isn't the hard part here, surely.

rilindo · yesterday at 8:57 PM

> Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it be able to explain something you can challenge, fix, or at least understand.

That could get interesting, as most companies will not provide feedback if you are denied employment.

ottah · today at 4:23 AM

I hate this. An explanation is only meaningful if it comes with accountability; knowing why I was denied does me no good if I have no avenue for effective recourse outside of a lawsuit.

direwolf20 · yesterday at 8:05 PM

This is the law in the EU, I think
