
toofy today at 11:41 AM (2 replies)

i think when most people bring up mistakes that these models make, much of their concern is that little can be done.

when one of the juniors makes a mistake, i can talk to them about it and help them understand where they went wrong. if they continue to make mistakes, we can change their position to something more suited to them. we can always let them go if they have too much hubris to learn.

who do we hold to account when a model makes a mistake? we’re already beginning to see, after major fuckups, companies nullrouting accountability into “not our fault, don’t look at us, the ai was wrong”

the other thing is, if you have done a good job selecting your team, you’ll have people who understand their limits, who understand when to ask for help, who understand when they don’t know something. a major problem with current models is that they will always just guess or stretch toward something random rather than halt.

so yes, people will make mistakes, but at least you can count on being able to mitigate those mistakes afterward.


Replies

terminalshort today at 3:10 PM

Who was held to account when the IRS made a mistake and sent me a demand letter for over $100K of "unpaid taxes" I didn't owe? Who compensated me for the hours I spent on hold and the money I had to pay an accountant to deal with it?

chrisjj today at 2:41 PM

> who do we hold to account when a model makes a mistake?

First we stop anthropomorphising the program as capable of making a "mistake". We recognise it merely as a machine providing incorrect output, so we see that the only mistake was made by the human who chose to rely upon it.

The courts so far agree. Judges are punishing the gulled lawyers rather than their faux-intelligent tools.