Hacker News

throw_m239339 · yesterday at 9:29 PM · 2 replies

> they don't know how to use the tools themselves.

No, the tools work perfectly as they were designed to work. The problem is that the tools are flawed.

Ultimately, every single one of these decisions should be approved by a human, who should be responsible for the fuck-up no matter what the consequences are.

> _Some_ of the blame lies on the UX here. It must.

No, the blame lies with the person or group who approved the use of these tools without understanding their shortcomings.


Replies

jolmg · yesterday at 9:48 PM

>> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.

> No, the blame lies with the person or group who approved the use of these tools without understanding their shortcomings.

The person who approved the tools might've understood them, but that doesn't mean the user does. _Some_ of the reason the user doesn't understand the tool's shortcomings might be misleading UX.

Pxtl · yesterday at 9:31 PM

I miss the days of earlier AI image-recognition software that would emit a confidence percentage.

New LLM-based AIs are supremely confident in every assertion, no matter how wrong.
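For context, those older classifiers typically produced a confidence percentage by running a softmax over their raw output scores, one probability per class. A minimal sketch of that mechanism (the labels and scores below are made up for illustration):

```python
import math

def softmax(logits):
    """Turn raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output scores from an image classifier.
labels = ["cat", "dog", "toaster"]
logits = [2.1, 0.3, -1.5]

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.1%}")
```

A caveat even for these older systems: softmax outputs are often poorly calibrated, so a "92% confident" prediction is not guaranteed to be right 92% of the time — but at least the uncertainty was surfaced to the user at all.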
