Hacker News

the_af · today at 1:27 PM

They cannot even claim they weren't aware of the danger. LLM hallucinations have been a widely discussed topic, not some obscure failure mode. Almost every article on problems with AI mentions them.

So the judge was lazy, incompetent, or both.


Replies

ghywertelling · today at 4:40 PM

Or she was conniving, like Skyler in Breaking Bad when she convinced the investigator that she got hired because she had seduced the owner.

nerdjon · today at 3:44 PM

I do think that for this particular situation we need to step outside of our tech bubble a little bit.

I am still having regular conversations with people who either don't know about hallucinations or think they are not a big problem. There is a ton of money in these companies pushing the idea that their tools are reliable, and it's working on the average user.

I mean there are people that legitimately think these tools are conscious or we already have AGI.

So I am not sure I would jump too quickly to attack the judge, given the marketing we are up against.

lukan · today at 1:34 PM

Not just discussed, but explicitly stated under every chat interface: "This tool can make mistakes."

(Sure, a more honest version would be "this tool makes stuff up in a convincing way.")