
throwatdem12311 · today at 11:54 AM

I mean this in the nicest way possible.

But if someone dies because this thing hallucinates their reporting - would you feel any sense of culpability?

“GPL says no warranty”

“People need to double check LLM output”

“You’re holding it wrong”

I really don’t know if we, collectively as a civilization, should accept this kind of hand-waving when it comes to creating things like this. Sure, tools make mistakes, and people misinterpret reports without the help of LLMs - but LLMs are on a whole other level, because hallucination isn’t an occasional defect; it’s part of how these things work at a fundamental level.

I don’t even trust AI scribes at my doctor’s office to transcribe my appointment, because of errors. There is no way in hell I would ever use something like this, which could straight up lie about something that kills me if I get it wrong.