Hacker News

bonsai_spool · today at 2:27 AM

> Physicians need to have it pounded into them that every hallucination is downstream harm.

I think anyone using 'AI' knows it makes mistakes. Medical notes already contain errors today. A consumer of a medical note has to decide what makes sense and what to ignore, and AI isn't meaningfully changing that. If something matters, it's asked about again at follow-up.


Replies

theshackleford · today at 11:02 AM

> I think any person using 'AI' knows it makes mistakes.

You think wrong. I'm now regularly encountering people who argue "those days are behind us" and that hallucinations are "old news."