Hacker News

wawayanda · today at 12:38 AM · 9 replies

A year or so ago, I fed my wife's blood work results into chatgpt and it came back with a terrifying diagnosis. Even after a lot of back and forth it stuck to its guns. We went to a specialist who performed some additional tests and explained that the condition cannot be diagnosed with just the original blood work and said that she did not have the condition. The whole thing was a borderline traumatic ordeal that I'm still pretty pissed about.


Replies

greenknight · today at 3:22 AM

On the flip side, I had some pain in my chest... RUQ (right upper quadrant, for the non-medical folks).

On the way to the hospital, ChatGPT was pretty confident it was an issue with my gallbladder, since I'd had a fatty meal for lunch (it was delicious, though).

After an extended wait to be seen, they didn't ask about anything like that. At the end, when they asked if there was anything else to add, I mentioned the ChatGPT / gallbladder theory... I was discharged 5 minutes later with a suspicion of gallbladder trouble, as they couldn't do anything that night.

Over the next few weeks, I got test after test after test to try and figure out what was going on: MRI, CT, ultrasound, etc. They all came back negative for the gallbladder.

ChatGPT was persistent. It said to get a HIDA scan, a more specialised scan. My GP was a bit reluctant but agreed. I got it, and was diagnosed with a hyperkinetic gallbladder. The condition still isn't formally recognised, but it's mostly accepted. So much so that my surgeon initially said it wasn't a thing (then, after doing some research on it, said it is a thing)... and a gastroenterologist also said it wasn't a thing.

I had it taken out a few weeks ago, and it was chronically inflamed, which means the removal was the correct path to go down.

It just sucks that your wife was on the other end of things.

fouc · today at 3:37 AM

> it stuck to its guns

Everyone who encounters this needs to retry with a clean, fresh prompt and memory disabled to know whether the LLM will consistently come to the same conclusion or not.
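A minimal sketch of what that looks like, assuming the OpenAI Python client; the model name and the question text are placeholders, not a recommendation. Each call is an independent, memoryless session, so agreement (or disagreement) across runs is easier to judge:

```python
# Re-ask the same question in independent, memoryless sessions and compare answers.
# Assumes the OpenAI Python client; "gpt-4o" and the question are placeholders.
from openai import OpenAI

client = OpenAI()
question = "Given these lab values: <paste results>, what conditions are consistent with them?"

answers = []
for _ in range(3):
    # Each call starts a brand-new conversation: no prior messages, no memory.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    answers.append(resp.choices[0].message.content)

for i, answer in enumerate(answers, 1):
    print(f"--- run {i} ---\n{answer}\n")
```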

SchemaLoad · today at 2:04 AM

I asked a doctor friend why it seems common for healthcare workers to keep the result sheets to themselves and just give you a good/bad summary. He told me that the average person can't properly interpret the data and will freak themselves out over nothing.

fn-mote · today at 2:09 AM

> I fed my wife's blood work results into chatgpt and it came back with a terrifying diagnosis

I don't get it... a doctor ordered the blood work, right? And surely they did not have this opinion or you would have been sent to a specialist right away. In this case, the GP who ordered the blood work was the gatekeeper. Shouldn't they have been the person to deal with this inquiry in the first place?

I would be a lot more negative about "the medical establishment" if they had been the ones who put you through the trauma. It sounds like, in this story, you put yourself through the trauma by believing "Dr. GPT" instead of consulting a real doctor.

I will take it as a cautionary tale, and remember it next time I feed all of my test results into an LLM.

themafia · today at 2:20 AM

> it stuck to its guns

It gave you a probabilistic output. There were no guns and nothing to stick to. If you had disrupted the context with enough countervailing opinion it would have "relented" simply because the conversational probabilities changed.

terribleperson · today at 3:27 AM

Do you have a custom prompt/personality set? What is it?

irjustin · today at 2:04 AM

Aren't these two sides of the same coin?

Shouldn't you be happy that it's not the thing, especially when the signs pointed towards it being "the thing"?

orionsbelt · today at 2:09 AM

> "A year or so ago"

What model?

Care to share the conversation? Or try again and see how the latest model does?

daveguy · today at 12:44 AM

Please keep telling your story. This is the kind of shit that medical science has been dealing with for at least a century. When evaluating testing procedures, false positives can have serious consequences. A test that comes back positive every time will catch every single true positive, but it's also worthless. These LLMs don't have a goddamn clue about that. There should be consequences for these garbage fires giving medical advice.
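A quick sketch of the base-rate arithmetic behind that point, with made-up numbers purely for illustration (1% prevalence, and two hypothetical tests):

```python
# Positive predictive value: P(has the condition | test is positive).
def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A "test" that calls everything positive: 100% sensitivity, 0% specificity.
# It catches every true case, yet a positive result means almost nothing.
print(ppv(prevalence=0.01, sensitivity=1.00, specificity=0.00))  # 0.01 -> 99% of positives are false alarms

# A more realistic test: 90% sensitivity, 95% specificity.
print(ppv(prevalence=0.01, sensitivity=0.90, specificity=0.95))  # ~0.15 -> still mostly false alarms at low prevalence
```

With a rare condition, even a decent test produces mostly false positives, which is why a confident "diagnosis" from a single screening result is exactly the failure mode clinicians are trained to avoid.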