
My Mom and Dr. DeepSeek (2025)

110 points by kieto today at 6:45 PM | 71 comments

Comments

reenorap today at 8:09 PM

Many of my friends and I have used ChatGPT extremely effectively to diagnose medical issues. In fact, I would say that ChatGPT is better than most doctors, because most doctors don't actually listen to you. ChatGPT took the time to ask me questions and, based on my answers, narrowed down a particularly scary diagnosis, gave excellent instructions on how to get to a local hospital in a foreign country and what to ask for, and told me I didn't have to worry very much because it sounded very typical for what I had. The level of reassurance that I was doing everything right actually made me feel less scared, because it was a pretty serious problem. Everything it told me was 100% correct and it guided me perfectly.

I was taking a high blood pressure medication but then noticed my blood sugar jumped. I did some research with ChatGPT, and it found a paper indicating that the medication could raise blood sugar levels, and it recommended an alternative. I asked my doctor about it and she said I was wrong, but I gently pushed her to switch to the recommended medication. She obliged, which is why I have kept her for almost 30 years now, and lo and behold, my blood sugar did drop.

Most people have a hard time pushing back against doctors and doctors mostly work with blinders on and don't listen. ChatGPT gives you the ability to keep asking questions without thinking you are bothering them.

I think ChatGPT is a great advance in medical help, and I recommend it to everyone. Yes, it might make mistakes, and I caution everyone to be careful and not trust it 100%, but I say that about human doctors as well.

kingstnap today at 7:51 PM

> she said she was aware that DeepSeek had given her contradictory advice. She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority

With highly lucid people like the author's mom, I'm not too worried about Dr. DeepSeek. I'm actually incredibly bullish on the fact that AI models are, as the article describes, superhumanly empathetic. They are infinitely patient, infinitely available, and unbelievably knowledgeable; it really is miraculous.

We don't want to throw the baby out with the bathwater, but there are obviously a lot of people who really cannot handle the seductiveness of things that agree with them like this.

I do think there is good potential for real progress on this front, though, especially given the level of care and effort being put into making chatbots better for medical uses and the sheer number of smart people working on the problem.

repiret today at 8:50 PM

I'm reminded of the monologue from Terminator 2:

> Watching John with the machine, it was suddenly so clear. The Terminator would never stop, it would never leave him... it would always be there. And it would never hurt him, never shout at him or get drunk and hit him, or say it couldn't spend time with him because it was too busy. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.

The AI doctor will always have enough time for you, and always be at the top of their game with you. It becomes useful when it works better than an overworked midlevel, not when it competes with the best doctor on their best day. If we're not there already, we're darn close.

delichon today at 8:21 PM

I prepared for a new patient appointment with a medical specialist last week by role-playing the conversation with a chatbot. The bot turned out to be much more responsive, inquisitive, and helpful. The doctor was passive, making no suggestions, just answering questions. I had to prompt him explicitly to get to therapy recommendations, unlike with the AI. I was glad that I had learned enough from the bot to ask useful questions. It would have been redundant if the doctor had been active and interested, but that can't be depended on. This is standard procedure for me now.

tgtweak today at 9:22 PM

Much like talking to your doctor, you need to ask/prompt the right questions. I've seen ChatGPT and Gemini make one false assumption that was never mentioned, run with it, and continue referencing it down the line as if it were fact... That can be extremely dangerous if you don't know enough to ask it to reframe or verify, or to correct its assumption.

If you are using it as a tool to review, analyze, or simplify something - e.g., explain risk stratification for a particular cancer variant and what is taken into account, or provide probabilities and ranges for survival based on age/medical history - it's usually on the money.

Every other caveat mentioned here is valid, and it's valid for many domains not just medical.

I did get hematologist/oncologist-level advice out of ChatGPT-4o based on labs, PCR tests, and symptoms - and it turned out to be 100% accurate given how things panned out in the months that followed and, ultimately, the treatment that was given. Doctors do not like to tell you the good and the bad candidly - it's always "we'll see what the next test says but things look positive" and "it could be as soon as one week or as long as several months depending on what we find" when they know full well you're in there for two months minimum unless you're a miracle case. Only once cornered or prompted will they give you a larger view of the big picture. The same is true for most professional fields.

wnissen today at 7:04 PM

This was not what I was expecting. The doctors I know are mostly miserable: stuck between the independence, but also the burden, of running their own practice, and working for a giant health system with no control over their own days. You can see how an LLM might be preferable, especially when managing a chronic, degenerative condition. I have a family member with stage 3 kidney disease who sees a nephrologist, and there's nothing you can actually do. No one in their right mind would recommend a kidney transplant, let alone dialysis, for someone with moderately impaired kidneys. All you can do is treat the symptoms as they come up and monitor for significant drops in function.

heisenbit today at 8:30 PM

For major medical issues it may well be best practice to use the four-eyes principle, as we do for all safety-related systems. Access is key, and right now getting a second pair of eyes in a timely manner is a luxury few have, and even fewer will have, given the demographics of the developed world. Human doctors are fallible, as is AI. For the time being, having a multitude of perspectives may well be best in most cases.

zzzoom today at 8:32 PM

If you get the "You're absolutely right!" response from an LLM that screwed up in a field you're familiar with and still let it play with your health, you're... courageous, to say the least.

alexpotato today at 9:00 PM

One of my kids recently had a no-contact knee injury while playing basketball. He immediately started limping and crying and I had to carry him from the court to the car.

I did some searching with Grok and I found out:

- no-contact injuries are troubling b/c they generally mean something was pulled

- kids don't generally tear an ACL (or other ligament)

- it's actually way more common for the ligament to pull the anchor point off of the bigger bone b/c kid bones are soft

I asked it to differentially diagnose the issue given the details: can't bear weight, little to no swelling, and some pain.

It was adamant, ADAMANT, that this was a classic case of bone being pulled off by the ligament and that it would require surgery. It even pointed out that the lack of swelling could be due to a very small tear, etc. It gave me a 90% chance of surgery too.

I followed up by asking what test would definitively prove it one way or the other, and it mentioned getting an X-ray.

We go off to urgent care; my son is already kind of hobbling around. The doctor says he seems fine, I push for an X-ray, and it turns out there's no issue: he probably just pulled something. He was fully healed in 2-3 days.

As someone who has done a lot of differential diagnosing/troubleshooting of big systems (FinTech SRE), I find it interesting that it was basically correct about what could have happened but couldn't go the "final mile" to establish it correctly. Once we start hooking up X-rays to Claude, Grok 4.2, and equivalent LLMs, it will be even more interesting to see where this goes.

margorczynski today at 7:47 PM

The problem is not reliance on AI, but that the AI is not ready yet and that people are using general-purpose models.

There simply aren't enough doctors to go around, and the average one isn't as knowledgeable as you would want. Everything suggests that, when it comes to diagnosis, ML systems should be better on average in the long run.

Especially with a quickly aging population, there is no alternative if we want people to have healthcare at a sensible level.

adzm today at 8:11 PM

Considering how difficult it is to get patients to talk to doctors, using AI can be a great way to get some suggestions and insight _and then present that to your actual doctor_.

blacksmith_tb today at 7:27 PM

The dangers are obvious (and there are also some fascinating insights into how healthcare works in practice in China). I wonder if some kind of "second opinion" antagonistic approach might reduce the risks.

Thoreandan today at 8:10 PM

Reminds me of an excellent paper I just read by a former Google DeepMind Ethics Research Team member:

https://www.mdpi.com/2504-3900/114/1/4 - Reinecke, Madeline G., et al. "The Double-Edged Sword of Anthropomorphism in LLMs." Proceedings, vol. 114, no. 1, MDPI, 2025. Author's site: https://www.mgreinecke.com/

Kuyawa today at 9:29 PM

"Doctors are more like machines"

"Machines are more like humans"

I love the future...

mhl47 today at 7:08 PM

Worrisome for sure.

However, I would say that the cited studies are already somewhat outdated compared with, e.g., GPT-5-Thinking doing two minutes of reasoning/search about a medical question. As far as I know, DeepSeek's search capabilities are not comparable, and none of the models in the study spend a comparable amount of compute answering your specific question.

guywithahat today at 7:49 PM

> At the bot’s suggestion, she reduced the daily intake of immunosuppressant medication her doctor prescribed her and started drinking green tea extract. She was enthusiastic about the chatbot

I don't know enough about medicine to say whether or not this is correct, but it sounds suspect. I wouldn't be surprised if chatbots, in an effort to make people happy, start recommending more and more nonsense natural remedies as time goes on. AI is great for injuries and illnesses, but I wonder if this is just the answer she wants, and not the best answer.

philipwhiuk today at 7:42 PM

This almost certainly isn't only a China problem. I've observed UK users asking questions about diabetes and other health advice. We also have an inexpensive (free at the point of use for most things) but stretched healthcare system. Doubtless there are US users looking at the cost of their healthcare and resorting to ChatGPT instead, too.

In companies, people talk about Shadow-IT happening when IT doesn't cover users' needs. We should probably label this stuff Shadow-Health.

To some extent, deploying a publicly funded AI health chatbot, where the responses can be analysed by healthcare professionals to at least prevent future harm, is probably significantly less bad than telling people not to ask AI questions and to consult the existing, stretched infrastructure instead. Because people will ask the questions regardless.

scuff3d today at 9:11 PM

I feel like we're in the part of the dystopian sci-fi movie where they explain how civilization discovered some technological advance that they thought would be a panacea for all their woes. Despite not really understanding it or its limitations, they just started slapping it on absolutely everything, and before they knew what happened, everything came crashing down.

renewiltord today at 7:58 PM

Access trumps everything else. A doctor is fine with you dying while you wait on his backlog. The machine will give you some wrong answers. The mother in the story seems to be balancing the concerns. She has become the agent of her own life, empowered by a supernatural machine.

> She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority. She had stopped eating the lotus seed starch it had recommended.

The "there's wrong stuff there" fear has existed for the Internet, Google, StackOverflow. Each time, people adapted. They will adapt again. Human beings have a remarkable ability to use tools.

snitzr today at 7:21 PM

A sick family member told me something along the lines of, "I know how to work with AI to get the answer." I interpret that to mean he asks it questions until it tells him what he wants to hear.


candiddevmike today at 7:11 PM

I think the article can basically be summed up as "GenAI sycophancy should have a health warning similar to social media's." It's a helluva drug to be constantly rewarded and flattered by an algorithm.
