My uncle had an issue with his balance and slurred speech. Doctors claimed dementia and sent him home. It kept getting worse and worse. Then one day I entered the symptoms into ChatGPT (or was it Gemini?) and asked it for the top 3 hypotheses. The first one was related to dementia. The second was something else (I forget the long name). I took all 3 to his primary care doc who had kept ignoring the problem, and asked her to try the other 2 hypotheses. She hesitantly agreed to explore the second one, and referred him to a specialist in that area. And guess what? It was the second one! They did some surgery and now he's fit as a fiddle.
Here’s something: my chatGPT quietly assumed I had ADHD for around 9 months, up until October 2025. I don’t suffer from ADHD. I only found out through an answer that began “As you have ADHD..”
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered with (paraphrasing) a mea culpa, offered to forget the attribute, and moved the conversation on.
This is a class action waiting to happen.
No. Absolutely not. The government owes its people a certain duty of care to say “just because you can doesn’t mean you should.”
LLMs are good for advice 95% of the time, and soon that’ll be 99%. But it is not the job of OpenAI or any LLM creator to determine the rules of what good healthcare looks like.
It is the job of the government.
We have certification rules in place for a reason. And until we can figure out how to independently certify these quasi-counselor robots to some degree of safety, it’s absolutely out of the question to release this on the populace.
We may as well say “actually, counseling degrees are meaningless. Anyone can practice as a therapist. And if they verifiably recommend a path of self-harm, they should not be held responsible.”
I help take care of my 80-ish year old mother. ChatGPT figured out in 5 minutes the reason behind a pretty serious chronic problem that her very good doctors hadn't been able to figure out in 3 years. Her doctors came around to the possibility, tested out the hypothesis, and it was 100% right. She's doing great now (at least with that one thing).
That's not to say that it's better than doctors or even that it's a good way to address every condition. But there are definitely situations where these models can take in more information than any one doctor has the time to absorb in a 12-minute appointment and consider possibilities across silos and specialties in a way that is difficult to find otherwise.
I have a friend who has now gotten several out-of-pocket MRIs, essentially against medical advice, because she believes her persistent headaches are from brain cancer.
Even after the first MRI essentially ruled this out, she fed the MRI to ChatGPT, which basically hallucinated that a small artifact of the scan was actually a missed tumor and that she needed another scan. Thousands wasted on pointless medical expenses.
I have friends in healthcare, and they have mentioned how common this is now: someone coming in and demanding a set of tests based on ChatGPT. They have explained that (a) tests with false positives can actually be worse for you (they trigger even more invasive tests), and (b) insurance won't cover any of your ChatGPT-requested tests.
Again, being involved in your care is important but disregarding the medical professional in front of you is a great way to set yourself up for substandard care.
I've had serious trouble with my knee and elbow for years, and ChatGPT helped me immensely after a good couple dozen doctors just told me to take ibuprofen and rest and never talked to me for longer than 3 minutes. As with most things LLM, there are many opponents who say "if you do what an LLM says you will die", which is correct, while most people who look positively on using LLMs for health advice report that they used ChatGPT to diagnose something. Having a conversation with ChatGPT based on reports and scans, and figuring out what follow-up tests to ask for or what questions to ask a doctor, makes sense for many people. Just like asking an LLM to review your code is awesome and helpful, while asking an LLM to write your code is an invitation for trouble.
OpenAI is cooked. They can't compete with the others, so they're just experimenting with some side things...
"we’ve worked with more than 260 physicians" yet not a single one of their names is proudly featured in this article. Well, the article itself does not even have an author listed. Imagine trusting someone who doesn't even disclose their identity with your sensitive data.
There’s a lot of negativity here. I’ll just say I’m extremely glad I had ChatGPT when I was going through some health issues last year.
I personally don’t care who has access to my health data, but I understand those who might.
Either way, I’m excited for some actual innovation in the personal health field. Apple Health is more about aggregating data than actually producing actionable insights. 23andme was mostly useless.
Today I have a ChatGPT project with my health history as a system prompt, and it’s been very helpful. Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.). ChatGPT was able to reference papers and do data analysis, which was pretty amazing, right from my phone (e.g. fitting my data to a model from a paper and spitting out a plot).
I understand all the chatter about LLMs hallucinating, or making assumptions, or not being able to understand or provide the more human/emotional element of health care.
But the question I ask myself is: is this better than the alternative? if I wasn't asking ChatGPT, where would I go to get help?
The answers I can anticipate are: questionably trustworthy web content; an overconfident friend who may have read questionably trustworthy web content; my mom, who is referencing health recommendations from 1972. And as best I can imagine, LLMs are likely to provide health advice that's at least as good as, and probably better than, any of those alternatives.
With that said, I acknowledge that people are inclined to trust ChatGPT more like a licensed medical provider, at which point the comparison becomes somewhat murkier, especially with higher-severity health concerns.
I’m kind of torn on this. On one side, I can’t seem to trust doctors any more. I recently had a tooth removed (on the advice of two different doctors) with the claim that it would resolve my pain, which it did not, and now 3 different doctors don’t know what’s causing my pain.
Most doctor advice boils down to “drink some water and take a painkiller”, delivered after glancing at my medical history for 15 seconds, dedicating 7 minutes to me, and moving on to yet another patient.
So compared to this, an AI that can analyze all my medical history and has access to all the publicly available medical research could be a very good tool to have.
But at the same time technofeudalism, dystopia, etc.
Unfortunately, I feel like I'm in the minority here, but AI has been really helpful to me and my doctor visits when it comes to preparing for a ~10-minute appointment that historically always felt like it was never long enough. I can sit down with an LLM for as long as I need and discuss my concerns and any potential suggestions, have it summarize them in a structure that's useful for my doctor, and send an email ahead of the appointment. For small things, she doesn't even need me to come in anymore and a simple phone call to confirm is enough. With the amount of pressure the healthcare system is under, I think this approach can free up a lot of valuable time for her to spend with the patients who need it most.
I'm dealing with a severe health ailment with my cat right now and ChatGPT has been pretty invaluable in helping us understand what's going on. We've been keeping our own detailed medical log that I paste in with the lab and radiology results and it gives pretty good responses on everything so far. Of course I'm treating the results skeptically but so far it has been helpful and kept us more informed on what's going on. We've found it works best if you give it the raw facts and lab results.
The main issue is that medicine and diseases come with so many "it depends" and caveats. Like right now my cat won't eat anything, is it because of nausea from the underlying disease, from the recent stress she's been through, from the bad reaction to the medicine she really doesn't like, from her low potassium levels, something else, all of the above? It's hard to say since all of those things mention "may cause nausea and loss of appetite". But to be fair, even the human vets are making their own educated guesses.
Going to a probabilistic system for something that can/should be deterministic sets off a lot of red flags.
I’ve worked on medical software packages, specifically a drug interaction checker for hospitals. The system cannot be written like a social media website… it has to fail by default, and only succeed when an exact correct answer has been determined. That result must be repeatable given the same inputs. The consequence otherwise is that people die.
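To make "fail by default" concrete, here is a minimal sketch of the shape of that kind of checker (the drug names and verdicts are a toy table I made up for illustration, not real clinical guidance, and this is not the actual product I worked on): unknown inputs are an error, an unrecorded pair is an error, and the same inputs always return the same verdict.

    # Hypothetical fail-closed interaction check: unknown input -> refuse, never guess.
    from enum import Enum

    class Verdict(Enum):
        CONTRAINDICATED = "contraindicated"
        CAUTION = "caution"
        NO_KNOWN_INTERACTION = "no known interaction"

    # Toy curated table keyed by a canonical, order-independent pair (illustrative only).
    KNOWN_INTERACTIONS = {
        frozenset({"warfarin", "aspirin"}): Verdict.CONTRAINDICATED,
        frozenset({"lisinopril", "ibuprofen"}): Verdict.CAUTION,
        frozenset({"amoxicillin", "acetaminophen"}): Verdict.NO_KNOWN_INTERACTION,
    }
    KNOWN_DRUGS = {drug for pair in KNOWN_INTERACTIONS for drug in pair}

    def check_interaction(drug_a: str, drug_b: str) -> Verdict:
        a, b = drug_a.strip().lower(), drug_b.strip().lower()
        if a not in KNOWN_DRUGS or b not in KNOWN_DRUGS:
            # Fail by default: an unrecognized drug name is an error, never "probably fine".
            raise ValueError(f"unknown drug: {a if a not in KNOWN_DRUGS else b}")
        verdict = KNOWN_INTERACTIONS.get(frozenset({a, b}))
        if verdict is None:
            # No curated entry for this exact pair -> refuse and escalate, don't guess.
            raise LookupError(f"no curated verdict for {a} + {b}; escalate to a pharmacist")
        # Deterministic: the same inputs always produce the same verdict.
        return verdict

Contrast that with a probabilistic model, where the same question can come back with a different answer on every run.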
My cousin just finished years of medical school, residency, and his first job as a psychiatrist. He opened up a private practice a year ago and has been working hard to acquire a client base. I fear this will destroy his livelihood. He can't compete on the convenience. To see him, a person has to reach him via phone or email, process their healthcare information, and then physically visit him. All while this tool has been designed to process health information, which can also speak out loud with the patient instantly. Sure he can prescribe medications, but many people he sees do not need medication. Even if the doctor is better, the convenience of this tool will likely win out.
If America wants to take care of its people, it needs to tear down the bureaucracy that is our healthcare system and streamline a single payer system. Otherwise, doctors will be unable to compete with tools like this because our healthcare system is so inconvenient.
The problem with dr appointments is that too often, physicians don't actually think carefully about your case.
It's like they one-shot it.
This is why I've had my dr change their mind between appointments, having had more time to review the data.
Or I get 3 different experts giving me 3 different (contradicting!) diagnoses.
That's also why I always hesitate to listen to their first advice.
"we built foundational protections (...) including (...) training our models not to retain personal information from user chats"
Can someone please ELI5 - why is this a training issue, rather than basic design? How does one "train" for this?
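Naively, the "basic design" version I'd expect is a deterministic gate on the memory-write path rather than a learned behavior. Something like this toy sketch (entirely hypothetical on my part, not how OpenAI says it works, and a real filter would obviously need to be far more robust than a regex deny-list):

    # Hypothetical design-level guard: strip health/personal attributes before anything
    # is persisted to long-term memory, instead of relying on model training alone.
    import re

    # Toy patterns, purely illustrative.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b(adhd|diabetes|depression|cancer)\b", re.IGNORECASE),
        re.compile(r"\bdiagnos(ed|is)\b", re.IGNORECASE),
    ]

    def allowed_to_remember(candidate: str) -> bool:
        """Deterministic check: anything that looks like a medical attribute is rejected."""
        return not any(p.search(candidate) for p in SENSITIVE_PATTERNS)

    def save_memory(store: list[str], candidate: str) -> None:
        # The gate lives outside the model, so it is auditable and repeatable,
        # unlike "we trained the model not to do this".
        if allowed_to_remember(candidate):
            store.append(candidate)

    memories: list[str] = []
    save_memory(memories, "User is a teacher in secondary education")  # kept
    save_memory(memories, "User has ADHD")                             # silently dropped
    print(memories)  # ['User is a teacher in secondary education']

Presumably the real answer is that the model itself decides what gets written to memory, so the filtering has to be learned as well, but that's exactly the part I'd like explained.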
Integration with Function is a great use-case. There is a huge category of pre-diagnostic health questions (“Medicine 3.0” as Attia puts it) where personalization and detailed interpretation of results is important, yet insurance typically won’t cover preemptive treatment.
Not to mention that doctors generally don’t have time to explain everything. Recently I’ve been doing my own research and (important failsafe) running the conclusions by my doctor to validate them. Tighter integration between physician notes, ChatGPT conversations, and ongoing biomarkers from e.g. Function and Apple Health would make it possible to craft individualized health plans without requiring six-figure personal doctor subscriptions.
A great opportunity to improve the status quo here.
Of course - as with software, quality control will be the crux. We don’t want “vibe diagnosing”.
> Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.
I suspect that will be legally-tested sooner than later.
maybe the openai moat is the data we shared along the way.
no seriously, openai seemingly lost interest in being the 'best' model - instead optimizing for other traits such as speech and general human likeness? there's obviously codex, but in my experience it's slower and worse than the other big 2 in every single way: cost, speed, and accuracy. codex does seem to be loved most by vibe coders who don't really know how to code at all, so maybe that's also what they're optimizing for and why it doesn't personally suit me.
others might have better models, but openai has users emotionally attached to its models at this point, whether they know it or not. there were several times I recommended switching, and the response I got was that "chatgpt knows me better".
Gemini helped diagnose me with eosinophilic esophagitis. I have had problems with swallowing all my life, and doctors kept dismissing it as a psychological problem. I think there is a great space for AI medical help.
This was expected. People are going to be convinced that this AI knows more than any doctor, will self-medicate, and will die, harm others, their kids, etc.
Great work, can't wait to see what's next.
I always check my blood test and MRI results with ChatGPT before showing them to the doctor. The doctor says the same thing ChatGPT says, and ChatGPT gives clearer, more detailed information. However, we shouldn't trust ChatGPT's results 100%. It's just good for getting an idea. Also, we shouldn't trust any doctor 100%.
Local AI can’t come soon enough. My health should be between me and my AI. Keep corporations and government out of it.
Specifically as someone in the UK, where doctors are free but extremely hard to get hold of, this is quite interesting to me.
the number of people willing to delegate to chatgpt tells me that in the near future only rich people will be able to speak with a real doctor. the current top comment about someone's uncle being saved due to chatgpt guidance says it all.
I pity the doctors who will now have to deal with such self-diagnosed "patients". I wonder if general medicine doctors will see a drop in patients as AI convinces people to see a specialist based on its diagnosis.
Something to note here is that just yesterday (January 6 2026) the FDA announced changes around regulation of wearable & AI enabled devices: https://www.statnews.com/2026/01/06/fda-pulls-back-oversight... (" FDA announces sweeping changes to oversight of wearables, AI-enabled devices The changes could allow unregulated generative artificial intelligence tools into clinical workflows")
I use it for health advice sometimes.. but.. doesn't this seem like a massive source of liability? Are they just assuming the investor dollars will pay for the lawyers?
> you can sign up for the waitlist
Waitlist: 404 page not found.
Might be useful if they start letting me write my own prescriptions or can send a robot to my house to run tests or perform surgery. Otherwise, I don't really see how this changes anything for me; the doctor - that I already have to see - should just check their analysis with AI on my behalf.
Many people in the comments are worried about laypeople using this for medical advice.
I'm worried that hospital admins will see this as a way to boost profit margins: replace all the doctors with CNAs armed with ChatGPT. Yes, doctors are in short supply, are overworked, and make mistakes. The solution isn't to get rid of them, but to increase the supply of doctors.
I wonder whether this will have the same pitfalls as regular ChatGPT.
The latter implicitly assumes all your questions are personal. It seems to have no concept of context for its longer term retentions.
Certainly for health, non-acute things seem to matter a lot. This is why your personal doctor who has known you for decades will spot things beyond your current symptoms.
But ChatGPT will uncritically retain, from that time you helped your teacher relative build her lesson plans, that you "are a teacher in secondary education", or from that time you helped diagnose a friend's car trouble, that you "drive a high performance car", just the same as your regular "successfully built a proxmox datacenter".
With health there will be many users asking on behalf of, or helping out, an elderly relative. I wonder whether all those 'diagnoses' and 'issues' will be correctly attributed to the right 'patient' or just mixed together and assumed to be all about 'you'.
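To make the attribution point concrete, here is the kind of structure I'd hope a memory entry has (purely speculative on my part; nothing in the announcement says how memories are actually modeled): each remembered fact carries an explicit subject and a provenance pointer, so a relative's blood pressure never silently becomes your blood pressure.

    # Hypothetical memory record that tracks WHO a fact is about and WHERE it came from.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MemoryEntry:
        subject: str       # "self", "mother", "friend", ...
        fact: str          # the remembered attribute
        source_chat: str   # which conversation it was extracted from
        source_date: str   # when it was said (ISO date)

    entries = [
        MemoryEntry("mother", "takes medication for hypertension",
                    "Helping mom with her meds", "2025-03-14"),
        MemoryEntry("self", "successfully built a proxmox datacenter",
                    "Proxmox setup questions", "2025-05-02"),
    ]

    def facts_about(who: str) -> list[str]:
        # Retrieval filters by subject, so a relative's diagnosis never gets applied to you.
        return [e.fact for e in entries if e.subject == who]

    print(facts_about("self"))    # ['successfully built a proxmox datacenter']
    print(facts_about("mother"))  # ['takes medication for hypertension']

Whether the product keeps anything like a subject field, or just flattens everything into attributes of "you", is exactly what I'd want answered before handing it health data.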
yikes: https://news.ycombinator.com/item?id=46524382
[Teenager died of overdose 'after ChatGPT coached him on drug-taking']
This is absolutely going to kill people. In a country that had even a modicum of regulation around providing healthcare this would be illegal.
Using AI to analyze health data has such a huge potential upside, but it has to be done locally.
I use [insert LLM provider here] all the time to ask generic, health-related questions but I’m careful about what I disclose and how I disclose it to the models. I would never connect data from my primary care’s EHR system directly to one of these providers.
That said, it’ll be interesting to see how the general population responds to this and whether they embrace it or have some skepticism.
I’m not confident we’ll have powerful/efficient enough on-device models to build this before people start adopting the SaaS-based AI health solutions.
ChatGPT’s target market is very clearly the average consumer who may not necessarily care what they do with their data.
I trust they considered the bias that exists in the medical research in their training data. I wonder if OpenAI will implement morbidity & mortality (M&M) rounds to learn from mistakes and missed diagnoses.
Based on the reports of various failings on the safety front, I sure hope users will take that into account before they get advice to take 500g of aspirin.
ChatGPT has become an indispensable health tool for me. It serves as a great complement to my doctor. And there have been at least two cases in our house where it provided recommendations that were of great value (one possibly life-saving and the other saving us from an unnecessary surgery). I think that specialized LLMs will eventually be the front-line doctor/nurse.
All the Americans here arguing why this is a good thing, how your system is so flawed, etc. remember that this will be accessible to people in countries with good, free healthcare.
This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees. People are going to choose asking ChatGPT instead of their local doctor who is not only cheaper(!!!) but also actually educated.
People saying that this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population, yet you think that a medical tool made for the issues of 5% of the population will be AMAZING and LIFE SAVING for the other 95%, more than harmful? Get a grip.
Not to mention shitty doctors, which exist everywhere, likely using this instead of their own brains. Great work guys.
I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.
>purpose-built encryption and isolation
Repeated 2x without explanation. Good start.
---
>You can further strengthen access controls by enabling multi-factor authentication
Pushing 2fac on users doesn't remove the need for more details on the above.
---
>to enable access to trusted U.S. healthcare providers, we partner with b.well
>wellness apps—like Apple Health, Function, and MyFitnessPal
Right...?
---
>health conversations protected and compartmentalized
Yet OAI will share those conversations with enabled apps, along with "relevant information from memories" and your "IP address, device/browser type, language/region settings, and approximate location"? (per https://help.openai.com/en/articles/20001036-what-is-chatgpt...)
My guess is that it is just chatgpt tuned to give whatever advice creates the lowest possible liability for openai.
Get ready to learn about the food pyramid, folks.
It’s so foreign to me that anyone would want this in their life.
Doctors should be expected to use these tools as part of general competency. Someone I knew had herpes zoster on her face, and the doctor had no idea and gave her diabetic medication. A five-second inference with Gemini on my part, uploading only the picture (after disabling Apps Activity to protect her privacy), and it said what it was. It's deplorable. And there's no way for this doctor to lose their job. They can keep being worse than Gemini Flash and making money.
Joined the wait-list. Can't wait.
AI already saved me from having an unnecessary surgery by recommending various modern-medicine (not alternative medicine) alternatives which ended up being effective.
Between genetics, blood/stool/urine tests, scans (ultrasound, MRI, X-ray, etc.), and medical history... doctors don't have time for a patient with non-trivial or non-obvious issues. AI has the time.
I am amazed at the false dichotomy (ChatGPT vs. doctors) discussed in the comments. Let's look at it from the perspective of how complex software is developed by a team of software engineers with AI assistance. There is GitHub, Slack discussions, pull requests, reviews, agents, etc...
Why can't a patient's health be seen this way?
Version controlled source code -> medical records, health data
Slack channels -> discussion forum dedicated to one patient's health: among human doctors (specialists), AI agents and the patient.
In my opinion we are in the stone age compared to the above.
There's "Ask ChatGPT" overlay at the bottom of the page, so I asked "how risky is giving my medical data to OpenAI?" Interestingly ChatGPT advised caution ;) In short it said: 1. Safer than standard AI chats, 2. Not as safe as regulated healthcare systems (it reminded that OpenAI is not regulated and does not follow e.g. HIPAA), 3. Still involves inherent cloud risks.
This thread reads like an advertisement for ChatGPT Health.
I came to share a blog post I just published, titled: "ChatGPT Health is a Marketplace, Guess Who is the Product?"
OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.
https://consciousdigital.org/chatgpt-health-is-a-marketplace...