The old ChatGPT models scanning the NIH PubMed repositories with proper prompting (e.g. "…backed by randomized controlled trial data") were an amazing health care tool. The stripped-down cheaper versions today are junk and I’ve had to start relying on Grok :-( I’m not convinced OpenAI can make this work
Handily appearing next to this story on the front page of HN:
"Health care data breach affects over 600k patients"
I am amazed at the false dichotomy (ChatGPT vs. doctors) discussed in the comments. Let's look at it from the perspective of how complex software is developed by a team of software engineers with AI assistance. There are GitHub repos, Slack discussions, pull requests, reviews, agents, etc.
Why can't a patient's health be seen this way?
Version-controlled source code -> medical records, health data
Slack channels -> a discussion channel dedicated to one patient's health, shared by human doctors (specialists), AI agents, and the patient.
In my opinion we are in the stone age compared to the above (a rough sketch of the mapping is below).
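To make the analogy concrete, here is a minimal sketch of what a "patient repo" with one shared discussion channel could look like. All type and field names are hypothetical illustrations, not any existing EHR or FHIR API:

    # Hypothetical sketch of the analogy above: a patient's record as a
    # version-controlled history plus one shared discussion channel.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class RecordCommit:                  # ~ a commit in the "patient repo"
        author: str                      # doctor, lab system, device, or the patient
        message: str                     # e.g. "Add May 2024 blood panel"
        data: dict                       # the actual observations/records
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class ChannelMessage:                # ~ a Slack message / PR review comment
        sender: str                      # "cardiologist", "triage-agent", "patient"
        text: str
        refers_to: Optional[int] = None  # index of the commit being discussed

    @dataclass
    class PatientRepo:
        history: list = field(default_factory=list)
        channel: list = field(default_factory=list)

        def commit(self, author: str, message: str, data: dict) -> int:
            self.history.append(RecordCommit(author, message, data))
            return len(self.history) - 1

    repo = PatientRepo()
    idx = repo.commit("lab-system", "Add May 2024 blood panel", {"LDL_mg_dl": 160})
    repo.channel.append(ChannelMessage(
        sender="triage-agent",
        text="LDL is elevated; flagging for the cardiologist to review.",
        refers_to=idx,
    ))

The point is not this particular data model but that every change would be attributable, reviewable, and discussable in one place, the way a pull request is.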
Doctors should be expected to use these tools as part of general competency. Someone I knew had herpes zoster on her face and the doctor had no idea and gave her diabetes medication. A five-second inference with Gemini by me, uploading only the picture (after disabling Apps Activity to protect her privacy), and it said what it was. It's deplorable. And there's no way for this doctor to lose their job. They can keep being worse than Gemini Flash and keep making money.
Most doctors are just people who had a strong ability to pass tests and finish medical school.
Most of the ones I've worked with aren't passionate about their specialty or their patients, and their neglect and mistakes show it.
Zero mention of actual compliance standards or HIPAA on the launch page of a product that is supposed to interconnect with medical records and other health apps! No thanks...
Not a chance.
All the Americans here arguing why this is a good thing, how your system is so flawed, etc.: remember that this will be accessible to people in countries with good, free healthcare.
This is going to be the alternative to going to a doctor who is a 10-minute drive away, entirely and completely free, who knows me and my history, and who has a couple of degrees. People are going to choose asking ChatGPT over their local doctor, who is not only cheaper(!!!) but also actually educated.
People saying that this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population. Yet you think that a medical tool made for the issues of that 5% will be AMAZING and LIFE-SAVING, rather than harmful, for the other 95%? Get a grip.
Not to mention shitty doctors, who exist everywhere, likely using this instead of their own brains. Great work, guys.
I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.
AI apocalyptic hegemony
Expectation: I, Robot
Reality: human extinction after ChatGPT kills everyone with hallucinated medical advice, and lives alone.
High risk, high reward? Or is it just a sign of the level of regulation companies can expect in the year 2026 that they're not afraid to take this path?
Is it cancer?
Oh no wait, you’re right it’s heart disease!
Oh it’s not heart disease? It’s probably cancer
Rinse and repeat
AI in health - makes sense.
OpenAI in health - I'm reticent.
As someone who pays for ChatGPT and Claude, and uses them EVERY DAY... I'm still not sure how I feel about these consumer apps having access to all my health data. OpenAI doesn't have the best track record on data safety.
Sure, OpenAI's business side has SOC 2/ISO 27001/HIPAA compliance, but does the consumer side? In the past their certifications have been very clearly "this is only for the business platform". And yes, I know regular consumers don't know what SOC 2 is other than a pair of socks that made it out of the dryer... but still. It's a little scary when getting into very personal/private health data.
Gattaca is supposed to be a warning, not a prediction. Then again neither was Idiocracy, yet here we are.
ChatGPT arguably saved my father's life two weeks ago. He was in a rehab center after breaking his hip and his condition suddenly deteriorated.
While waiting for the ambulance at the rehab center, I plugged in all his health data from his MyChart and described the symptoms. It accurately predicted (in its top two possibilities) a C. diff infection.
Fast forward two days: the ER had prescribed general antibiotics. I pushed the doctors to check for C. diff and sure enough he tested positive for it, and they got him on the right antibiotic.
I think it was just in time, as he ended up going to the ICU before he got better.
Maybe they would have tested for C. diff anyway, but it definitely made me trust ChatGPT. Throughout his stay, after every single update in his MyChart, I copy and paste the PDF into the long-running thread about his health.
I think ChatGPT Health, being able to automatically import this directly, will be a huge game changer. Health and wellness is probably my number one use case for AI.
My dad is getting discharged tomorrow (to a different rehab center, thankfully)
What could possibly go wrong?
Isn't it illegal to provide health advice without a license?
Yeah, gotta say I'm not enthusiastic about handing over any health data to OpenAI. I'd be more likely to trust Google or maybe even Anthropic with this data and that's saying something.
recommending people eat horse dewormer as a service... here we go
At first I was reading this like 'oh boy here we go, a marketing ploy by ChatGPT when Gemini 3 does the same thing better', but the integration with data streams and specialized memory is interesting.
One thing I've noticed in healthcare is that for the rich it is preventative, but for everyone else it is reactive. For the rich everything is an option (homeopathics/alternatives); for everyone else it is straight to generic pharma drugs.
AI has the potential to bring these to the masses, and I think for those who care, it will bring a concierge-style experience.
I would not trust a company with no path to profitability with my medical health records, because they are more likely to do something unethical and against my interests, like selling insights about me to other companies, out of desperation for new revenue streams.
I'd rather not talk to a commercial LLM about personal (health) details. Guess how they will (or already do) try to make these completely overhyped chatbots profitable. OpenAI could sell relevant personal health-related data to insurance companies, or maybe the HR department of your next job. Just saying...
> ChatGPT Health is not yet available in the UK or EU
Ah yes. Because in the EU you cannot actually steal people's data.
Brave new world we live in...
My guess is that it is just ChatGPT tuned to give the advice that creates the lowest possible liability for OpenAI.
Get ready to learn about the food pyramid, folks.
This is going to kill people
Since there are a lot of positive comments at the top (not surprising given the astroturfing on HN), please watch this video from ChubbyEmu, a real doctor, about a case study from someone who self-diagnosed using AI: https://www.youtube.com/watch?v=yftBiNu0ZNU
For every positive experience there are many more that are negative, if not life-threatening or simply deadly.
Another nail in the coffin for apps that depend on AI APIs, because the AI companies themselves are building products on their own APIs (unless you can make the UX significantly better). UX now seems like the prime differentiator when building apps.
Absolutely not.
These discussions pit ChatGPT against doctors — but we can combine them! Doctors can use it to improve their diagnostics and treatment options.
>>Designed with privacy and security at the core ...Health is built as a dedicated space with added protections for sensitive health information and easy-to-use controls.
Good words at a high level, but it would really help to have some detail about the "dedicated space with added protections".
How isolated is the space? What are the added protections? How are they implemented? What are the ways our info could leak? And many more.
I wish I didn't have to be so skeptical of something that should be a great good, providing more health info to more people, but the leadership of this industry really has "gone to the dark side".
I've already been using ChatGPT to evaluate my blood results and give me some solutions for my diet and workouts. It's been great so far without any special model.
America can't afford public healthcare and now can't afford RAM, so OpenAI can offer a bad version of healthcare.
This is great. I've been using AI for health optimization (exercise routine, diet, etc.) based on my biomarkers for a while now.
Once ChatGPT recommended a simple solution to a chronic health issue: a probiotic with a specific bacterial strain. And while I had used probiotics before, apparently they all contain different strains of bacteria. The one ChatGPT recommended really worked.
"Medical attention is all you need."
Such a dystopian nightmare we live in now. The US is cutting our actual healthcare services to subsidize this shit so billionaires can become trillionaires while the rest of us suffer.
LLMs have actually not been that great so far on the preventative side / risk prediction. They are very good if you already have the disease, but if you are only trending toward it they are pretty chill about it. The better way would be to first calculate the risks via deterministic algorithms and then do the differential diagnosis, i.e. something a specialist doc would do (a rough sketch of that two-step approach is below). This is an example of an online tool that does it like that: https://www.longevity-tools.com/liver-function-interpreter
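To illustrate the "deterministic score first, LLM second" idea, here is a minimal sketch using FIB-4, a published liver fibrosis index. The cut-offs and the prompt wording are my own illustrative assumptions, not how the linked tool actually works:

    # Sketch only: compute a deterministic risk score first, then hand the
    # result to an LLM for interpretation/differential diagnosis.
    # FIB-4 = (age * AST) / (platelets [10^9/L] * sqrt(ALT)).
    # The cut-offs below are commonly cited but used purely as an example.
    import math

    def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
             platelets_10e9_l: float) -> float:
        return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

    def risk_band(score: float) -> str:
        if score < 1.3:
            return "low"
        if score > 2.67:
            return "high"
        return "indeterminate"

    score = fib4(age_years=54, ast_u_l=48, alt_u_l=36, platelets_10e9_l=180)
    band = risk_band(score)

    # Only now would the LLM be asked to explain the finding, with the
    # deterministic result pinned in the prompt rather than left to vibes.
    prompt = (f"FIB-4 is {score:.2f}, which falls in the {band} band for "
              "advanced fibrosis risk. What follow-up would a specialist consider?")

That way the "is this trending badly?" question is answered by an explicit formula, and the LLM only does interpretation on top of it.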
I strained my groin/abs a few weeks ago and asked ChatGPT to adjust my training plan to work around the problem. One of its recommendations was planks, which is exactly the exercise that injured me.
My cleaning lady's daughter had trouble with her ear. ChatGPT suggested injecting some oil into it. She did, and it became a huge problem, so much so that she had to go to the hospital.
I'm sure ChatGPT can be great, but take it with a huge grain of salt.
This reads more like desperation to get a headline than an actual product, especially when half the links don't even work.