ChatGPT has made a material difference in my ability to understand health problems and test results, and to communicate with doctors effectively. My wife and I were talking last night about how helpful it was in 2025. I hope it continues to be good at this.
I want regulators to keep an eye on this and make smart laws. I don't want it to go away, as its value is massive in my life.
(One example, if you are curious: I've been doing rehab for a back injury for about 10 years. I worked with a certified trainer/rehab professional for many years and built a program to keep me as pain-free as possible. I rebuilt the entire thing with ChatGPT/Gemini about 6 weeks ago, and I've had less pain than at any other point in my life. I spent at least 12 hours working with AI to test and research every exercise, and I've got some knowledge to help guide me, but I was amazed by how far it has come in 12 months. I ran the results by a trainer to double-check it was well thought out.)
I've had a similar positive experience and I'm really surprised at the cynicism here. You have a system that is good at reading tons of literature and synthesizing it, which then applies basic logic. What exactly do the cynics think that doctors do?
I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.
I also think health questions (and car-problem diagnosis) are excellent tasks for LLMs.
The you-are-the-product problem, and privacy generally, have me wondering when Apple will step in and provide LLM health advice in a way we can trust.
I know saying that invites the slings and arrows of those who distrust Apple, but I still believe they're the one big company out there that knows there is money in being the one guy who doesn't sell your data.
I don't think one can deny the benefits here. The detractors are effectively saying don't build a sidewalk because someone may trip and fall, or don't plant trees in your front yard because of what happened to the Texas governor.
Most would likely agree that everything needs a balanced approach. Bashing a service as outright evil and urging people to stay away, or claiming the service is flawless (which the OP isn't doing, btw), are both unbalanced positions.
Think different doesn't have to mean think extreme.
On the other hand, sometimes you end up like this guy. Are you feeling lucky?
https://arstechnica.com/health/2025/08/after-using-chatgpt-m...
If you'd been doing the rehab for 10 years, what did you need exactly? It seems like you should have had a decade to ask whatever questions you wanted.
It seems like outcomes are probably K-shaped: those capable of critical thinking, who can decide which information should be confirmed by a healthcare professional and which is relatively riskless to take from ChatGPT, should have positive outcomes.
Those who are prone to disinformation and misinterpretation may experience some very negative health outcomes.
Or it's a placebo effect.
And if it didn't work out and made you worse or, god forbid, the advice caused you to get seriously injured, then what? ChatGPT won't take any responsibility.
I have so many issues with our current health system, but the alternative is not an unreliable search tool that takes no responsibility for the information it provides.
It can be helpful, but also untrustworthy.
My mother-in-law has been struggling with some health challenges the past couple of months. My wife (her daughter) works in the medical field and has been a great advocate for her mother. This whole time I've also been peppering ChatGPT with questions, and in turn I discuss matters with my wife based on this.
I think it was generally correct in a lot of its assertions, but as time goes on and the situation doesn't improve, I occasionally revisit my chat and update it with the latest results and findings, and it keeps insisting we're at a turning point and this is exactly what we should expect to be happening.
Six weeks ago, I think its advice was generally spot on, but today it just sounds more tone-deaf and over-optimistic. I'd hate to be _relying_ on this as my only source of advice and information.
That's awesome that it's helped you so much; chronic back pain is awful. Is it possible, though, that this could be interpreted as a failure of the trainer to come up with a successful treatment plan for you? "Sudden" relief after 10 years of therapy, just because you changed the program, seems like they were having you perform the wrong exercises, no?
> to communicate with doctors effectively
Did the doctors agree? I never thought of AI as a good patient navigator, but maybe that’s its proper role in healthcare.
I agree. LLMs cannot and should not replace professionals, but there are huge gaps that can be filled by the info they provide, and the fact that you can dig deeper into any subject is huge.
This is probably a field where MistralAI could use privacy and GDPR as leverage to build LLMs around.
It doesn’t even have to be that well-read (although it is); it just has to listen to your feedback for more than 11 minutes per visit, so it can have a chance at effectively steering you…
This kind of comment scares me because it's an example of people substituting an LLM for professional advice, when LLMs are known to hallucinate or otherwise simply make stuff up. I see this all the time: when I write queries and get the annoying Gemini AI snippet on a subject I know about, I'll often see the AI make provably and objectively false statements.
>my ability to understand health problems
How do you know that this understanding is correct? To me, epistemologically, this is not too different from gaining your health knowledge from a homeopath or gaining your physics knowledge from a Flat Earther. You are in no position to discern the validity of your "knowledge".
Sounds like you’re a good little product… abundant potential for shareholder value to be extracted from you and others like you. A trip to the library or a consult with a professional would’ve given you the same or better results.
This sounds like excellent evidentiary material for a future insurer or government health provider to decide you're uninsurable, not eligible for a job, and so on.
And the great thing about it is that you already signed all your rights away for them to do this exact thing, when we could have had an open world with open models run locally instead, where you got to keep your private health information private.
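For what it's worth, the local-model option is already pretty easy. Here's a minimal sketch, assuming Ollama is running on its default port with an open model already pulled (the model name and the question are just placeholders):

    # Ask a locally running open model a health question via Ollama's
    # HTTP API; the request never leaves your machine.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",  # any open model you've pulled locally
            "messages": [
                {"role": "user",
                 "content": "What commonly causes lower back pain that worsens when sitting?"}
            ],
            "stream": False,  # return one JSON response instead of a stream
        },
    )
    print(resp.json()["message"]["content"])

Nothing in that exchange touches a third-party server, so there's no chat history for an insurer or employer to buy later.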