This thread reads like an advertisement for ChatGPT Health.
I came to share a blog post I just published, titled "ChatGPT Health is a Marketplace, Guess Who is the Product?"
OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.
https://consciousdigital.org/chatgpt-health-is-a-marketplace...
Great write-up. I'd even double down on this statement: "You can opt in to chat history privacy". This is really "You can opt in to chat history privacy on a chat-by-chat basis, and there is no way to set a default opt-out for new chats".
This. It’s the same play with their browser. They are building the most comprehensive data profile on their users and people are paying them to do it.
I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.
My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.
The worry I have isn't that people are misled - that happens all the time, especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact on medical professionals who are already overworked and will now have to deal with people's commitment to an LLM-based diagnosis.
The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.
Of course, my take completely ignores the disruption angle: tech and insurance working hand in hand to undercut regulation, before eventually pulling the rug.
May your piece stay at the highest level of this comment section.
> This thread reads like an advertisement for ChatGPT Health.
This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.
I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.
The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.
As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple of communities I'm in with a lot of younger people. Combined with the TikTok trend of diagnosing everything as a symptom of ADHD, it's getting to the point where, in some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall, along with tips for doctor shopping to find doctors who won't ask too many questions before prescribing.