
jillesvangurp · last Thursday at 10:51 AM

Same here. It's a double-edged sword, though. I know some people who work in health care, including some doctors. They deal with a lot of hypochondriacs: people who imagine they have all sorts of issues and then try to MacGyver themselves back to health. You can't read an HN thread on health care without dozens of them coming out of the woodwork to share their magical, special way of beating the system. Silicon Valley has a long history of people doing all sorts of weird crap. There's a well-known anecdote about Steve Jobs turning orange when he restricted himself to a diet of carrots because he believed god knows what. In the end he died young of pancreatic cancer. Probably not connected, but he was a smart person who did some wacky stuff that probably wasn't good for him.

I'm on statins and experiencing some of their common side effects. ChatGPT was useful for figuring out some of that. I've had other minor issues where even just understanding what the medication I'm being prescribed is supposed to do was helpful. Doctors aren't great at explaining their decisions. "Just take pill x, you'll be fine".

Doctors have to diagnose patients in a way that isn't that different from how I would diagnose a technical issue. Except they are starved for information and have to extract all of it from a 10-15 minute consult with a patient who is only describing vague symptoms. It's easy to see how that goes wrong sometimes, or how they might miss critical things. And they deal with all the hypochondriacs too, so they have to cut through that and can't assume the patient is being truthful.

LLMs are useful tools if you know how to use them. But they can also feed a lot of confirmation bias. The best doctors tell you what you need to hear, not what you want to hear. Either way, tools like this are now a reality that doctors need to deal with, whether they like it or not.

Some of the Covid crisis overlapped with early ChatGPT usage. It wasn't pretty. People bought into a lot of nonsense they came up with while doomscrolling Reddit or using early versions of LLMs. But things have improved since then: LLMs are better and less likely to go completely off the rails.

I try to look at this a bit rationally: I know I don't get the best care possible all the time, because doctors have to limit the time they spend on me, and I'm publicly insured in Germany, so I'm subject to cost savings. I can help myself to some extent by doing my homework. But in the end, I have to trust my doctor to confirm things. My mode is to use ChatGPT to understand what's going on and then give my doctor a complete picture, so he has all the information he needs to help me.