Hacker News

paulgrimes1 · last Wednesday at 9:04 PM · 12 replies

Here’s something: my ChatGPT quietly assumed I had ADHD for around nine months, up until October 2025. I don’t suffer from ADHD. I only found out through an answer that began “As you have ADHD...”

I had it stop right there and asked it to tell me exactly where it got this information: the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from “nine months previous.” It continued to insist that I had ADHD and that I had told it so, but it was unable to reference exactly when or where.

I asked, “Do you think it’s dangerous that you have assumed I have a medical/neurological condition for this long? What if you gave me incorrect advice based on this assumption?” In response it gave a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.

This is a class action waiting to happen.


Replies

rafram · last Wednesday at 9:27 PM

> nine months previous

It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.

heavyset_go · last Thursday at 10:04 AM

ChatGPT used the name on my credit card, a name which isn't uncommon, and started talking about my business, XYZ, that I don't have and never claimed to.

Did some digging and there was an obscure reference to a company that folded a long time ago associated with someone who has my name.

What makes it creepier is that they have the same middle name, which isn't in my profile or on my credit card.

When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out[1] that they're required to adhere to by law in several places.

Also, given that my name isn't rare, there are unfortunately some people with unsavory histories documented online with the name. I can't wait to be confused for one of them.

[1] https://privacy.openai.com/policies/en/

lm28469 · last Thursday at 10:15 AM

I wouldn't be surprised if it's because people self-diagnose and talk about their """ADHD""" all the time on Reddit & co., where a lot of ChatGPT's training data comes from.

roger_ · last Wednesday at 9:37 PM

Disable memories so each chat is independent.

If you want chats to share info, use a project.

soared · last Wednesday at 10:16 PM

It doesn’t have itself as a data source to reference, so asking “tell me when you said this,” etc., will never work.

Lio · last Thursday at 9:03 AM

This actually highlights a big privacy problem with health AI.

Say I’m interested in some condition and want to know more about it so I ask a chatbot about it.

It decides that “asking for a friend” means I actually have that condition and then silently passes that information on to data brokers.

Once it’s in the broker network, it’s treated as truth.

We lack the proper infrastructure to control our own personal data.

Hell, I doubt there’s anyone alive who can even name every data broker, let alone contact them all to police what information they’re passing around.

usmanity · last Wednesday at 10:26 PM

This seems to be a memory problem with ChatGPT; in your case, I bet it was changing a lot of answers because of it. For me, it really liked referring to the fact that I have an ADU in my backyard, almost pointlessly: something like "Since you walk the dogs before work, and you have a backyard ADU, you should consider these items for breakfast..."

llmslave2 · last Thursday at 11:57 AM

I feel like the right legal solution is to make the service providers liable, in the same way that if you offered a service where a human diagnosed you and they fucked up, the service would be liable. And real liability, with developers and execs going to jail or being fined heavily.

The AI models are just tools, but the providers who offer them are not just providing a tool.

This also means that if you run the model locally, you're the one liable. I think this makes the most sense and gives a fairly simple place to draw the line.

GuB-42 · last Thursday at 10:00 AM

I wonder if that's because so many people claim to have ADHD for dubious reasons, often some kind of self-diagnosis. Maybe because being "neurodivergent" is somewhat trendy, or maybe to get some amphetamines.

ChatGPT may have picked that up and now gives people ADHD for no good reason.

dunk010 · last Wednesday at 9:26 PM

Perhaps you do ;-)

mountainriver · last Wednesday at 11:34 PM

Machine learning has been used in healthcare forever now.

dyauspitr · last Thursday at 7:12 AM

[flagged]
