They’ll get there. Tech people have been exposed to it longer. They’ve been around long enough to see people embarrassed by LLM hallucinations.
For people who are newer to it (which is most people), it seems so amazing that the errors feel forgivable.
No, I don't believe they will.
If anything, I expect this to get worse.
The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool produces genuinely useful results, without any glaring errors, for the majority of things I throw at it.
I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.