
jmyeet · last Saturday at 2:33 PM · 3 replies

This kind of comment scares me because it's an example of people substituting an LLM for professional advice, when LLMs are known to hallucinate or otherwise simply make stuff up. I see this all the time when I write queries and get the annoying Gemini AI snippet on a subject I know about: often the AI makes provably and objectively false statements.


Replies

bwb · last Saturday at 5:44 PM

You have to use critical thinking + it helps to have some info on the subject + it shouldn't be used to perform self-surgery :)

I spent about 12 hours over 2 days checking, rechecking, and building out a plan. Then I did 2-hour sessions on YouTube, over several weeks, learning the new exercises with proper form (and that continues, as form is hard). That was followed by an appointment with a trainer to test my form and review the workout as a whole (which he approved of). No trainer really knows how this injury will manifest, so a lot of this is also helped by my 10 years of experience.

This isn't clicking a button and then following the LLM like a lemming. This is a tool like Google search, but better.

I could not have done this before using the web. I would have had to read books and research papers, then try to understand which exercises didn't target x muscle groups heavily, etc. I just couldn't do that. The best case would have been a trainer with the same injury, maybe.

simianwords · last Saturday at 3:08 PM

You are exaggerating. LLMs simply don’t hallucinate all that often, especially ChatGPT.

I really hate comments such as yours, because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People can also generally be trusted to tell good advice from bad. They are smart like that.

We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.

Your comment, and many others like it, just tries to signal pessimism as a virtue and has very little bearing on reality.

travisgriggs · last Saturday at 2:40 PM

I have this same reaction.

But I also have to honestly ask myself: aren't humans also prone to making stuff up when they feel they need to have an answer but don't really have one?

And yet, despite admitting that humans hallucinate and make mistakes too, I remain uncomfortable placing ultimate trust in LLMs.

Perhaps, while LLMs simulate authority well, there is an uncanny valley effect in trusting them, because some of the other aspects of interacting with an authoritative person are "off".