Hacker News

PeterHolzwarth today at 5:27 AM (9 replies)

I don't yet see how this case is any different from trusting things you read on the web in general. What's unique about the ChatGPT angle that sets it apart from any number of forums, dark-net boards, Reddit, etc.? I don't mean that there isn't potentially something unique here, but my initial read is that this is a case of "an unfortunate kid typed questions into a web browser and got horrible advice."

This seems like a web problem, not a ChatGPT issue specifically.

I expect some may respond that ChatGPT and other LLMs available for chat on the web are specifically worse by virtue of expressing things with a high degree of inaccurate authority. But again, I feel this describes the web in general, not uniquely ChatGPT/LLMs.

Is there an angle here I am not picking up on, do you think?


Replies

toofy today at 6:09 AM

if it doesn’t know medical advice, then it should say “why tf would i know?” instead it confidently responds “oh, you can absolutely do x mg of y mixed with z.”

these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?

give us all of the money, but also never trust our product.

our product will replace humans in your company, also, our product is dumb af.

subscribe to us because our product has all the answers, fast. also, never trust those answers.

ninjin today at 6:35 AM

The uniqueness of the situation is that OpenAI et al. pose as intelligent entities that serve information to you with authority.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.

stvltvs today at 5:44 AM

Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in it.

Animatstoday at 5:57 AM

> highly inaccurate authority.

The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.

Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing the same failure in other contexts: systems that trust every part of their prompt history equally, which leads to security holes.

falkensmaize today at 5:44 AM

AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.

anonzzzies today at 6:18 AM

The big issue remains that LLMs cannot know when their response is inaccurate. Even after "reading" a page with the correct info, they can still simply generate wrong data for you, and deliver it with authority, because they just read the source and there's a link, so it must be right.

xyzzy123 today at 5:29 AM

The difference is that OpenAI has much deeper pockets.

I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".

wat10000 today at 6:20 AM

A major difference is that it’s coming straight from the company. If you get bad advice on a forum, well, the forum just facilitated that interaction, your real beef is with the jackass you talked to. With ChatGPT, the jackass is owned and operated by the company itself.

squigz today at 6:05 AM

The difference is that those other mediums enable a conversation - if someone gives bad advice, you'll often have someone else saying so.