Hacker News

softwaredoug | today at 1:16 PM

The other day I was researching with ChatGPT.

* ChatGPT hallucinated an answer

* ChatGPT saved the hallucinated answer to memory, so it persisted between conversations

* When asked for a citation, ChatGPT found two AI-generated articles to back itself up

It took a while, but I eventually found human-written documentation from the organization that created the technical thingy I was investigating.

This happens A LOT for topics at the edge of what's easily found on the Web, where you have to do true research, evaluate sources, and make good decisions about what you trust.


Replies

visarga | today at 4:01 PM

Simple solution: run the same query on 3 different LLMs with different search integrations; if they concur, the chances of hallucination are low.
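
A minimal sketch of that idea in Python, assuming you already have thin wrapper functions around each model (the ask_* names below are hypothetical placeholders, not real client calls): send the same question to every provider, normalize the answers, and only accept one that a strict majority agrees on.

```python
# Cross-check one question across several LLMs and accept only a majority answer.
# The provider wrappers (ask_openai, ask_anthropic, ask_gemini) are hypothetical
# placeholders; plug in whatever search-enabled models you actually use.
from collections import Counter
from typing import Callable, List, Optional, Tuple

def cross_check(
    question: str,
    providers: List[Callable[[str], str]],
    normalize: Callable[[str], str] = lambda s: s.strip().lower(),
) -> Tuple[Optional[str], List[str]]:
    """Ask every provider the same question; return (majority answer or None, raw answers)."""
    answers = [ask(question) for ask in providers]
    counts = Counter(normalize(a) for a in answers)
    best, votes = counts.most_common(1)[0]
    if votes > len(providers) // 2:   # strict majority concurs -> low hallucination risk
        return best, answers
    return None, answers              # no consensus -> treat the claim as suspect

# Usage (with the hypothetical wrappers above):
# answer, raw = cross_check("Does library X expose feature Y?",
#                           [ask_openai, ask_anthropic, ask_gemini])
# if answer is None:
#     print("Models disagree; go find the primary documentation.")
```

In practice you would compare the specific claim (a yes/no, a version number, a cited URL) rather than the full free-text responses, since those will never match verbatim.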

fireflash38 | today at 2:52 PM

AI reminds me of combing through Stack Overflow answers. The first one might work... or it might not. Try again, find a different SO question and answer. Maybe the third time's the charm...

Except it's all via the chatbot, and it isn't as easy to get it to move off a broken solution.