Hacker News

robotresearcher (yesterday at 5:12 PM)

I tried that out in my field of expertise, to calibrate my expectations. ChatGPT invented multiple references to non-existent but plausibly-titled papers written by me.

I think of that when asking questions about areas I don’t know.

That was about 18 months ago, so maybe this kind of hallucination is under control these days.


Replies

wat10000 (yesterday at 6:02 PM)

LLMs are good for tasks where you can verify the result. And you must verify the result unless you're just using it for entertainment.

SoftTalker (yesterday at 5:24 PM)

Turns out Gell-Mann amnesia applies to LLMs too.

wahnfrieden (yesterday at 5:45 PM)

I would use an agent (Codex) for this task: use the Pro model in ChatGPT for deep research to assemble the information and citations, then have Codex work through the citations with a task list, web-searching to verify or correct each one. Codex can be used like a test suite.
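The verify step described above can also be scripted directly. A minimal sketch, assuming you check each claimed paper title against the public Crossref works API and accept it only if an indexed title is a close fuzzy match; the similarity threshold and matching method here are illustrative assumptions, not how Codex actually verifies:

```python
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher


def title_similarity(a: str, b: str) -> float:
    """Case-insensitive fuzzy similarity between two titles (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def crossref_candidates(title: str, rows: int = 5) -> list[str]:
    """Fetch candidate titles from the Crossref works API (network call)."""
    url = ("https://api.crossref.org/works?rows=%d&query.bibliographic=%s"
           % (rows, urllib.parse.quote(title)))
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    # Crossref stores titles as a list per work; flatten all candidates.
    return [t for item in items for t in item.get("title", [])]


def verify_citation(title: str, fetch=crossref_candidates,
                    threshold: float = 0.9) -> bool:
    """A claimed citation 'verifies' if some indexed title closely matches it.

    `fetch` is injectable so the matching logic can be tested without
    network access. The 0.9 threshold is an arbitrary choice.
    """
    return any(title_similarity(title, cand) >= threshold
               for cand in fetch(title))
```

A hallucinated-but-plausible title will usually return only loosely related candidates and fail the threshold, which is exactly the failure mode described at the top of the thread; a stricter pipeline would also cross-check the claimed authors and year on the matched record.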