Hacker News

bossyTeacher · 12/07/2025

>LLMs can actually make up for their negative contributions. They could go through all the references of all papers and verify them,

They will just hallucinate their existence. I have tried this before.


Replies

sansseriff · 12/07/2025

I don’t see why this would be the case with proper tool calling and context management. If you tell a model with a blank context, ‘you are an extremely rigorous reviewer searching for fake citations in a possibly compromised text’, then it will find errors.

It’s this weird situation where getting agents to act against other agents is more effective than trying to convince a working agent that it’s made a mistake. Perhaps because these things model the cognitive dissonance and stubbornness of humans?
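
A minimal sketch of that blank-context reviewer pattern, assuming the openai Python SDK; the model name and function are stand-ins, not anything from the thread:

    from openai import OpenAI  # assumes the openai package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def review_references(references_text: str) -> str:
        # Fresh context on every call: the reviewer has no stake in the
        # text it is checking, unlike the agent that produced it.
        resp = client.chat.completions.create(
            model="gpt-4o",  # stand-in; any capable chat model would do
            messages=[
                {"role": "system",
                 "content": "You are an extremely rigorous reviewer searching "
                            "for fake citations in a possibly compromised text."},
                {"role": "user", "content": references_text},
            ],
        )
        return resp.choices[0].message.content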

knome · 12/07/2025

I assumed they meant using the LLM to extract the citations and then using external tooling to look up and fetch the original paper, at least verifying that it exists, that the title and abstract are relevant, and that the authors are cited correctly.
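
A minimal sketch of that verify-against-an-external-index step, using the public Crossref REST API (the endpoint is real; the matching heuristics are illustrative assumptions, and a real pipeline would also check year, venue, and DOI):

    import requests  # assumes the requests package is installed

    def citation_exists(title: str, author_surnames: list[str]) -> bool:
        # Ask Crossref for the best bibliographic match to the cited title.
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        if not items:
            return False  # no record at all: likely hallucinated
        hit = items[0]
        found_title = (hit.get("title") or [""])[0].lower()
        if title.lower() not in found_title and found_title not in title.lower():
            return False  # a paper exists, but not under this title
        # Check that at least one cited author appears on the record.
        surnames = {a.get("family", "").lower() for a in hit.get("author", [])}
        return any(s.lower() in surnames for s in author_surnames)

    # e.g. citation_exists("Attention Is All You Need", ["Vaswani"]) -> True

The LLM only does the extraction; the existence check itself never touches model weights, so there is nothing for it to hallucinate.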