And yet we are discussing this in the context of a reporter having been fired from Ars Technica in 2026 for publishing an article that included inaccurate LLM-generated summaries. How come?
https://news.ycombinator.com/item?id=47226608
Maybe you should read the article? :)
What failed was extracting verbatim quotes, not summarizing.
If you want an LLM to do verbatim anything, it has to be a tool call. So I’m not surprised.
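To illustrate the point: rather than having the model reproduce a quote from memory (where it can silently paraphrase or fabricate), you give it a tool that only returns exact text from the source document. The model proposes an approximate phrase; the tool anchors it to the real text, so anything quoted is verbatim by construction. This is a minimal sketch, assuming a simple sentence-level fuzzy match; the function name and matching strategy here are hypothetical, not from the article.

```python
# Sketch of a "verbatim quote" tool an LLM could call instead of
# generating quotes itself. The tool only ever returns exact
# sentences from the source, so the output cannot be fabricated.
# (find_verbatim_quote is a hypothetical example, not a real API.)
import difflib
from typing import Optional

def find_verbatim_quote(source: str, approximate: str,
                        min_ratio: float = 0.6) -> Optional[str]:
    """Return the source sentence closest to `approximate`,
    or None if nothing matches well enough."""
    # Naive sentence split; a real implementation would use a proper tokenizer.
    sentences = [s.strip() for s in source.replace("\n", " ").split(". ") if s.strip()]
    best = difflib.get_close_matches(approximate, sentences, n=1, cutoff=min_ratio)
    return best[0] if best else None

source = ("The reporter used an LLM to summarize filings. "
          "Several quotes did not appear in the original documents.")

# The model's paraphrase gets snapped back to the exact source text.
print(find_verbatim_quote(source, "Several quotes didn't appear in the originals"))
```

The key design choice is that the model never emits the quote directly: it emits a search query, and the tool's return value is what gets quoted.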