Hacker News

knighthack today at 8:13 AM (4 replies)

> LLMs are terrible at accurately summarizing anything. They very randomly latch on to certain keywords and construct a narrative from them, with the result being something that is plausibly correct but in which the details are incorrect, usually subtly so, or important information is omitted because it wasn't part of the random selection of attention.

I don't know what you've been doing, but the summaries I get from my LLMs have been rather accurate.

And in any event, summaries are just that - summaries.

They don't need to be 100% accurate. Demanding that is unreasonable.


Replies

lamasery today at 11:51 AM

The LLM meeting-summary bot in Teams seems accurate… unless you were in the meeting, and also closely read the summary afterward. It misrepresents what people actually said all the time.

avereveard today at 10:59 AM

Depends on the topic; often what they consider important isn't what actually is important, and essential details fall out of view. I'm having good success with YouTube videos, not as much with technical docs.

carefree-bob today at 8:16 AM

Yes, search and summarization are where LLMs shine. I use them all the time for that, and much less for code generation. I would say search > summarization > debugging > code gen/image gen.

suddenlybananas today at 8:16 AM

>They don't need to be 100% accurate. Demanding that is unreasonable.

If an intern were routinely making things up in the summaries they gave their bosses, they'd be let go.