> Is summarizing by a human much different?
One thing I have noticed that drives me up the wall with AI-generated summaries is that most of the time they aren't decent summaries at all. They are summaries of a summary: they tell you what the document is about instead of what it says.
For instance: "This document describes a six-step plan to deploy microservices to any cloud using the same user code, leading to various new trade-offs."
OK, so what are the six steps, and what are the trade-offs? That is the summary I actually want, not a blurb.
The point of a summary is to tell me what the most important ideas are, not to make me read the damn document. The same thing happens with AI summaries of meetings: "The team had a discussion on the benefits of adopting a new technology." OK, so what were the conclusions, if any?
Unfortunately, LLMs have learned to summarize from bad examples, but a human can, and ought to, provide a better one.