I feel like part of this post is a bit hypocritical.
> This is why reading actual books in full might now be more valuable than it ever has been: Only if you’ve seen every word will you discover insights and links an AI would never include in its average-driven summary.
Is summarizing by a human much different? Let's check if the author has a consistent stance on reading every word.
> The 4 Minute Millionaire: 44 Lessons to Rethink Money, Invest Wisely, and Grow Wealthy in 4 Minutes a Day

> This book compiles 44 lessons from some 20 of history’s best books about money, finance, and investing. Each lesson can be read in about 4 minutes and comes with a short action item.
Hmmmm
Not necessarily: assuming I've been following Nik for a while, I have reasons to trust his summary more than an LLM's. I would understand Nik's biases, and understand why he would focus on one thing over another. Nik has a reputational incentive to do a good job and not completely misrepresent the book. I would also value Nik's personal, subjective view on the material, given an understanding of his background and, again, his biases. On the other hand, I would have no idea what an LLM would focus on when summarizing, I would have no reason to trust it (LLMs fail in unpredictable ways), and an LLM's "opinion" is some average over the internet's + annotators' opinions.
Not sure that's fair: claiming you prefer reading texts in full to summaries isn't the same as saying you never want to read a summary in any context.
Aside from that, it seems more valuable to consider the ideas in the blog on their own merit, rather than attacking the writer for not having been true to those ideas in every past action.
> Is summarizing by a human much different?
One thing I have noticed that drives me up the wall with AI-generated summaries is that, most of the time, they don't provide decent summaries at all. They read like summaries of a summary.
For instance: "This document describes a six-step plan to deploy microservices to any cloud using the same user code, leading to various new trade-offs."
OK, so what are these six steps and what are the trade-offs? That would be the real summary I want, not the blurb.
The point of a summary is to tell me what the most important ideas are, not make me read the damn document. This also happens with AI summaries of meetings: "The team had a discussion on the benefits of adopting a new technology." OK, so what, if any, were the conclusions?
Unfortunately, LLMs seem to have learned to summarize from bad examples, but a human can, and ought to, provide a better one.