I sent the entire series by Aphyr [1] to some friends. Two of them, independently, responded with a variant of, "TLDR, can you give a summary?"
I chat with these friends a lot, but I rarely send them articles; when I do, it's because I think they're profound, so I expected them to read it. These are smart people who have a history of reading lots of books.
They are both huge AI proponents and use AI for nearly everything now. Debates on various topics with them used to be rich; now they're shallow, and they just send me AI summaries of points they're clearly predisposed to. Their attention spans are dwindling.
[1] https://aphyr.com/data/posts/411/the-future-of-everything-is...
Or maybe they just don't want to read a long form analysis on something?
I also enjoy the series. But sometimes my friends send me things and I'm like, "not gonna read all of that."
Just because your friends don't want to invest the same amount of time that you want to invest in your own personal enrichment doesn't mean they're getting stupid.
MIT actually has a paper on how ChatGPT use impacts cognitive skills for essay writing.
> Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
> https://arxiv.org/abs/2506.08872
> Cognitive activity scaled down in relation to external tool use. …
> Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
You might just overhype this blog.
I read one of his last week (I think?) and didn't like it that much. I read this one despite that because it's quite high on HN for whatever reason.
I don't think everything is lies, and I don't like how he treats an LLM as just some bullshit machine.
It's also waaaay too early to even understand where this is going. We as humans have never had this much compute, or used it in this particular way. It could literally be the road to a utopia or a dystopia. But it's very crazy to experience it.
His article series feels so negative and dismissive that I'm not taking anything from it.
There is so much research, money, and compute behind the AI topic right now that every week or two something relevant, better, or new comes out of it: 2D and 3D models, new LLM versions, smaller LLMs, faster inference (Nvidia's Nemotron). We don't know how this will continue.
And the weird thing is that he clearly knows plenty about LLMs, yet it feels so negative and dismissive; it's hard to put a finger on it.
Maybe it means they were never really as smart as you thought?
Not meant to be snarky. It's been two decades now since my first wide-eyed entry into the workforce, moving for new opportunities, meeting new people. It's been great. There are a lot of smart people out there. I also realize that many people I saw as smart had access to more content than I did. I still appreciated their sharing; it was enlightening to me. But after 20 years, I think back and it's literally quoting things from smart YouTube videos and regurgitating the latest thought leaders.
We all do this, but like you, what's meaningful to me is the chewing, the dissection, and the synthesis; coming together to share different perspectives, and so on. I've had those friends too! It's just not 1:1.