This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.
Where I'm skeptical of this study:
- 54 participants, only 18 in the critical 4th session
- 4 months is barely enough time to adapt to a fundamentally new tool
- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?
- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch
Where the study might have a point:
Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.
[Edit]: Formatting
> "they've always been wrong before"
In my opinion, they've almost always been right.
In the past two decades, we've seen less-tech-savvy middle managers devalue anything done on a computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked in online communities.
In the programming world, you've seen people say "how hard could it be? It's just adding a new button/changing the font/whatever..."
And strangely, in the end those tech muggles were the insightful ones.
Needs more research. Fully agree on that.
That said:
TV very much is the idiot box. Not necessarily because of the TV itself but rather what's being viewed. An actually engaging and interesting show/movie is good, but last time I checked, it was mostly filled with low-quality trash and constant news bombardment.
Calculators do do arithmetic, and if you asked me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.
Not sure "they've always been wrong before" applies to TV being the idiot box and everything after
> 4 months is barely enough time to adapt to a fundamentally new tool
Yes, but there's also the extra wrinkle that this whole thing is moving so fast that a 4-month-old result is borderline obsolete. The same goes for the future: any study starting now, based on the state of the art on 22/01/2026, will involve models and potentially workflows that are already obsolete by 22/05/2026.
We probably can't ever adapt fully when the entire landscape is changing like that.
> Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.
The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.
AI complements our own minds on both sides of this: unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models, but we don't, and the AI runs on hardware which allows it to.
For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.
I think novels and TV are bad examples, as they don't substitute for a process. The writing one is better.
Here's the key difference for me: AI does not currently replace full expertise. In contrast, there is no "higher level of storage" that books can't handle and only human memory can.
I need seniors to use AI with any assurance. I get seniors by having juniors execute supervised, lower-risk, more mechanical tasks for years. In a world where AI does those tasks, I get no seniors.
> TV was the "idiot box."
TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.
> The historical pattern suggests cognitive abilities shift rather than disappear.
Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
I think that is a VERY false comparison. As you say, LLMs try to take over entire cognitive and creative processes, and that is a bigger problem than outsourcing arithmetic.
Were any of the prior fears totally wrong?
> That said, I'm not sure "they've always been wrong before" proves they're wrong now.
I think a better framing would be "abusing any new tool/medium (using it too much or for everything) can lead to negative effects". It is hard to clearly define what counts as abuse, so further research is required, but I think it is a healthy approach to accept that there are downsides in certain cases (that probably applies to everything).
"Socrates worried writing would destroy memory".
He may have been right... Maybe our minds work in a different way now.
Once you realize that what we remember are the extremized strawman versions of the complaints, you can see that they were not wrong.
Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary in each tribe for someone to learn the stories. We have much less of that today. Writing preserves accuracy much better (up to conquerors burning down libraries, whereas before it would have taken genocide), but to hear a person stand up and quote Desiderata from memory is a touching experience of the human condition.
Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg, you can witness a similar experience: reading for typos solidifies the story in your mind in a way that casual reading doesn't. In losing scribes, we lost the prioritization of texts and this class of person with intimate knowledge of important historical works. With the addition of copyright, we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...
Pulp fiction. I think very few people (but I would be disappointed if it were no one) would argue that Dan Brown's The Da Vinci Code is on the same level as War and Peace. From here magazines were created, on even cheaper paper, rags some would call them (or use that word for tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment, and text lost its solemnity. The importance of the written word diminished on average as the words being printed became more banal.
TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:
Technology is a double-edged sword: we may gain something, but we can also lose things, and we did. Whether it was progress or not is generally a normative question; a majority often agrees in one sense or another, but there are generational differences in those norms.
In the same way that overuse of a calculator leads to atrophy of arithmetic skills and overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool that writes essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is that its conclusion seems so obvious that it may be too easy to believe, which can hide poor statistical power or p-hacking.
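To put rough numbers on that "poor statistical power" worry: here's a minimal back-of-the-envelope sketch, not a reanalysis of the actual paper. I'm assuming a paired within-subject comparison and a conventional "medium" effect size of d = 0.5, which the real design and effect may not match.

```python
# Back-of-the-envelope power check for the n=18 session mentioned upthread.
# Assumptions (mine, not the study's): paired/one-sample t-test, d = 0.5,
# two-sided alpha = 0.05.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Statistical power with only 18 participants:
power_n18 = analysis.solve_power(effect_size=0.5, nobs=18, alpha=0.05)
print(f"power with n=18: {power_n18:.2f}")  # roughly 0.5 under these assumptions

# Sample size needed to reach the conventional 80% power target:
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n needed for 80% power: {n_needed:.0f}")  # low-to-mid 30s under these assumptions
```

Under those assumptions, 18 participants gives roughly coin-flip power for a medium effect, so both the null and positive findings from that session should be read loosely.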
None of the examples you provided were being sold as “intelligence”
How do we know they were wrong before?
> they've always been wrong before
Were they? It seems that often the fears came true, even Socrates’
Soapbox time.
They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.
We're becoming increasingly like the WALL-E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.
And it's not even that machines are always better; they only have to be barely competent. People will risk their lives in a horribly janky self-driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox
Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).