I actually wrote up quite a few thoughts related to this a few days ago but my take is far more pessimistic: https://www.neilwithdata.com/outsourced-thinking
My fundamental argument: the way the average person is using AI today is as "Thinking as a Service," and this is going to have absolutely devastating long-term consequences, training an entire generation not to think for themselves.
The interesting axis here isn’t how much cognition we outsource, it’s how reversible the outsourcing is. Using an LLM as a scratchpad (like a smarter calculator or search engine) is very different from letting it quietly shape your writing, decisions, and taste over years. That’s the layer where tacit knowledge and identity live, and it’s hard to get back once the habit forms.
We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.
The “lump of cognition” framing misses something important. It’s not about how much thinking we do, but which thinking we stop doing. A lot of judgment, ownership, and intuition comes from boring or repetitive work, and outsourcing that isn’t free. Lowering the cost of producing words clearly isn’t the same as increasing the amount of actual thought.
Outsourcing thinking is exactly what I tell our developers to do. They are hired to do the kind of thinking I’d rather not do.
Some of humanity’s most significant inventions are language (symbolic communication), writing, the scientific method, electricity, the computer.
Notice something subtle.
Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.
This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.
How many of you know how to do home improvement? Fix your own clothes? Grow your own food? Cook your own food? How about making a fire or shelter? People used to know all of those things. Now they don't, but we seem to be getting along in life fine anyway. Sure, the media frightens us all about the dangers lurking from not knowing more, but actually our lives are fine.
The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.
I still read the LLMs’ output quite critically, and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent. They’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.
Interesting read..
To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention..
Distributed verification. 8 billion of us can divide up the topics and subjects and pool together our opinions and best conclusions.
A lot of this stuff depends on how a person chooses to engage, but my contrarian take is that throughout history, whenever anyone said technology X would lead to the downfall of humanity for reasons Y, that take was usually correct.
The article he references gives this example:
“Is it lazy to watch a movie instead of making up a story in your head?”
Yes, yes it is. This was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.
For many if not most people cultural or technological expectations around what skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person didn’t have to learn to read a map.
When we transitioned from paper and evening news to 24 hour partisan cable news, I think more people outsourced their political opinions to those channels.
See Scott Alexander’s The Whispering Earring (2012):
https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.
[dead]
Thinking developed naturally as a tool that helps our species to stay dominant on the planet, at least on land. (Not by biomass but by the ability to control.)
If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.
Thought, like any other tool, is useful when it solves more problems than it creates. For instance, the ability to move very fast may be beneficial if it gets you where you want to be, and detrimental if it misses the destination often and badly enough. Similarly, if outsourced intellectual activities miss the mark often and badly enough, the increased speed is not very helpful.
I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
This list of things not to use AI for is so quaint. There's a story on the front page right now from The Atlantic: "Film students who can no longer sit through films." But why? Aren't they using social media, YouTube, Netflix, etc. responsibly? Surely they know the risks, and surely people will be just as responsible with AI, even given the enormous economic and professional pressures to be irresponsible.