Hacker News

Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant

253 points by misswaterfairy yesterday at 10:41 PM | 168 comments

Comments

mcv today at 8:16 AM

This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.

I recently worked on something very complex that I don't think I would have been able to tackle as quickly without AI: a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, and the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues; at that point the AI clearly also had no real idea what it was doing and just made things worse.
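(For the curious, here's a minimal sketch of two of the classic Sugiyama phases in Python: longest-path layering and a single barycenter ordering pass. The graph, names, and simplifications here are illustrative only; the full framework also needs cycle removal, dummy nodes for edges spanning multiple layers, and Brandes-Köpf for the final coordinates.)

```python
# Minimal sketch of two Sugiyama phases on a tiny DAG (illustrative only).
# Simplifications: no cycle removal (input is already acyclic), no dummy
# nodes for long edges, and naive ordering instead of full Brandes-Koepf.
from collections import defaultdict

def layer_longest_path(nodes, edges):
    """Layer assignment: each node sits one layer below its deepest predecessor."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    layer = {}
    def depth(n):
        if n not in layer:
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=-1)
        return layer[n]
    for n in nodes:
        depth(n)
    return layer

def barycenter_pass(layers, edges):
    """Crossing reduction: sort each layer by the mean position of its
    predecessors in the layer directly above (one downward sweep)."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    for i in range(1, len(layers)):
        pos = {n: x for x, n in enumerate(layers[i - 1])}
        layers[i].sort(key=lambda n: sum(pos[p] for p in preds[n] if p in pos)
                       / max(len(preds[n]), 1))
    return layers

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "c"), ("b", "c"), ("a", "d"), ("c", "e"), ("d", "e")]
layer = layer_longest_path(nodes, edges)
layers = [[n for n in nodes if layer[n] == i] for i in range(max(layer.values()) + 1)]
for i, row in enumerate(barycenter_pass(layers, edges)):
    print(f"layer {i}: {row}")  # coordinate assignment (Brandes-Koepf) omitted
```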

So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.

So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.

show 5 replies
sdoering today at 8:57 AM

This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.

Where I'm skeptical of this study:

- 54 participants, only 18 in the critical 4th session

- 4 months is barely enough time to adapt to a fundamentally new tool

- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?

- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch

Where the study might have a point:

Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.

[Edit]: Formatting

show 16 replies
carterschonwald today at 7:23 AM

Idk, if anything I'm thinking more, energized by the idea that I might be able to build everything I've ever planned out. At least the way I'm using them, it's like the perfect assistive device for my flavor of ADHD — I get an interactive notebook I can talk through crazy stuff with. No panacea for sure, but I'm so much higher functioning it's surreal. I'm not even using 'em in the volume many folks claim, more like pair programming with a somewhat mentally ill junior colleague. Much faster than I'd otherwise be.

This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like not having ADHD.

show 7 replies
blackqueeriroh today at 6:10 AM

I encourage folks to listen to Cat Hicks, a brilliant psychologist for software teams [1], and her wife, teaching neuroscientist Ashley Juavinett [2], discussing the myriad problems with this study on their excellent podcast, Change, Technically: https://www.buzzsprout.com/2396236/episodes/17378968

[1] https://www.catharsisinsight.com
[2] https://ashleyjuavinett.com

show 1 reply
culi today at 8:53 AM

My friend works with people in their 20s. She recently brought up her struggle to do the math in her head for when to clock in/out for lunch (30 minutes after an arbitrary time). The young coworker's response was "Oh, I just put it into ChatGPT."

The kids are using ChatGPT for simple maths...

show 2 replies
softwaredoug yesterday at 11:55 PM

Druids used to decry that literacy caused people to lose their ability to memorize sacred teachings. And they were right! But literacy still happened, and we're all either dumber or smarter for it.

show 5 replies
0dayz today at 10:21 AM

It's a bit tiring seeing these extreme positions on AI crop up time and time again. AI is not some cure-all for code stagnation or creating products, nor is it destroying productivity.

It's a tool, and this study at most indicates that we don't use as much brain power for the specific task of writing code. But did they look into, for instance, the maintenance or management of code?

That is what you'll be relegated to when vibe coding.

show 1 reply
netsharc yesterday at 11:44 PM

An obvious comparison is probably the habitual use of GPS navigation. Some people blindly follow it, and some seemingly don't even remember routes they routinely take.

show 8 replies
k8sToGo today at 7:08 AM

The title is missing an important part: "... for Essay Writing Task".

treenode today at 10:11 AM

I don't see why this is unexpected. 'Using your brain actively vs evaluating AI' is neurally equivalent to 'active recall vs reading notes'.

misswaterfairy yesterday at 10:41 PM

It seems this study has been discussed on HN before, though it was recently revised in very late December 2025.

https://arxiv.org/abs/2506.08872

show 1 reply
captain_coffee yesterday at 11:22 PM

Curious what the long-term effects of the current LLM-based "AI" systems, embedded in virtually everything and pushed aggressively, will be in, let's say, 10 years. Any strong opinions or predictions on this topic?

show 5 replies
canxerian today at 9:35 AM

My use case for ChatGPT is to delegate mental effort on certain tasks, so that I can pour my mental energy into things I truly care about, like family, certain hobbies, and relationships.

If you are feeling over-reliant on these tools, then a quick fix that's worked for me is to have real conversations with real people. Organise a coffee date if you must.

coopykins today at 7:51 AM

When I have to put together a quick fix, I reach for Claude Code these days. I know I can give it the specifics and, in my recent experience, it will find the issue and propose a fix. Now I have two options: I can trust it, or I can dig in and understand why it's happening myself. I sacrifice gaining knowledge for time. I often choose time, and put it into areas I think are more important than this, but I'm aware of the trade-off.

If you give up your hands-on interaction with a system, you will lose your insight about it.

When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.

That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite: they're great for skipping 'learning' and just getting the results. Learning comes from doing the grunt work.

I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.

show 2 replies
samthebaam today at 8:49 AM

This has been the same argument since the invention of pen and paper. Yes, the tools reduce engagement and immediate recall and memory, but also free up energy to focus on more and larger problems.

The study seems to focus only on the first part and not on the other end of that trade-off.

show 1 reply
potatoman22 yesterday at 11:58 PM

I've definitely noticed an inverse association between how much I vibe-code something and how good my internal model of the system is. That bit about LLM users not being able to quote their own essay resonates too: "oh, we have that unit test?"

curl-up today at 9:06 AM

The prompt they use in `Figure 28` is a complete mess, all the way from opening with "Your are an expert" [sic] to the highly overlapping categories to the poorly specified JSON output with no clear direction on how to fill in those fields.

A similar mess can be found in `Figure 34`, with the added bonus of "DO NOT MAKE MISTAKES!" and "If you make a mistake you'll be fined $100".

Also, why do these research papers always use such weak LLMs? All of this makes their results very questionable, even when they mostly agree with "common intuition".
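For contrast, here's a minimal sketch of what a tighter scoring prompt might look like. The field names, scale, and wording are hypothetical, not taken from the paper:

```python
# Hypothetical, tightened essay-scoring prompt: one explicit scale,
# non-overlapping fields, and an exact JSON shape to return.
SCORING_PROMPT = """You are grading a single essay.
Return ONLY a JSON object with exactly these keys:
  "thesis_clarity":   integer 1-5 (1 = no discernible thesis, 5 = explicit and specific)
  "evidence_quality": integer 1-5 (judge support for claims, not writing style)
  "organization":     integer 1-5 (judge paragraph structure and transitions only)
  "justifications":   object mapping each key above to one sentence of rationale
Do not output anything outside the JSON object.

Essay:
{essay}
"""

print(SCORING_PROMPT.format(essay="<essay text here>"))
```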

kachapopopow today at 10:08 AM

I mean, I think this is okay. I can't do math in my head at all, and it hasn't stopped me from solving mathematical problems. You might not be able to write code, but you are still the primary problem solver (for now).

I have actually been improving in other areas instead, like design, general cleanliness of the code, future extensibility, and bug prediction.

My brain is not 'normal' either so your mileage might vary.

yndoendo today at 12:44 AM

How can you validate ML content when you don't have educated people?

Thinking everything ML produces is valid just short-circuits the brain.

I see the AI wars as creating coherent stories. Company X starts using ML and believes what was produced is valid and can grow their stock. Reality is that Company Y poisoned the ML, and the product or solution will fail, not right away but over time.

foota yesterday at 11:42 PM

IMO, programming with AI is fairly different between fully vibes-based work, where you don't look at the code at all, and using AI to complete tasks. I still feel engaged when I'm actively "working with" the AI, as opposed to a more hands-off "do X for me".

I don't know that the same distinction makes as much sense in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it, like "instead of arguing X, argue Y then X" or something.

Interestingly, I find myself doing a mix of both "vibing" and more careful work. The other day I used it to update some code that I cared about and wanted to understand better, where I was more engaged, but simultaneously to make a dashboard for looking at that code's output, which I didn't care about at all so long as it worked.

I suspect that vibe coding would be more like drafting an essay, from the mental-engagement POV.

show 2 replies
pfannkuchen today at 12:00 AM

Talking to LLMs reminds me of arguing with a certain flavor of Russian. When you clarify a misunderstanding of theirs, they act as if your clarification were a fresh claim, which lets them avoid ever having to backpedal. It strikes me as intellectually dishonest in a way I find very grating. I do find it interesting, though, as the incentives that produce the behavior in both cases may be similar.

show 1 reply
spongebobstoes today at 1:15 AM

the article suggests that the LLM group had better essays as graded by both human and AI reviewers, but they used less brain power

this doesn't seem like a clear problem. perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?

using less brain power for a better result doesn't seem like a clear problem. it might reveal shortcomings in our education system, since these were SAT-style questions. I'm sure calculator users experience the same effects vs mental mathematics

jchw yesterday at 11:51 PM

I try my best to make meta-comments sparingly, but, it's worth noting the abstract linked here isn't really that long. Gloating that you didn't bother to read it before commenting, on a brief abstract for a paper about "cognitive debt" due to avoiding the use of cognitive skills, has a certain sad irony to it.

The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.

We probably need more studies like this, across more topics and with larger sample sizes, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.

nothrowaways yesterday at 11:28 PM

> Cognitive activity scaled down in relation to external tool use

ReptileMan today at 10:49 AM

I have a whole phonebook of numbers I know by heart, all of them from before my first mobile phone. Not a single one memorized afterwards. A lot of stuff I used to remember when there was no Google; afterwards, I just remembered how to find it using Google. And so on.

mrvmochi today at 12:27 AM

I wonder what would happen if we used RL to minimize the user's cognitive debt. Could this lead to the creation of an effective tutor model?

show 1 reply
falloutx yesterday at 11:42 PM

I think a lot more people, especially at the higher end of the pay scale, are in some kind of AI psychosis. I have heard people at work talk about how they use ChatGPT for quick health advice, some ask it for gym advice, and others say they just dump entire research reports into it and read the summary.

show 3 replies
mettlerse yesterday at 11:19 PM

Article seems long, need to run it through an LLM.

show 2 replies
somewhatrandom9 yesterday at 11:29 PM

"Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."

newswasboring today at 9:02 AM

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

- Socrates on Writing.

bethekidyouwant today at 12:06 AM

I'm gonna run a new study: in one arm I give the participants really shitty tools, and in the other I give them good tools, to build something and see which one takes more brain power.

show 1 reply
xenophonf yesterday at 11:45 PM

I'm very impressed. This isn't a paper so much as a monograph. And I'm very inclined to agree with the results of this study, which makes me suspicious. To what journal was this submitted? Where's the peer review? Has anyone gone through the paper (https://arxiv.org/pdf/2506.08872) and picked it apart?

show 1 reply
bethekidyouwant yesterday at 11:26 PM

“LLM users also struggled to accurately quote their own work” - why are these studies always so laughably bad?

The last one I saw was about smartphone users who took a test, quit their phone for a month, took the test again, and, surprisingly, did better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time around, right after a month of quitting their phone?

usrbinbash today at 9:06 AM

No shit? When I outsource thinking to a chatbot, my brain gets less good at thinking? What a complete and utter surprise.

/s

orliesaurus yesterday at 11:38 PM

I think I can guess this article without reading it: I've never been on major drugs, even medically speaking, yet using AI makes me feel like I'm on some potent drug that's eating my brain. What's state management? What's this hook? Who cares, send it to Claude or whatever.

show 2 replies
lacoolj yesterday at 11:24 PM

Don't even need to read the article if you've been using 'em. You already know just as well as I do how bad it gets.

A door has been opened that can't be closed and will trap those who stay too long. Good luck!

show 2 replies
DocTomoe yesterday at 11:30 PM

TL;DR: We had one group not do some things, and later found out that they did not learn anything by not doing the things.

This is a non-study.

show 1 reply
Der_Einzige yesterday at 11:54 PM

Good. Humans don’t need to waste their mental energy on tasks that other systems can do well.

I want a life of leisure. I don’t want to do hard things anymore.

Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”

Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754

show 1 reply
trees101 today at 12:14 AM

Skill issue. I'm far more interactive when reading with LLMs. I try things out instead of passively reading. I fact check actively. I ask dumb questions that I'd be embarrassed to ask otherwise.

There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study "proves" AI rots your brain by measuring people using it in the dumbest way possible.