There are two aspects to this, from my POV, and I think it might be controversial.
When I have a question about any topic and ask ChatGPT, I usually chat about more things, coming up with questions based on the answer, many of them stupid questions. I feel like I am taking in the information, analyzing it, and then diving deeper because I am curious. This is just how I learn about stuff. I know I need to check a few things, and that it's not fully accurate, but the conversation flows in a direction I like.
Compare this to researching on the internet: there are some good aspects, but more often than not, I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated telling of the facts). That feels like someone else decided what questions are important, what angles we need to look at, and what the conclusion should be. Yes, it is educational, but I am always left with lingering questions.
The difference is curiosity. If people are curious about a topic, they will learn. If not, they are happy with the answer. And that is not laziness. You cannot be curious about everything.
I've been calling this out since OpenAI first introduced ChatGPT.
The danger in ubiquitously available LLMs, which seemingly have an answer to any question, isn’t necessarily their existence.
The real danger lies in their seductive nature: how tempting it becomes to immediately reach for the nearest LLM for an answer rather than taking a few moments to quietly ponder the problem on your own. That act of manipulating the problem in your head, critical thinking, is ultimately a craft. And the only way to get better at it is to practice it in a deliberate, disciplined fashion.
A preprint is available on arXiv [0]; see the top of page 18 for their definition of metacognitive laziness:
"In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks."
And they seem to define, implicitly, “metacognitive load” as the cognitive and metacognitive effort required for learners to regulate their learning processes effectively, particularly when engaging in tasks that demand active self-monitoring, planning, and evaluation.
They analogize metacognitive laziness to cognitive offloading, where we have our tools do the difficult cognitive tasks for us, which robs us of opportunities to develop those skills and ultimately leaves us dependent on the tools.
I'm certainly of two minds on this.
On one hand, this reminds me of how all of the kids were going to be completely helpless in the real world because "no one carries a calculator in their pocket". Then calculators became something ~everyone has in their pocket (and the kids ended up just fine).
On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
The answer is probably somewhere in the middle: leveraging LLMs as a learning aid, rather than LLMs being the final stop.
The abstract neither defines "metacognitive laziness" nor makes its meaning clear from the preceding statements of the results.
Personally speaking, I find being able to ask ChatGPT continually more nuanced questions about an initial answer the one clear benefit over a Google search, where I get diminishing marginal returns on my inquisitiveness for the time invested in subsequent searches. The more precisely I formulate my question on a traditional search engine, the harder it is for non-SEO-optimized results to appear: what does appear is either meant for a casual reader and offers no new information, or is a very specialized resource that requires extensive professional background knowledge. LLMs really build the bridge to precisely the answers I want.
Cell phones and laptops in general have changed a couple of things for me, as someone who grew up without them:
- I realized about 20y-25y ago that I could run a Web search and find out nearly any fact, probably one-shot but maybe with 2-3 searches' worth of research
- About 10-15y ago I began to have a connected device in my pocket that could do this on request at any time
- About 5y ago I explicitly *stopped* doing it, most of the time, socially. If I'm in the middle of a conversation and a question comes up about a minor fact, I'm not gonna break the flow to pull out my screen and stare at it and answer the question, I'm gonna keep hanging out with the person.
There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
I don't miss it, but I have become keenly aware of how tethered my consciousness is to facts available via Web search, and I don't know that I love outsourcing that much of my brain to places beyond my control.
1. Socrates criticized writing itself: in Plato's Phaedrus he said it would "create forgetfulness in the learners' souls, because they will not use their memories" (274e-275b)
2. Leonhard Euler criticized the use of logarithm tables in calculating: in his 1748 "Introductio in analysin infinitorum" he insisted on deriving logarithms from first principles
3. William Thomson (Lord Kelvin) initially dismissed mechanical calculators, stating in an 1878 lecture at Glasgow University that they would make students "neglect the cultivation of their reasoning powers"
4. Henry Ford in his autobiography "My Life and Work" (1922) quoted a farmer who told him in 1907 that gasoline tractors would "make boys lazy and good for nothing" and they'd "never learn to farm"
5. In 1877, the New York Times published concerns from teachers about students using pencils with attached erasers, claiming it would make them "careless" because they wouldn't have to think before writing. The editorial warned it would "destroy the discipline of learning"
6. In "Elements of Arithmetic," (1846) Augustus De Morgan criticized the use of pre-printed multiplication tables, saying students who relied on them would become "mere calculative mechanism" instead of understanding numbers
7. In his 1906 paper "The Menace of Mechanical Music," John Philip Sousa attacked the phonograph writing that it would make people stop learning instruments because "the infant will be taught by machinery" and musical education would become "unnecessary"
8. In his 1985 autobiography "Surely You're Joking, Mr. Feynman!" Richard Feynman expressed concern about pocket calculators and students losing the ability to estimate and understand mathematical relationships
I could go on (Claude wrote 15 of them!). Twenty years from now (assuming AI hasn't killed us all) we'll look back and think that working with an LLM isn't the crutch people think it is now.
So humans are supposed to review all of the code that GenAI creates. We’re supposed to ensure that it doesn’t generate (obvious?) errors and that it’s building the “right thing” in a manner prescribed by our requirements.
The anecdotes from practitioners using GenAI in this way suggest it’s a good tool for experienced developers because they know what to look out for.
Now take folks who don't yet know what they're doing and are in the process of learning. They don't know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is, or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
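To make that concrete, here is a hypothetical sketch (mine, not from the article or the study) of the kind of plausible-looking code a model can produce, where an experienced reviewer spots the bug instantly and a beginner may not:

    # A classic Python pitfall that generated code can reproduce verbatim:
    # a mutable default argument is created once and shared across calls.
    def add_tag(item, tags=[]):      # BUG: the same list persists between calls
        tags.append(item)
        return tags

    print(add_tag("a"))   # ['a']
    print(add_tag("b"))   # ['a', 'b'] -- surprising if you expected ['b']

    # The idiomatic fix: default to None and build a fresh list per call.
    def add_tag_fixed(item, tags=None):
        if tags is None:
            tags = []
        tags.append(item)
        return tags

    print(add_tag_fixed("a"))  # ['a']
    print(add_tag_fixed("b"))  # ['b']

A reviewer who has been bitten by this spots it in a second; a learner who has only ever accepted suggestions may never notice.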
Studies such as this one are hard but important. This is an interesting one, even though the sample is small. I wonder if anyone can replicate it.
I'm at this very moment testing deepseek-r1, a so-called "reasoning" LLM, on the excellent "rustlings" tutorial. It is well documented and its solutions are readily available online. It is my lazy go-to test for coding tasks, to assess if and when I'll have to start looking for a new job and take up software engineering as a hobby. The reason I test with rustlings is also to assess its value as a learning tool for students and future colleagues. Maybe these things have use as a teacher? Also, the Rust compiler is really good at offering advice, so there's an excellent baseline to compare the LLM output against.
And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of text, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs, and especially beginners, will be confused and led down the wrong path, potentially outsourcing rational thought to something that merely sounds good.
Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. For what is probably the last generation of old-school software engineers, who were trained on coffee and tears of frustration and had to really work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by LLMs.
Before pervasive GPS, it took me very little time to actually learn and internalize a route. Now, when you're constantly guided, it takes a lot longer to remember one. The same exact thing is happening with the guided reasoning we get from LLMs.
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”. In conclusion, understanding and leveraging the respective strengths and weaknesses of different agents in learning is critical in the field of future hybrid intelligence.
Maybe I'm trying to read and understand it too quickly, but I don't see anything in the abstract that supports that strong conclusion.
> The results revealed that: (1) learners who received different learning support showed no difference in post-task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self-regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
The ChatGPT group performed better on essay scores and showed no deficit in knowledge gain or transfer, but they exhibited different self-regulated learning processes (not worse or better, just different?).
If anything, my own conclusion from the abstract would be that ChatGPT is helpful as a learning tool as it helped them improve essay scores without compromising knowledge learning. But again, I only read the abstract, maybe they go into more details in the paper that make the abstract make more sense.
This is not a concern when you are responsible for real results. If you aren't responsible for real results, you can pass off the good rhetoric of these models as an "answer". But when you need results, you realize most answers they give are just rhetoric. They are still extremely valuable, but they can only help you once you have done the work to gain a deep understanding of the problem, incentivized by actually solving it.
> Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
That's the most convoluted conclusion I've ever seen.
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”.
Calculator laziness has long been known. It doesn't cause meta-laziness, just specific laziness.
What did the researchers expect?
Humans are lazy by nature; they seek shortcuts.
So given the choice between years of rote learning for an education that in most cases amounts to a soon-to-be-forgotten certification, versus watching TikTok while letting ChatGPT do the lifting, this is all predictable, even without Behavioral Design, Hooked, etc.
And the benefits usually rise with IQ level; nothing new here, that's practically the definition of IQ.
Learning and academia are hard, and even harder for those with lower IQ scores.
A fool with a tool is still a fool, and vice versa.
Motivation also seems to be at an all-time low. Why put in hours when a prompt can work wonders?
Reading a book is a badge of honor nowadays more than ever.
In my recent programming exam (in an MSc in AI), I asked students to reflect on how generative AI has changed their coding. Almost all remarked that it's a great time-saver, but it makes them lazy and worse at coding.
And yes indeed, their ability to answer basic questions about coding on the same exam has drastically dropped versus last year.
My observation is that I learn more than ever using LLMs.
I tend to learn by asking questions. I did this using Anki cards for years ("What is this or that?"), finding the answer on the back of the index card. Questions activate my thinking more than anything, as does my attempt at answering the question in my own terms.
My motto is: seek first to understand, then to be understood (Covey). And I do this when engaging with people or a topic: by asking questions.
Now I do this with LLMs. I have been exploring ideas I would never have explored had there not been LLMs, because I would not have had the time to research material for learning, read it, and create Q&A material for myself.
I even use LLMs to convert an article into Anki cards using Obsidian, Python, LLMs, and the Anki app.
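For the curious, a minimal sketch of what such a pipeline could look like, assuming the genanki library and the OpenAI Python client; the prompt, model name, and file paths are placeholders I made up, and the actual Obsidian-based setup surely differs:

    # Minimal sketch: turn an article into an Anki deck.
    # Assumes `pip install genanki openai`; model name, prompt, and
    # file paths below are illustrative placeholders only.
    import genanki
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    article = open("article.md", encoding="utf-8").read()

    # Ask the model for question/answer pairs, one per line, tab-separated.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Turn this article into flashcards. "
                       "One card per line, formatted as: question<TAB>answer.\n\n"
                       + article,
        }],
    )

    # Define a simple front/back note type and a deck to hold the cards.
    model = genanki.Model(
        1607392319,  # arbitrary but stable model ID
        "Q&A",
        fields=[{"name": "Question"}, {"name": "Answer"}],
        templates=[{
            "name": "Card 1",
            "qfmt": "{{Question}}",
            "afmt": "{{FrontSide}}<hr id='answer'>{{Answer}}",
        }],
    )
    deck = genanki.Deck(2059400110, "Article cards")  # arbitrary deck ID

    # Parse the model's output into notes, skipping malformed lines.
    for line in resp.choices[0].message.content.splitlines():
        if "\t" in line:
            question, answer = line.split("\t", 1)
            deck.add_note(genanki.Note(
                model=model,
                fields=[question.strip(), answer.strip()],
            ))

    genanki.Package(deck).write_to_file("article.apkg")  # import into Anki

The resulting .apkg file can be imported directly into the Anki app; the only brittle part is trusting the model to stick to the tab-separated output format.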
Crazy times we are in.
I don’t see how the “metacognitive laziness” (a term used by the abstract, but not defined) follows from what they describe in the abstract as the outcomes they observed. They specifically called out no difference in post-task intrinsic motivation; doesn’t that imply that the ChatGPT users were no lazier after using ChatGPT than they were before?
I’m also a skeptic of students using and relying on ChatGPT, but I’m cautious about using this abstract to come to any conclusions without seeing the full paper especially given that they’re apparently using “metacognitive laziness” in a specific technical way we don’t know about if we haven’t read the paper.
I think this holds water.
Metacognition is really how the best of the best can continue to be at their best.
And if you don't use it, you lose it.
Idk, "explain {X} to me like I'm 12" has certainly helped me delve into new topics; Nix with Flakes comes to mind as one of my latest ventures.
How is this any different from someone 5+ years ago blindly going by whatever a Google result said about anything? I've run into conflicting answers to things on Google's first page of results; some things aren't 100% certain and require more research.
I wouldn't be surprised if this makes some people lazier, since you don't need to do the legwork of reading, but how many people already read only the headlines of articles before sharing them?
We destroyed our artists for a mash up and then wondered why there was nothing new under the sun.
Inevitably, the advance of knowledge-generating technology will have the same mental effect as having a contact list on your phone. When I was a kid I knew at least five people's phone numbers, maybe more. Even now I can recall two of them. How many can you recall from your actual contact list?
It's increasing my curiosity because it allows me to run more experiments.
This technology is arguably as ubiquitous as the calculator. So long as I understand that generative AI is a tool and not a solution, is it bad to treat it a bit like a calculator? Does this metacognitive laziness apply to those who depend on calculators?
I understand it's a bit apples-to-oranges, but I'm curious about people's take.
That just demonstrates the difference between idiots and intelligent people. I use AI and ChatGPT to learn more efficiently about a zillion topics I am interested in.
Funny, I passed the link to a WhatsApp group with some friends and the preview loaded with the title "error: cookies turned off".
I'm sure my friends will RUSH to read the article now...
This is the old "siiiiiir why do we need to do this if we have calculators"? It matters - https://www.edweek.org/education/little-numbers-add-up-to-bi... Students who know the facts will be better at math.
Even if the computer is doing all the thinking, it's still a tool. Do you know what to ask it? Can you spot a mistake when it messes up (or you messed up the input)? Can you simplify the problem and figure out what the important parts of the problem are? Do you even know to do any of that?
Sure, thinking machines will sometimes be autonomous and not need you to touch them. But when that's the case, your job won't be to just nod along to everything the computer says; you won't have a job anymore, and you will need to find a new one (probably one where you need to prompt and interpret what the AI is doing).
And yes, there will be jobs where you just act as an actuator for the thinking machine. Ask an Amazon warehouse worker how great a job that is :/
Everything is the same as with calculators.
The same is true for Google, GPS, etc.
“The kids these days are too lazy to be bothered to learn” is a psychological trap that people often fall into.
It’s not to say we shouldn’t do our best to understand and provide guardrails, but the kids will be fine.
I wonder how LLMs will learn anything new when no one does original research and everyone just asks the LLM. Will LLMs just feed back on each other, effectively hallucinating false "learning"?
Maybe we'll end up as a society of a few elites who still know how to research, think, and/or write with LLMs digesting that and regurgitating it for the masses.
Any time an empirical research project has to add QUOTES around a common term, it sets off the nonsense radar:
..."laziness"...
In the battle cry of the philosopher: DEFINE YOUR TERMS!!
What they really mean: new and different. Outside-the-box. "Oh no, how will we grade this?!?" A threat to our definition and control of knowledge.
I mean, this is the same exact thing that happened when calculators were invented. The number of people who can count in their heads drastically dropped, because why waste your time? Ditto for when maps apps came out. No more need to memorize a bunch of locations, because you can just use maps to take you there.
I feel this, because it's like I don't need to know about something; I just need to know how to know about something. Like, the initial contact with a mystery subject is overcome by knowing how to describe the mystery in a way that lets the AI understand what I don't understand, so it can fill in the understanding.
An example: I have no clue about React. I do know why I don't like to use it and why I have avoided it over the years. I describe to some ML tool the difficulties I've had learning React and using it productively ... and voila, it plots a course through the knowledge that, kinda, makes me want to learn React and use it.
It's like the human ability to form an ontology in the face of mystery, even if it is inaccurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.
Another thing I realized lately, as ML has taken over my critical faculties, is that it's really only useful for things that are already known by others. I can't ask ML to give me some new, groundbreaking idea about something; everything it suggests has already been thought, somewhere, by a real human, and thus it's not new or groundbreaking. It's just filling in a mystery gap, contextually, in my own local ontological universe.
Pretty fun times we’re having, but I do fear for the generations that will know and understand no other way than to have ML explain things for them. I don’t think we have the ethics tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow ..
As technology gets more impressive, we internalize less knowledge ourselves.
There is a Plato story in which he laments the invention of writing because people would no longer need to memorize speeches and the like.
I think there is a level of balance. Writing gave us enough efficiencies that the learned laziness made us overall more effective.
The internet in 2011 made us a bit less effective. I am not gonna lie: I spent a lot more time just collecting resources, whereas before I would have had to struggle on my own to solve a problem. You internalize more from one than the other, but is it worth the additional time every time?
I worry about current students learning through LLMs, just like I would have worried about a student graduating in physics in 2012 with constant access to Wolfram Alpha.
This stands to reason. If you need the answer to a question, and you can either get it directly or spend time researching it, you're going to learn much more with the latter approach than the former. You may be disciplined enough to do more research even when the answer is handed to you directly, but most people will not, and most companies are not interested in that; they want quick, "efficient", "competitive" solutions. They aren't considering the long-term downside.