> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”. In conclusion, understanding and leveraging the respective strengths and weaknesses of different agents in learning is critical in the field of future hybrid intelligence.
Maybe I'm trying to read and understand it too quickly, but I don't see anything in the abstract that supports that strong conclusion.
> The results revealed that: (1) learners who received different learning support showed no difference in post-task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self-regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
The ChatGPT group performed better on essay scores and showed no deficit in knowledge gain or transfer, but they exhibited different self-regulated learning processes (not worse or better, just different?).
If anything, my own conclusion from the abstract would be that ChatGPT is helpful as a learning tool, since it helped them improve essay scores without compromising knowledge gains. But again, I only read the abstract; maybe they go into more detail in the paper that makes the abstract make more sense.
I don't really know what "metacognitive laziness" is even after they explain it in the paper, but I use LLMs to filter noise and help automate the drudgery of certain tasks, allowing me to spend my energy and peak focus time on the more complicated tasks. Anecdotal, obviously. But I don't see how this hinders my ability to "self-regulate". It's just a tool, like a hammer.
From a learning perspective, it can also be a shortcut to getting something explained in several different ways until the concept "clicks".
I drew a similar conclusion from the abstract as you. The only negative I could take from it is that with higher essay scores one might expect higher knowledge gain, and that wasn't present.
However, I agree that it doesn't really seem to be a negative compared to other methods.
I have found ChatGPT is pretty good at explaining topics when the source documentation is poorly written or lacks examples. Obviously it does make mistakes, so skepticism about its output is a good idea.
Yeah, the abstract could use a bit more work. The gist is that being in a closed-loop cycle with ChatGPT only helps with the task at hand, not with engaging in the full learning process. Instead they say "When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently."
I have been using LLMs for my own education since they came out, and I have watched my kid use them.
Some kids might pick up a calculator and then use it to explore geometric growth, or look for interesting repeating patterns of numbers.
Another kid might just use it to get their homework done faster and then run outside and play.
The second kid isn't learning more via the use of the tool.
So the paper warns that the use of LLMs doesn't necessarily change what the student is interested in or how they are motivated, and that we might need to build checks into the tool for how it is being used, to reduce the impact of the second scenario.