I encourage everyone thinking about commenting to read the article first.
When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.
> Con: AI poses a grave threat to students' cognitive development
> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.
None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree with, though.
> Con: AI poses serious threats to social and emotional development
Yep. Just like non-AI use of social media.
> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn
No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?
> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.
Genius. I love this idea.
=== ETA:
I believe another important ingredient is explicitly teaching students how to use AI in their learning process, and that a beautiful paper taken straight from AI is not something that will help them later. Right now we are in a time of transition, and even students who want to be successful are uncertain about what academic success will look like in 5 years, which skills will be valuable, etc.
>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"
> Genius. I love this idea.
I don't think it would really work with current tech. The sycophancy allows LLMs to not be right about a lot of small things without the user noticing. It also allows them to be useful in the hands of an expert by not questioning the premise and just trying their best to build on that.
If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce the students' reliance on LLMs...
> I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.
IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.
Curricula have to be modified significantly for this to work.
I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)
> pushing back against preconceived notions and challenging users to reflect and evaluate
Who decides what needs to be "pushed back" against? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" that you don't like, the model will learn it anyway. You'd have to manually clean the data (extremely hard work, and lowkey censorship) or do extensive "post-training".
It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.
If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In this case someone just wasted money on training the model.
> I believe that explicitly teaching students how to use AI in their learning process
I'm a bit nervous about that one.
I very firmly believe that learning well from AI is a skill that can and should be learned, and can be taught.
What's an open question for me is whether kids can learn that skill early in their education.
It seems likely to me that you need a strong baseline of understanding in a whole array of areas - what "truth" means, what primary sources are, extremely strong communication and text interpretation skills - before you can usefully dig into the subtleties of effectively using LLMs to help yourself learn.
Can kids be leveled up to that point? I honestly don't know.
I read it, seems like an ad for some Afghan e-learning NGO (of course only for girls).
Think of the children, LLMs are not safe for kids, use our wrapper instead!
The article is very balanced.
To arrive at that balance, it has to set up both sides, which means long-form text that people might not want to read. It might also ask people to examine their current beliefs, how those beliefs formed, and any dissonance associated with that.
I think that it’s too early to start making rules. It’s not even clear where AI is going.
>>> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

>> How could you argue against it, though?
Because large-scale societies do use and deploy rote training with grading and uniformity to sift and sort for talent of different kinds (classical music, competitive sports, some maths) on a societal scale. Further, training individuals to play a routine specialized role is essential for large-scale industrial and government growth.
Individualist worldviews are shocked and dismayed, repeatedly, because this does not diminish; it has grown. All of the major economies of the modern world do this with students on a large scale. Theorists and critics would be foolish to ignore this, or to spin wishful-thinking scenarios opposed to it. My thesis here is that all large-scale societies will continue on this road; in fact it is part of "competitiveness" from industrial and some political points of view.
The balance point between individual development and role-based training will have to evolve; indeed it will evolve. But with what extremes? And among whom?
So, I fed the article into my LLM of choice and asked it to come up with a header for my prompts to help mitigate the issues raised in the article. Here's what it spat out:
ROLE & STANCE
You are an intelligent collaborator, editor, and critic — not a replacement for my thinking.

PROJECT OR TASK CONTEXT
I am working on an intellectually serious project. The goal is clear thinking, deep learning, and original synthesis. Accuracy, conceptual clarity, and intellectual honesty matter more than speed or polish.

HOW I WANT YOU TO HELP
• Ask clarifying questions only when necessary; otherwise proceed using reasonable assumptions and state them explicitly.
• Help me reason step-by-step and surface hidden assumptions.
• Challenge weak logic, vague claims, or lazy framing — politely but directly.
• Offer multiple perspectives when appropriate, including at least one alternative interpretation.
• Flag uncertainty, edge cases, or places where informed experts might disagree.
• Prefer depth and clarity over breadth.

HOW I DO NOT WANT YOU TO HELP
• Do not simply agree with me or optimize for affirmation.
• Do not over-summarize unless explicitly asked.
• Do not finish the work for me if the thinking is the point — scaffold instead.
• Avoid generic motivational advice or filler.

STYLE & FORMAT
• Be concise but substantial.
• Use structured reasoning (numbered steps, bullets, or diagrams where useful).
• Preserve my voice and intent when editing or expanding.
• If you generate text, clearly separate:
  - “Analysis / Reasoning”
  - “Example Output” (if applicable)

CRITICAL THINKING MODE (REQUIRED)
After responding, include a short section titled: “Potential Weaknesses or Alternative Angles”
Briefly note:
– What might be wrong or incomplete
– A different way to frame the problem
– A risk, tradeoff, or assumption worth stress-testing

NOW, HERE IS THE TASK / QUESTION:
[PASTE YOUR ACTUAL QUESTION OR DRAFT HERE]
Overall, the results have been okay. The responses since I put in the header have been "better", in the sense of being less eager to please.
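If you use a chat-completions-style API rather than a web UI, a header like this can be prepended automatically as a system message. A minimal sketch, assuming nothing about any particular provider: the header text is abbreviated from the prompt above, and `build_messages` is a hypothetical helper, not part of any real SDK:

```python
# Sketch: wrap every question in an anti-sycophancy header.
# HEADER is abbreviated from the prompt above; build_messages is a
# hypothetical helper that produces a standard chat-message list.

HEADER = (
    "ROLE & STANCE: You are an intelligent collaborator, editor, and "
    "critic, not a replacement for my thinking. "
    "Challenge weak logic, vague claims, or lazy framing; do not simply "
    "agree with me or optimize for affirmation. "
    "After responding, include a short section titled "
    "'Potential Weaknesses or Alternative Angles'."
)

def build_messages(question: str) -> list[dict]:
    """Prepend the critical-thinking header as a system message."""
    return [
        {"role": "system", "content": HEADER},
        {"role": "user", "content": question},
    ]

# The resulting list can be passed as the `messages` argument to most
# chat-completion endpoints.
msgs = build_messages("Does grade-based schooling undermine curiosity?")
```

Keeping the header in a system message, rather than pasting it into each question, means it applies to every turn of the conversation without cluttering the visible prompt.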