The conflict of interest is pretty obvious. OpenAI, Google, and Microsoft are backing a bill that funds teaching kids to use... OpenAI, Google, and Microsoft products. "AI literacy" as defined in the bill is literally "the ability to use artificial intelligence effectively." That's not literacy, that's onboarding lol. Real digital literacy teaches how systems work, who profits from them, and how to think critically about them. This bill will in practice hand curriculum design to the same vendors who endorsed it. Teaching kids to prompt ChatGPT is not the same as teaching them to understand what ChatGPT is. Nobody funding this wants the latter.
It reminds me of the 'IT Literacy' classes we had when I was in high school where they just taught us to use Microsoft Office products.
This is a step beyond drug dealers who give you the first sample for free: it's an attempt at legally mandated injection sites.
> Young people increasingly hate AI[1], and children already struggle with AI-enabled harassment that traumatizes them and disrupts their learning. And studies show kids are offloading learning onto AI models, undermining their education and social development.
[1] https://www.theverge.com/ai-artificial-intelligence/920401/g...
The coyote has already run off the edge of the cliff, so indoctrinating kids won't save them from an AI winter 6-18 months away.
This is way too close to the Simpsons joke about the periodic tables provided by Oscar Mayer
AI, in the form of LLMs, should be used as an augmenting tool, not a substituting one. The human must conceive the idea, design the solution, and fill most of the gaps; the AI should only refine, improve, and suggest options on top of an already existing base. As a parent I encourage this kind of use with my kids, rather than banning AI outright, which is futile and might even become dangerous in the future.
There is a class of such thing that could be useful. I will likely be teaching my children this literacy myself. Obviously the interstitial pop-ups don't work, and the next generation will not be coming to this technology from the point of view of watching it develop. They will see it as having always existed and while they may be appropriately sceptical, I suspect they will be far more trusting of it. So some degree of understanding the mechanics will probably allow them to learn to treat this technology appropriately.
After all, it's nigh magical stuff: a machine that talks to you in common language and is almost always right. If you weren't already prepared for it, you would trust it implicitly. When Wikipedia first came onto the scene, people behaved this way there too; they would believe it was entirely correct. But at some point there was a concerted effort in pedagogy to say things like "You can't cite a Wikipedia article," and that one easily remembered rule forced children to treat it as an aggregator.
Naturally, setting up an earmarked fund for this is nearly always a bad structure. Earmarked funds have a bad habit of being written primarily as vehicles to transfer money to pet constituencies. Teachers unions and so on always advocate for these because that's what funds the complex ecosystem of teacher educators, certification programs, curriculum development, and the rest. It's just social welfare by different means. Funds should be used flexibly to meet some outcome. Earmarked funds also tend to ratchet up: when there is no longer any need for a program, it continues to exist anyway and bleeds money away from the actual work product of education, informed students.
I get why these articles are always written in this style, but I really would appreciate better news media. Students hate a lot of things; their opinion has little bearing on whether a subject is worth learning. And the polemical style, all the "shoehorn" language and so on, is completely unnecessary and just makes me file the whole thing in the realm of a partisan Twitter post.
But the one thing I did appreciate is a link to the text of the bill.
Putting all the cynicism aside, it's amazing how much the way we deal with information has changed within our lifetimes.
When I was younger, to solve a problem, we had to memorize a large amount of information. Or know someone who did. Or visit libraries and pray they had a book on what you needed.
Then came the internet. All of that memorizing was replaced by web searches. You just focus on solving the problem, figuring out what you don't know and searching for that.
Now, it feels like we're automating the searching, the connecting of the dots, and most of the problem solving. We focus on the high-level problem description and on verifying the results.
I wonder what they'd be adding to this curriculum.
As a teacher, if permitted to teach about chatbots and have students use them, I think I'd focus on prompting first.
The best results I read about on here with LLMs seem to come down to prompt mastery.
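For what it's worth, here's a minimal sketch of how I'd frame that first lesson: the same question asked bare versus wrapped in a role, some context, and explicit constraints. The labeled-parts layout and all the example wording are my own invention, a common convention rather than any vendor's format.

```python
# A toy "prompting first" lesson: the same question asked bare vs. wrapped
# with a role, context, and explicit constraints. The labeled-parts layout
# is just a common convention; nothing here depends on any vendor's API.

def build_prompt(question: str, role: str, context: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from labeled parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Question: {question}"
    )

bare = "Why do leaves change color in the fall?"

structured = build_prompt(
    question=bare,
    role="You are a patient science tutor for a sixth grader.",
    context="The student has just learned what chlorophyll is.",
    constraints=[
        "Explain in three short paragraphs.",
        "End with one question that checks understanding.",
    ],
)

print(structured)
```

The lesson itself would be having students paste both versions into a chatbot and compare what comes back.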
Proof the people running things are stupid I guess.
If by "AI literacy" they mean "learning how AI works and how to use it effectively", then this probably would wind up backfiring. Because when you improve people's AI literacy, they use it less. They don't swear off it, but because they know what it is and is not good for, they are way more cautious in their application of AI.
Of course, they probably plan to do to education what iPads did to education: deskill children. Apple successfully obliterated the concept of a file from a generation of students by making them do their computing in a straitjacket. I can only imagine how an AI-first or AI-only educational curriculum could make kids even worse at using computers.
If AI worked as advertised then "AI Literacy" would just be "Literacy".
So far AI is funding illiteracy in schools
Gotta get em hooked while they're young.
Got to train serfs early!
Maybe a more general focus on getting students to practice critical thinking and fact-checking would be better. AI could be addressed as a small part of that, since chatbots are everywhere and students need to know how to filter out their BS.
But are NSF grants really necessary for this? To what degree is this funneling taxpayer money to buy ChatGPT subscriptions and advertise to students by getting them to use AI in the classroom?
I'm going to do everything in my power to keep this dog shit technology away from my daughter for as long as I possibly can. I can imagine implementations that I might be willing to consider for educational applications, but given these companies' demonstrated and profound lack of restraint in cramming AI into literally everything they sell, I almost can't believe anyone who doesn't work for them is suggesting that we should expose children to their products in the same context where they are supposed to be learning.
Almost.
I remember "media literacy", "digital literacy" and "smartphone literacy". Why is no-one pointing out the obvious?
This is the reason I recently ran for my kids' school board. I use AI every day and I think there is a lot of utility there, but I don't want it anywhere near my kids' school. Honestly, I don't think kids need to even lay eyes on a screen until they're in high school.
What is ‘AI Literacy’? How to prepare a prompt for maximum token efficiency?
The Chromebook has already been an unmitigated disaster for computer literacy, this will only make it worse.
What a big waste of $. For example, how did the 'coding' schools go? AI literacy will go the same way.
How about funding something useful? Like real literacy, as in reading books? That will help kids far more than "AI literacy".
It will be interesting to see the backlash to this one.
Imagine telling parents that the new teacher they hired to teach their children just makes shit up like 30% of the time.
The thing about AI is it'll teach you how to use it (aka 'AI literacy').
I thought AI was so easy to use no one would have to be trained? Are they going to teach the kids to steal copyrighted data? And write AI slop articles? And to evangelize useless side projects as time savings?
Of course they back it, since it must be taught on their products, which will hook users over time and make them money.
Why think for yourself when you have ChatGPT, Claude and Gemini to do all the thinking for you?
The owners of these systems do not even use the technologies that they are creating.
The deskilling programme will continue until morale improves.
This is entirely backwards. AI should be used as a tool to tutor kids. Kids shouldn't be learning about AI. I thought the point of AI was that people didn't have to know anything to talk to it. Not to cheat at writing exercises.
Writing exercises that children produce in school are immediately thrown into the trash after being graded and reviewed. The product is supposed to be better educated children, not better written papers.
The entirety of school should eventually be replaced with just this one class. AI is able to teach people anything they may want or need to know and it can design effective ways for people to study. Being able to use, interpret, and work together with AI is going to be one of the most important skills of the 21st century.
The other day I read this piece on how AI is already being used in schools, and it left quite an impression on me. https://archive.is/IW4B3
> The Chromebooks, which the students use in every class and for homework, came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: “Help me write.” If she is starting work on a slide-show presentation, the prompt is “Help me visualize.” She shoos away these interruptions, but they persist: “Help me edit.” “Beautify this slide.” The image generator is there, if she’d ever wish to pull the plug on her imagination. The Gemini chatbot is there, if she ever wants to talk to no one.
I'm not as anti-AI as the author of the piece, and I think that AI could have a role as a teaching aid. It's infinitely patient, and it can adapt to a student's needs better than a textbook. Still, I hate the idea of students being encouraged to offload their cognitive work entirely onto an online service rather than think for themselves. The point of making fifth graders write essays, make art, design presentations, etc. isn't the end product; it's that they come away with the experience of having done the assignment. I would rather see students taught how to think creatively, analyze a piece of writing, coherently explain an opinion, or draw a picture on their own, instead of giving that up in exchange for the nebulous skill of being "AI native" (aka being able to ask a computer to produce work for you).