It seems like a good path forward is to replicate the idea of "once you can do it yourself, feel free to use it going forward" (knowing how the various calculator operations work before you let the calculator do them for you).
I'm curious if we instead gave students an AI tool, but one that would intentionally throw in wrong things that the student had to catch. Instead of the student using LLMs, they would have one paid for by the school.
This is more brainstorming than a well thought-out idea, but I generally think "opposing AI" is doomed to fail. If we follow a Montessori approach, kids are naturally inclined to want to learn things. If students are trying to lie or cheat, we've already failed them by trading away their natural curiosity for something else.
I agree. Schools and universities need to adapt; just like calculators, these tools aren't going away. Let students leverage AI as a tool and come out of uni more capable than we did.
AI _does_ currently throw in the occasional wrong thing. Sometimes a lot of them. A student's job needs to include verifying and fact-checking the information the AI is telling them.
The student's job becomes asking the right questions and verifying the results.